Garch Models


https://rdrr.io/cran/prophet/man/rmse.html
https://rpubs.com/yevonnael/garch-models-demo

Yevonnael Andrew
8/27/2020

library(tidyquant)

Computing returns
For managing financial risk, we first need to measure risk by analyzing the return series. In this exercise, you are given a daily stock price series (the code below downloads the ticker "KO", although the object is named GOOG) and you need to plot the daily returns. You will see that large (positive or negative) returns tend to be followed by large returns, of either sign, and small returns tend to be followed by small returns. The periods of sustained low volatility and sustained high volatility are called volatility clusters.

# Fetch stock prices (note: the ticker is "KO", although the object is named GOOG)
GOOG <- getSymbols(Symbols = "KO", from = "2000-01-01", to = "2018-01-01",
                   src = "yahoo", adjust = TRUE, auto.assign = FALSE)

## 'getSymbols' currently uses auto.assign=TRUE by default, but will


## use auto.assign=FALSE in 0.5-0. You will still be able to use
## 'loadSymbols' to automatically load data. getOption("getSymbols.env")
## and getOption("getSymbols.auto.assign") will still be checked for
## alternate defaults.
##
## This message is shown once per session and may be disabled by setting
## options("getSymbols.warning4.0"=FALSE). See ?getSymbols for details.
# Take the adjusted price only
GOOG <- Ad(GOOG)

# Plot daily stock price


plot(GOOG)

# Compute daily returns


GOOG_ret <- CalculateReturns(GOOG) %>% na.omit()
# Plot daily returns
plot(GOOG_ret)

Standard deviation on subsamples

In the financial crisis of 2008-2009, volatility was substantially higher than average. Let’s get our hands dirty analyzing the volatility of the daily returns. You will see that the standard deviation over the complete sample can be substantially different from the standard deviation on subsamples. Recall that standard deviations of daily returns are daily standard deviations; they are annualized by multiplying them by the square root of the number of trading days per year (the square-root-of-time rule).
The daily returns GOOG_ret you calculated in the previous exercise are available in your workspace.

# Compute the annualized volatility for the complete sample


sqrt(252) * sd(GOOG_ret)

## [1] 0.2087576
# Compute the annualized standard deviation for the year 2009
sqrt(252) * sd(GOOG_ret["2009"])

## [1] 0.228392
# Compute the annualized standard deviation for the year 2017
sqrt(252) * sd(GOOG_ret["2017"])

## [1] 0.09050465

Roll, roll, roll
You can visualize the time variation in volatility by using the function chart.RollingPerformance() in the package PerformanceAnalytics. An important tuning parameter is the choice of the window length: the shorter the window, the more responsive the rolling volatility estimate is to recent returns; the longer the window, the smoother it will be. The function sd.annualized lets you compute annualized volatility under the assumption that the number of trading days in a year equals the number specified in the scale argument.
In this exercise you complete the code to compute rolling estimates of annualized volatility for the daily returns in GOOG_ret.

# Load the package PerformanceAnalytics


library(PerformanceAnalytics)

# Showing two plots on the same figure


par(mfrow=c(2,1))

# Compute the rolling 1 month estimate of annualized volatility


chart.RollingPerformance(R = GOOG_ret, width = 22,
                         FUN = "sd.annualized", scale = 252,
                         main = "One month rolling volatility")

# Compute the rolling 3 months estimate of annualized volatility


chart.RollingPerformance(R = GOOG_ret, width = 66,
                         FUN = "sd.annualized", scale = 252,
                         main = "Three months rolling volatility")

The GARCH equation for volatility

We can use GARCH models to predict the volatility of future returns. The input is a time series of returns.
Prediction error. Under the GARCH model, the variance is driven by the square of the prediction errors e = R − μ. In order to calculate a GARCH variance, you thus first need to compute the prediction errors. For daily returns, it is common practice to set μ equal to the sample average.
You are going to implement this and then verify that there is a large positive autocorrelation in the absolute value of the prediction errors. Positive autocorrelation reflects the presence of volatility clusters: when volatility is above average, it stays above average for some time; when volatility is low, it stays low for some time.
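For reference, the GARCH(1,1) variance equation implied by the recursion coded below (the notation is added here; the notes only describe it in words) is

$$ \sigma_t^2 = \omega + \alpha\, e_{t-1}^2 + \beta\, \sigma_{t-1}^2, \qquad e_t = R_t - \mu, $$

where the hand-coded recursion below uses the illustrative values ω = 0.001, α = 0.1 and β = 0.8.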

# Compute the mean daily return


m <- mean(GOOG_ret)

# Define the series of prediction errors


e <- GOOG_ret - m

# Plot the absolute value of the prediction errors


par(mfrow = c(2,1),mar = c(3, 2, 2, 2))
plot(abs(e))
# Plot the acf of the absolute prediction errors
acf(abs(e))

Note the waves in the absolute prediction errors in the top plot. They correspond to
the presence of high and low volatility clusters. In the bottom plot, you can see the
large positive autocorrelations in the absolute prediction errors. Almost all of them are
above 0.15.

e2 <- e^2
n  <- length(GOOG_ret)
predvar <- rep(NA, n)
# Compute the predicted variances with the GARCH(1,1) recursion
# (omega = 0.001, alpha = 0.1, beta = 0.8)
predvar[1] <- var(GOOG_ret)
for (t in 2:n) {
  predvar[t] <- 0.001 + 0.1 * e2[t-1] + 0.8 * predvar[t-1]
}

# Create annualized predicted volatility


ann_predvol <- xts(sqrt(252) * sqrt(predvar), order.by = time(GOOG_ret))

# Plot the annual predicted volatility in 2008 and 2009


plot(ann_predvol["2008::2009"], main = "Ann. GOOG vol in 2008-2009")

The large spike in volatility in mid-September 2008 is due to the collapse of Lehman Brothers on September 15, 2008.

The rugarch package

Three steps:
- ugarchspec(): specify which GARCH model you want to use
- ugarchfit(): estimate the GARCH model on your time series of returns R_1, ..., R_T
- ugarchforecast(): use the estimated GARCH model to make volatility predictions for R_{T+1}

Application to tactical asset allocation: a portfolio that invests a weight w in a risky asset (with volatility sigma(t)) and keeps 1-w in a risk-free bank deposit has volatility equal to w*sigma(t). How should w be set? One approach is volatility targeting: choose w such that the predicted annualized portfolio volatility equals a target level, say 5%; then w = 0.05/sigma(t), as in the short numeric sketch below. Since GARCH volatilities change over time, the optimal weight changes as well.
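A minimal numeric sketch of the volatility-targeting rule (the 20% predicted volatility is an illustrative assumption, not an estimate from the data):

# Hypothetical predicted annualized volatility of the risky asset
sigma_t <- 0.20
# Weight that targets 5% annualized portfolio volatility
w <- 0.05 / sigma_t
w  # 0.25: invest 25% in the risky asset and 75% in the risk-free deposit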
Specify and taste the GARCH model flavors
In the next chapters, you will see that GARCH models come in many flavors. You thus need to start off by specifying the mean model, the variance model and the error distribution that you want to use. The best model to use is application-specific, so a realistic GARCH analysis involves specifying, estimating and testing various GARCH models.
In R, this is simple thanks to the rugarch package of Alexios Ghalanos. You will apply it to analyze the daily returns in GOOG_ret.
library(rugarch)

## Loading required package: parallel


##
## Attaching package: 'rugarch'
## The following object is masked from 'package:stats':
##
## sigma
# Specify a standard GARCH model with constant mean
garchspec <- ugarchspec(mean.model = list(armaOrder = c(0,0)),
variance.model = list(model = "sGARCH"),
distribution.model = "norm")

# Estimate the model


garchfit <- ugarchfit(data = GOOG_ret, spec = garchspec)

# Use the method sigma to retrieve the estimated volatilities


garchvol <- sigma(garchfit)

# Plot the volatility for 2017


plot(garchvol["2017"])

Well done. Notice the typical GARCH behavior: after a large unexpected return,
volatility spikes upwards and then decays away until there is another shock. Let’s move
on to forecasting!

Out-of-sample forecasting
The garchvol series is the series of predicted volatilities for each of the returns in the observed time series GOOG_ret.
For decision making, it is the volatility of the future (not yet observed) return that matters. You get it by applying the ugarchforecast() function to the output of ugarchfit(). These are called out-of-sample volatility forecasts, as they involve predictions of returns that have not been used when estimating the GARCH model.
This exercise uses the garchfit and garchvol objects that you created in the previous exercise. If you need to check which arguments a function takes, you can use ?name_of_function in the console to access the documentation.

# Compute unconditional volatility


sqrt(uncvariance(garchfit))

## [1] 0.01288
# Print the last 10 predicted volatilities in garchvol
tail(garchvol, 10)
## [,1]
## 2017-12-15 0.008501463
## 2017-12-18 0.008302661
## 2017-12-19 0.008234302
## 2017-12-20 0.008072244
## 2017-12-21 0.007873386
## 2017-12-22 0.008165352
## 2017-12-26 0.007953158
## 2017-12-27 0.007821857
## 2017-12-28 0.007640453
## 2017-12-29 0.007565508
# Forecast volatility 5 days ahead
garchforecast <- ugarchforecast(fitORspec = garchfit, n.ahead = 5)

# Extract the predicted volatilities and print them


print(sigma(garchforecast))

## 2017-12-29
## T+1 0.007415905
## T+2 0.007477487
## T+3 0.007538063
## T+4 0.007597661
## T+5 0.007656309

Well done. In the next exercise you learn how to use GARCH models for portfolio
allocation.
Volatility targeting in tactical asset allocation
GARCH volatility predictions are of direct practical use in portfolio allocation. According to the two-fund separation theorem of James Tobin, you should invest a proportion w of your wealth in a risky portfolio and the remainder in a risk-free asset, like a US Treasury bill.
When you target a portfolio with 5% annualized volatility, and the annualized volatility of the risky asset is σt, then you should invest a fraction 0.05/σt in the risky asset.

# Compute the annualized volatility


annualvol <- sqrt(252) * sigma(garchfit)

# Compute the 5% vol target weights


vt_weights <- 0.05 / annualvol

# Compare the annualized volatility to the portfolio weights in a plot


plot(merge(annualvol, vt_weights), multi.panel = TRUE)

Non-normality of standardized returns

It is not realistic to assume a normal distribution when analyzing stock returns with a GARCH model: the normal distribution is not consistent with stock market phenomena such as fat tails and skewness. To account for this, change the distribution.model argument of ugarchspec() from "norm" to "sstd".

# Estimated standardized returns
stdret <- residuals(garchfit, standardize = TRUE)

# Compare the histogram of the daily returns with the normal density
library(PerformanceAnalytics)
chart.Histogram(GOOG_ret, methods = c("add.normal", "add.density"),
                colorset = c("gray", "red", "blue"))

A realistic distribution thus needs to accommodate the presence of:
- fat tails: a higher probability of observing large (positive or negative) returns than under the normal distribution
- skewness: asymmetry of the return distribution

Compared to the normal distribution, the skewed Student t distribution has two extra parameters:
- the degrees-of-freedom parameter (shape): the lower it is, the fatter the tails
- the skewness parameter (skew): below 1 implies negative skewness, above 1 positive skewness
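A minimal sketch (not part of the original notes) comparing the standard normal density with a skewed Student t density using rugarch's ddist() helper; the skew and shape values are illustrative, not estimated from the data:

library(rugarch)
z <- seq(-5, 5, by = 0.01)
plot(z, dnorm(z), type = "l", col = "gray", ylab = "density")
lines(z, ddist(distribution = "sstd", y = z, mu = 0, sigma = 1,
               skew = 0.9, shape = 5), col = "red")
legend("topright", legend = c("normal", "skewed t"),
       col = c("gray", "red"), lty = 1)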

library(rugarch)
# Specify a standard GARCH model with skewed student t
garchspec <- ugarchspec(mean.model = list(armaOrder = c(0,0)),
variance.model = list(model = "sGARCH"),
distribution.model = "sstd")

# Estimate the model


garchfit <- ugarchfit(data = GOOG_ret, spec = garchspec)

# Use the method sigma to retrieve the estimated volatilities


garchvol <- sigma(garchfit)

# Plot the volatility for 2017


plot(garchvol["2017"])

coef(garchfit)

##           mu        omega       alpha1        beta1         skew        shape 
## 5.340569e-04 7.420376e-07 4.934102e-02 9.466871e-01 1.015420e+00 5.149017e+00
# Estimated standardized returns
stdret <- residuals(garchfit, standardize = TRUE)
library(PerformanceAnalytics)
chart.Histogram(stdret, methods = c("add.normal", "add.density"),
colorset=c("gray","red","blue"))

Leverage effect
The size and the sign of e_t matter for volatility prediction! Negative returns increase a firm's leverage and tend to be followed by higher volatility than positive returns of the same magnitude. We thus need two equations:
- one describing the GARCH variance following a negative unexpected return
- one describing the variance reaction after a positive surprise in returns

In case of a positive surprise in returns, we take the usual GARCH(1,1) response. In case of a negative surprise, the predicted variance should be higher than after a positive surprise of the same size, so we need a higher coefficient. These two responses together define the GJR GARCH model (see the equation below).
Just change model = "sGARCH" to model = "gjrGARCH".
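A sketch of the gjrGARCH(1,1) variance equation, written with rugarch's parameter names (the notation is added here, not quoted from the notes); the gamma1 term is active only after a negative prediction error:

$$ \sigma_t^2 = \omega + \left(\alpha_1 + \gamma_1\,\mathbf{1}\{e_{t-1} < 0\}\right) e_{t-1}^2 + \beta_1\, \sigma_{t-1}^2 $$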

library(rugarch)
# Specify a gjrGARCH model with skewed student t
garchspec <- ugarchspec(mean.model = list(armaOrder = c(0,0)),
variance.model = list(model = "gjrGARCH"),
distribution.model = "sstd")

# Estimate the model


garchfit <- ugarchfit(data = GOOG_ret, spec = garchspec)

# Use the method sigma to retrieve the estimated volatilities


garchvol <- sigma(garchfit)

# Plot the volatility for 2017


plot(garchvol["2017"])

coef(garchfit)

##           mu        omega       alpha1        beta1       gamma1         skew 
## 4.361528e-04 1.063630e-06 2.910480e-02 9.368581e-01 5.672383e-02 1.005481e+00 
##        shape 
## 5.324901e+00
# Estimated standardized returns
stdret <- residuals(garchfit, standardize = TRUE)
library(PerformanceAnalytics)
chart.Histogram(stdret, methods = c("add.normal", "add.density"),
colorset=c("gray","red","blue"))

Visualize volatility response using newsimpact()

out <- newsimpact(garchfit)


plot(out$zx, out$zy, xlab="prediction error", ylab="predicted variance")

Mean model: the GARCH-in-mean model quantifies the risk-reward trade-off by including the conditional variance in the equation for the expected return.
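With archm = TRUE and archpow = 2, the conditional mean includes the conditional variance; a sketch of the resulting mean equation (rugarch reports the risk-premium coefficient λ as archm):

$$ \mu_t = \mu + \lambda\, \sigma_t^2 $$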

# Specify an appropriate mean.model
garchspec <- ugarchspec(mean.model = list(armaOrder = c(0,0),
                                          archm = TRUE, archpow = 2),
                        variance.model = list(model = "gjrGARCH"),
                        distribution.model = "sstd")

# Estimate the model


garchfit <- ugarchfit(data = GOOG_ret, spec = garchspec)

# Use the method sigma to retrieve the estimated volatilities


garchvol <- sigma(garchfit)

# Plot the volatility for 2017


plot(garchvol["2017"])

coef(garchfit)

##           mu        archm        omega       alpha1        beta1       gamma1 
## 4.227656e-04 1.757936e-01 1.064220e-06 2.915116e-02 9.369175e-01 5.626739e-02 
##         skew        shape 
## 1.006670e+00 5.310319e+00

archm is the estimated risk-premium parameter λ.

plot(fitted(garchfit))

The GARCH-in-mean model uses the financial theory of a risk-reward trade-off to build a conditional mean model. Let’s now use statistical theory to make a mean model that exploits the correlation between today’s return and tomorrow’s return.
The most popular model is the AR(1) model: AR(1) stands for autoregressive model of order 1. It predicts the next return using the deviation of the current return from its long-term mean value μ.
If the AR coefficient is positive, a return above the mean value tends to be followed by another return above the mean value. Possible explanation: the market underreacts to news and hence there is momentum in returns.
If the AR coefficient is negative, then a higher-than-average return is followed by a lower-than-average return. Possible explanation: the market overreacts to news and hence there is reversal in returns.
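A sketch of the AR(1) mean equation in the deviation-from-the-mean form described above (notation added here for reference):

$$ R_t = \mu + \rho\,(R_{t-1} - \mu) + e_t, $$

where ρ is reported as ar1 by rugarch.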

# Specify an appropriate mean.model


garchspec <- ugarchspec(mean.model = list(armaOrder = c(1,0)),
variance.model = list(model = "gjrGARCH"),
distribution.model = "sstd")

# Estimate the model


garchfit <- ugarchfit(data = GOOG_ret, spec = garchspec)

coef(garchfit)

##            mu           ar1         omega        alpha1         beta1 
##  4.332635e-04 -6.528526e-03  1.064851e-06  2.949195e-02  9.367644e-01 
##        gamma1          skew         shape 
##  5.606955e-02  1.005186e+00  5.307971e+00

Here the ar1 coefficient is negative, which hints at overreaction and thus a reversal on the next day.
Other popular models for the conditional mean are the MA(1) and ARMA(1,1) models.
That’s right! In contrast with an ARMA model for the mean, a GARCH-in-mean model does not exploit the correlation between today’s and tomorrow’s return. It exploits the relationship between the expected return and the variance of the return: the higher the risk in terms of variance, the higher the expected return on the investment should be.
Avoid unnecessary complexity
The parameters of a GARCH model are estimated by maximum likelihood. Because of sampling uncertainty, the estimated parameters inevitably carry some estimation error. If we know the true value of a parameter, it is therefore best to impose that value rather than estimate it.
Let’s do this for the daily EUR/USD returns, available in the console as the variable EURUSDret, for which an AR(1)-GARCH model with a skewed Student t distribution has already been estimated and made available as the ugarchfit object called flexgarchfit.
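The objects EURUSDret, flexgarchspec and flexgarchfit come from the course environment and are not created in these notes. A minimal sketch of how the flexible model could be specified and fitted, here using GOOG_ret as a stand-in for EURUSDret:

flexgarchspec <- ugarchspec(mean.model = list(armaOrder = c(1, 0)),
                            variance.model = list(model = "sGARCH"),
                            distribution.model = "sstd")
flexgarchfit <- ugarchfit(data = GOOG_ret, spec = flexgarchspec)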
# Print the flexible GARCH parameters
coef(flexgarchfit)

# Restrict the flexible GARCH model by imposing a fixed ar1 and skew parameter
rflexgarchspec <- flexgarchspec
setfixed(rflexgarchspec) <- list(ar1 = 0, skew = 1)

# Estimate the restricted GARCH model


rflexgarchfit <- ugarchfit(data = EURUSDret, spec = rflexgarchspec)

# Compare the volatility of the unrestricted and restricted GARCH models


plotvol <- plot(abs(EURUSDret), col = "grey")
plotvol <- addSeries(sigma(flexgarchfit), col = "black", lwd = 4, on=1 )
plotvol <- addSeries(sigma(rflexgarchfit), col = "red", on=1)
plotvol

Parameter bounds and impact on forecasts
Let’s take again the flexible GARCH model specification for which the estimated coefficients are printed in the console. Now assume that you believe that the GARCH parameter α1 should be between 0.05 and 0.2, while the β1 parameter should be between 0.8 and 0.95. You are asked to re-estimate the model with those bounds imposed and to see the effect on the volatility forecasts for the next ten days obtained using ugarchforecast().

# Define bflexgarchspec as the bound constrained version


bflexgarchspec <- flexgarchspec
setbounds(bflexgarchspec) <- list(alpha1 = c(0.05, 0.2), beta1 = c(0.8, 0.95))

# Estimate the bound constrained model


bflexgarchfit <- ugarchfit(data = EURUSDret, spec = bflexgarchspec)

# Inspect coefficients
coef(bflexgarchfit)

# Compare forecasts for the next ten days


cbind(sigma(ugarchforecast(flexgarchfit, n.ahead = 10)),
sigma(ugarchforecast(bflexgarchfit, n.ahead = 10)))

Variance targeting

Financial return volatility clusters through time, but it is also mean-reverting: periods of above-average volatility are eventually followed by periods of below-average volatility, and vice versa. The long-run prediction is that:
- when volatility is high, it will decrease and revert to its long-run average;
- when volatility is low, it will increase and revert to its long-run average.
In the estimation of GARCH models we can exploit this mean-reversion behavior of volatility by means of variance targeting. We then estimate the GARCH parameters in such a way that the long-run volatility implied by the GARCH model equals the sample standard deviation.
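For reference (standard GARCH(1,1) algebra, not spelled out in the notes), the model-implied long-run variance and the variance-targeting restriction are

$$ \sigma^2_{\mathrm{LR}} = \frac{\omega}{1 - \alpha_1 - \beta_1} \quad\Longrightarrow\quad \omega = \hat{\sigma}^2\,(1 - \alpha_1 - \beta_1), $$

where σ̂² is the sample variance of the returns.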
Let’s do this for the EUR/USD returns.

# Complete the specification to do variance targeting


garchspec <- ugarchspec(mean.model = list(armaOrder = c(0,0)),
variance.model = list(model = "sGARCH",
variance.targeting = TRUE),
distribution.model = "std")

# Estimate the model


garchfit <- ugarchfit(data = EURUSDret, spec = garchspec)

# Print the GARCH model implied long run volatility


sqrt(uncvariance(garchfit))

# Verify that it equals the standard deviation (after rounding)


all.equal(sqrt(uncvariance(garchfit)), sd(EURUSDret), tol = 1e-4)

Well done. Since volatility is mean reverting, it is wise to use this empirical fact when
estimating the GARCH model. As you’ve probably realized by now, there is a lot of skill
involved in finding the right model. You sharpen those skills in the next chapter.

Statistical significance
Are the variables in your GARCH model relevant? Can you simplify the model? Equivalently: are some of the parameters zero?
- If the ar1 parameter is zero, you can use a constant mean model.
- If the gamma1 parameter is zero, there is no leverage effect and you can use a standard GARCH model instead of the GJR model.
It seems simple, but we do not know the true parameter values; they all have to be estimated, so we test whether the estimates are significantly different from zero.

# Specify an appropriate mean.model


garchspec <- ugarchspec(mean.model = list(armaOrder = c(1,0)),
variance.model = list(model = "gjrGARCH"),
distribution.model = "sstd")

# Estimate the model


garchfit <- ugarchfit(data = GOOG_ret, spec = garchspec)

round(garchfit@fit$matcoef, 6)

##            Estimate  Std. Error    t value  Pr(>|t|)
## mu         0.000433    0.000136   3.195030  0.001398
## ar1       -0.006529    0.014611  -0.446814  0.655009
## omega      0.000001    0.000000   3.092377  0.001986
## alpha1     0.029492    0.004696   6.279831  0.000000
## beta1      0.936764    0.005014 186.812603  0.000000
## gamma1     0.056070    0.011499   4.875930  0.000001
## skew       1.005186    0.020473  49.098137  0.000000
## shape      5.307971    0.397403  13.356652  0.000000
Since ar1 is statistically insignificant, we can move to a simpler model with a constant mean.
Goodness of fit
Do the GARCH predictions fit well with the observed returns?

# Goodness of fit for the mean prediction


e <- residuals(garchfit)
mean(e^2)

## [1] 0.0001728729
# Goodness of fit for the variance prediction
e <- residuals(garchfit)
d <- e^2 - sigma(garchfit)^2
mean(d^2)

## [1] 2.974194e-07
# Compute the likelihood
likelihood(garchfit)

## [1] 14248.54

The likelihood should not be interpreted by itself, but compared with the likelihood obtained using other models, such as a GJR model.

# Results information criteria


infocriteria(garchfit)

##
## Akaike -6.291382
## Bayes -6.280040
## Shibata -6.291388
## Hannan-Quinn -6.287386

That’s right! By choosing the model with the highest likelihood, you would end up with the most complex model, which is not parsimonious and has a high risk of overfitting; the information criteria above penalize that complexity.
Diagnosing absolute standardized returns
- Check 1: mean and standard deviation of the standardized returns. The sample mean should be close to 0 and the sample standard deviation close to 1.
- Check 2: time series plot of the standardized returns. It should show constant variability.
- Check 3: no predictability in the absolute standardized returns. Verify that there is no correlation between the past absolute standardized return and the current absolute standardized return. Why? The magnitude of the absolute standardized returns should be constant, so there should be no autocorrelation in the absolute standardized returns.
- Check 4: Ljung-Box test.

acf(abs(stdret), 22)
# We want it to have p-value > 0.05
Box.test(abs(stdret), 22, type = "Ljung-Box")

##
## Box-Ljung test
##
## data: abs(stdret)
## X-squared = 29.575, df = 22, p-value = 0.1292

Backtesting using ugarchroll

Solution to avoid look-ahead bias: rolling estimation. Either
- program a for loop, or
- use the built-in function ugarchroll().
You can reduce the computational cost by re-estimating the model only every K observations.

tgarchspec <- ugarchspec(mean.model = list(armaOrder = c(1,0)),
                         variance.model = list(model = "sGARCH"),
                         distribution.model = "std")

garchroll <- ugarchroll(tgarchspec, data = GOOG_ret, n.start = 2500,
                        refit.window = "moving", refit.every = 500)

preds <- as.data.frame(garchroll)

head(preds)

                     Mu       Sigma Skew    Shape Shape(GIG) Realized
2009-12-11 0.0004243413 0.008781712    0 5.706398          0     0.…
2009-12-14 0.0004197187 0.008805602    0 5.706398          0    -0.…
2009-12-15 0.0004076571 0.008605275    0 5.706398          0     0.…
2009-12-16 0.0004094526 0.008403752    0 5.706398          0    -0.…
2009-12-17 0.0003962788 0.008615900    0 5.706398          0    -0.…
2009-12-18 0.0003799956 0.010234061    0 5.706398          0    -0.…

garchvol <- xts(preds$Sigma, order.by = as.Date(rownames(preds)))
plot(garchvol)

Evaluate the accuracy of preds$Mu and preds$Sigma by comparing them with preds$Realized.

# Prediction error for the mean


e <- preds$Realized - preds$Mu

mean(e^2)

## [1] 8.333045e-05
# Prediction error for the mean
e <- preds$Realized - preds$Mu

# Prediction error for the variance


d <- e^2 - preds$Sigma^2

mean(d^2)

## [1] 3.764772e-08

Value-at-risk
How much would you lose in the best of the 5% worst cases? A popular measure of downside risk is the 5% value-at-risk (VaR): the 5% quantile of the return distribution, i.e. the best return among the 5% worst scenarios.
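Under the GARCH model, the predicted α-quantile combines the predicted mean and volatility with the quantile of the standardized innovation distribution; a sketch of the standard decomposition (notation added here for reference):

$$ \widehat{\mathrm{VaR}}^{\,\alpha}_{t} = \hat{\mu}_t + \hat{\sigma}_t\, q_\alpha, $$

where q_α is the α-quantile of the fitted skewed Student t standardized distribution.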
The workflow to obtain predicted 5% quantiles from ugarchroll is:

garchspec <- ugarchspec(mean.model = list(armaOrder = c(1,0)),
                        variance.model = list(model = "gjrGARCH"),
                        distribution.model = "sstd")

garchroll <- ugarchroll(garchspec, data = GOOG_ret, n.start = 2500,
                        refit.window = "moving", refit.every = 100)

garchVaR <- quantile(garchroll, probs = 0.05)

actual <- xts(as.data.frame(garchroll)$Realized, time(garchVaR))


VaRplot(alpha = 0.05, actual = actual, VaR = garchVaR)

# Calculation of coverage for S&P 500 returns and 5% probability level


mean(actual < garchVaR)

## [1] 0.04982733

VaR coverage and model validation
Interpretation of the coverage for a VaR at loss probability α (e.g. 5%):
- A valid prediction model has a coverage close to the probability level α used.
- If coverage ≫ α: too many exceedances; the predicted quantile should be more negative. The risk of losing money has been underestimated.
- If coverage ≪ α: too few exceedances; the predicted quantile was too negative. The risk of losing money has been overestimated.

For production and simulation

Use the validated GARCH model in production:
- Use ugarchfilter() for analyzing the recent dynamics in the mean and volatility.
- Use ugarchforecast() applied to a ugarchspec object (instead of a ugarchfit object) for making predictions about the future mean and volatility.
Step 1: Define the final model specification

# specify AR(1)-GJR GARCH model with skewed student t distribution


garchspec <- ugarchspec(mean.model = list(armaOrder = c(1,0)),
variance.model = list(model = "gjrGARCH"),
distribution.model = "sstd")
# estimate the model
garchfit <- ugarchfit(data = GOOG_ret, spec = garchspec)

progarchspec <- garchspec


setfixed(progarchspec) <- as.list(coef(garchfit))

Step 2: Analysis of past mean and volatility dynamics

garchfilter <- ugarchfilter(data = GOOG_ret, spec = progarchspec)


plot(sigma(garchfilter))

Step 3: Make predictions about future returns

# Make the predictions for the mean and vol for the next ten days
garchforecast <- ugarchforecast(data = GOOG_ret,
fitORspec = progarchspec,
n.ahead = 10)

cbind(fitted(garchforecast), sigma(garchforecast))

## 2017-12-29 2017-12-29
## T+1 0.0004132459 0.007659785
## T+2 0.0004333941 0.007707519
## T+3 0.0004332626 0.007754692
## T+4 0.0004332635 0.007801316
## T+5 0.0004332635 0.007847402
## T+6 0.0004332635 0.007892961
## T+7 0.0004332635 0.007938003
## T+8 0.0004332635 0.007982539
## T+9 0.0004332635 0.008026579
## T+10 0.0004332635 0.008070131

You can also use the complete model spec to simulate artificial log-returns, defined as
the difference between the current log-price and the past log-price. The simulation is
useful to assess the randomness in future returns and their impact on future price
levels.
Step 1: Calibrate the simulation model

# Compute log returns and drop the leading NA
GOOG_log_ret <- diff(log(GOOG))[-1]

# Estimate the model and assign model params to the simulation model
garchspec <- ugarchspec(mean.model = list(armaOrder = c(1,0)),
variance.model = list(model = "gjrGARCH"),
distribution.model = "sstd")
# Estimate the model
garchfit <- ugarchfit(data = GOOG_log_ret["/2010-12"], spec = garchspec)

# Set that estimated model as the model to be used in the simulation


simgarchspec <- garchspec
setfixed(simgarchspec) <- as.list(coef(garchfit))

Step 2: Run the simulation with ugarchpath(). It requires you to choose:
- spec: a completely specified GARCH model
- m.sim: the number of simulated return series you want
- n.sim: the number of observations in each simulated series (e.g. 252)
- rseed: any number to fix the seed used to generate the simulated series (needed for reproducibility)

# Four series of ten years of 252 observations


simgarch <- ugarchpath(spec = simgarchspec, m.sim = 4,
n.sim = 10 * 252, rseed = 12345)

Step 3: Analysis of simulated returns

simret <- fitted(simgarch)


plot.zoo(simret)

plot.zoo(sigma(simgarch))

Analysis of simulated prices

simprices <- exp(apply(simret, 2, "cumsum"))
matplot(simprices, type = "l", lwd = 3)

That’s right! Stock returns tend to have fat tails and an asymmetric distribution. The normal distribution is not a realistic assumption for stock returns.
In a corporate environment, there is often a distinction between the model engineering stage and the stage of using the model in production. In production, the model may not be re-estimated at each step; you then use the model with fixed coefficients, while feeding in the new data on each prediction day. The function ugarchfilter() is designed for this task.
In this exercise you use a model fitted on the daily returns up to December 2006 to make a prediction of the future volatility in a turbulent period (September 2008) and a stable period (September 2017). The model has already been specified and is available as garchspec in the R console.

# Estimate the model


garchfit <- ugarchfit(data = GOOG_ret["/2006-12"], spec = garchspec)

# Fix the parameters


progarchspec <- garchspec
setfixed(progarchspec) <- as.list(coef(garchfit))

# Use ugarchfilter to obtain the estimated volatility for the complete period
garchfilter <- ugarchfilter(data = GOOG_ret, spec = progarchspec)
plot(sigma(garchfilter))

# Compare the 252-day-ahead forecasts made at the end of September 2008 and September 2017
garchforecast2008 <- ugarchforecast(data = GOOG_ret["/2008-09"],
fitORspec = progarchspec, n.ahead = 252)
garchforecast2017 <- ugarchforecast(data = GOOG_ret["/2017-09"],
fitORspec = progarchspec, n.ahead = 252)
par(mfrow = c(2,1), mar = c(3,2,3,2))
plot(sigma(garchforecast2008), main = "/2008-09", type = "l")
plot(sigma(garchforecast2017), main = "/2017-09", type = "l")

Note that in the long run the volatility is predicted to return to its average level. This explains why the predicted volatility at T+252 is similar in September 2008 and September 2017.
Model risk
Sources of model risk:
- modeling choices
- starting values in the optimization
- outliers in the return series
Solution: protect yourself through a robust approach:
- model averaging: average the predictions of multiple models
- try several starting values and choose the one that leads to the highest likelihood
If you cannot choose which model to use, you could estimate them all.

variance.models <- c("sGARCH", "gjrGARCH")
distribution.models <- c("norm", "std", "sstd")
c <- 1
for (variance.model in variance.models) {
  for (distribution.model in distribution.models) {
    garchspec <- ugarchspec(mean.model = list(armaOrder = c(0, 0)),
                            variance.model = list(model = variance.model),
                            distribution.model = distribution.model)
    garchfit <- ugarchfit(data = GOOG_ret, spec = garchspec)
    if (c == 1) {
      msigma <- sigma(garchfit)
    } else {
      msigma <- merge(msigma, sigma(garchfit))
    }
    c <- c + 1
  }
}

plot(msigma)

The average vol prediction

avesigma <- xts(rowMeans(msigma), order.by = time(msigma))


plot(avesigma)

Cleaning the data
Avoid letting outliers distort the volatility predictions. How? Through winsorization: reduce the magnitude of extreme returns to an acceptable level using the function Return.clean() in the package PerformanceAnalytics with method = "boudt":

# Clean the return series


library(PerformanceAnalytics)
library(robustbase)
cl_GOOG_ret <- Return.clean(GOOG_ret, method = "boudt")

# Plot them on top of each other


plotret <- plot(GOOG_ret, col = "red")
plotret <- addSeries(cl_GOOG_ret, col = "blue", on = 1)
plotret
garchspec <- ugarchspec(mean.model = list(armaOrder = c(1,0)),
variance.model = list(model = "gjrGARCH"),
distribution.model = "sstd")

garchfit <- ugarchfit(data = GOOG_ret, spec = garchspec)

clgarchfit <- ugarchfit(data = cl_GOOG_ret, spec = garchspec)

plotvol <- plot(abs(GOOG_ret), col = "gray")


plotvol <- addSeries(sigma(garchfit), col = "red", on = 1)
plotvol <- addSeries(sigma(clgarchfit), col = "blue", on = 1)
plotvol

Be a robustnik: it is better to be roughly right than exactly wrong.

That’s indeed wrong: if your returns are affected by outliers, the maximum likelihood estimation of the various models will also be affected. Garbage in, garbage out. When all models become unreliable due to the outliers, model averaging is not a solution. Instead, a robust approach should be used that either cleans the data directly or uses robust estimation methods.
GARCH covariance
GARCH volatility leads to time-varying variability of the returns; the same idea extends to a time-varying covariance between two return series. GARCH covariance estimation proceeds in four steps.
Step 1: Use ugarchfit() to estimate the GARCH model for each return series.
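The return series msftret and wmtret are not created earlier in these notes; a minimal sketch of how they could be obtained with the same workflow used for GOOG above (tickers and date range are illustrative assumptions):

MSFT <- getSymbols("MSFT", from = "2000-01-01", to = "2018-01-01",
                   src = "yahoo", adjust = TRUE, auto.assign = FALSE)
WMT  <- getSymbols("WMT",  from = "2000-01-01", to = "2018-01-01",
                   src = "yahoo", adjust = TRUE, auto.assign = FALSE)
msftret <- CalculateReturns(Ad(MSFT)) %>% na.omit()
wmtret  <- CalculateReturns(Ad(WMT))  %>% na.omit()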

msftgarchfit <- ugarchfit(data = msftret, spec = garchspec)


wmtgarchfit <- ugarchfit(data = wmtret, spec = garchspec)

Step 2: Use residuals() to compute the standardized returns.

stdmsftret <- residuals(msftgarchfit, standardize = TRUE)


stdwmtret <- residuals(wmtgarchfit, standardize = TRUE)

Step 3: Use cor() to estimate ρ as the sample correlation of the standardized returns.

msftwmtcor <- as.numeric(cor(stdmsftret, stdwmtret))


msftwmtcor

Step 4: Compute the GARCH covariance by multiplying the estimated correlation and
volatilities

msftwmtcov <- msftwmtcor * sigma(msftgarchfit) * sigma(wmtgarchfit)

Minimum variance portfolio weights
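The msftweight line below implements the standard two-asset minimum-variance weight; the formula is stated here for reference (notation added, not from the original notes):

$$ w_{\mathrm{MSFT},t} = \frac{\sigma^2_{\mathrm{WMT},t} - \sigma_{\mathrm{MSFT,WMT},t}}{\sigma^2_{\mathrm{MSFT},t} + \sigma^2_{\mathrm{WMT},t} - 2\,\sigma_{\mathrm{MSFT,WMT},t}} $$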

msftvar <- sigma(msftgarchfit)^2


wmtvar <- sigma(wmtgarchfit)^2
msftwmtcov <- msftwmtcor * sigma(msftgarchfit) * sigma(wmtgarchfit)
msftweight <- (wmtvar - msftwmtcov) / (msftvar + wmtvar - 2 * msftwmtcov)

The daily beta of MSFT
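The objects stdsp500ret and sp500garchfit are assumed to come from the same GARCH workflow applied to S&P 500 returns. The beta computed below is the conditional covariance with the market divided by the conditional market variance (formula added for reference):

$$ \beta_{\mathrm{MSFT},t} = \frac{\sigma_{\mathrm{MSFT,SP500},t}}{\sigma^2_{\mathrm{SP500},t}} $$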

# Compute the covariance between MSFT and S&P500 returns


msftsp500cor <- as.numeric(cor(stdmsftret, stdsp500ret))
msftsp500cov <- msftsp500cor * sigma(msftgarchfit) * sigma(sp500garchfit)
# Compute the variance of the S&P 500 returns
sp500var <- sigma(sp500garchfit)^2
# Compute the beta
msftbeta <- msftsp500cov / sp500var

Discussion 3
https://rpubs.com/jwzhang52/403782
Jingwen Zhang

Import the data


I’m using the same data set as in the last discussion.

dis2data <- read.csv("~/Documents/bc/forecasting/dis2/dis2data.csv")


library(forecast)
## Warning: package 'forecast' was built under R version 3.4.4
mydata<-dis2data
myts<-ts(mydata[,2],frequency=12)
str(myts)
## Time-Series [1:77] from 1 to 7.33: 154 96 73 49 36 59 95 169 219 278 ...

Train and test set


train=ts(myts[1:60],frequency = 12)
test=ts(myts[61:77],frequency=12)

Build Arima model


myarima=auto.arima(train)
myarima
## Series: train
## ARIMA(1,1,0)(0,1,0)[12]
##
## Coefficients:
## ar1
## -0.5484
## s.e. 0.1202
##
## sigma^2 estimated as 2032: log likelihood=-245.36
## AIC=494.72 AICc=494.99 BIC=498.42

Since the assignment requires building two ARIMA models, I’m going to try another one: ARIMA(1,1,1)(0,1,0)[12].

myarima2=arima(train,order=c(1,1,1), seasonal=c(0,1,0))
myarima2
##
## Call:
## arima(x = train, order = c(1, 1, 1), seasonal = c(0, 1, 0))
##
## Coefficients:
## ar1 ma1
## -0.7461 0.2950
## s.e. 0.1674 0.2454
##
## sigma^2 estimated as 1939: log likelihood = -244.78, aic = 495.56

Just by comparing the AIC, myarima’s AIC (494.72) is lower than myarima2’s AIC (495.56), so the auto.arima model is the better ARIMA model.

Forecasting using auto.arima Model


f1=forecast(myarima,12)
acc1=accuracy(f1,test[1:12])
plot(f1)

Build first ETS Model


mye=ets(train,model="ZZZ")
mye
## ETS(M,N,M)
##
## Call:
## ets(y = train, model = "ZZZ")
##
## Smoothing parameters:
## alpha = 0.5675
## gamma = 1e-04
##
## Initial states:
## l = 173.9233
## s=1.2686 1.7926 1.7704 1.3514 1.0016 0.7249
## 0.4926 0.4843 0.4442 0.6365 0.8313 1.2015
##
## sigma: 0.213
##
## AIC AICc BIC
## 710.6951 721.6042 742.1102

Build second ETS Model


mye2=ets(train,model="MAM")
mye2
## ETS(M,A,M)
##
## Call:
## ets(y = train, model = "MAM")
##
## Smoothing parameters:
## alpha = 0.341
## beta = 1e-04
## gamma = 0.0013
##
## Initial states:
## l = 106.6754
## b = 5.5695
## s=1.3398 1.742 1.8086 1.3795 0.9632 0.7054
## 0.459 0.5098 0.4416 0.6285 0.7967 1.2261
##
## sigma: 0.1973
##
## AIC AICc BIC
## 707.6150 722.1865 743.2189

Since the MAM model’s AIC is lower than the ZZZ model’s AIC, I choose MAM and will continue to use this model for forecasting.

Forecasting using ETS“MAM” Model


f2=forecast(mye2,12) # forecast the next 12 months
acc2=accuracy(f2,test[1:12])
acc2
##                       ME     RMSE      MAE       MPE     MAPE      MASE     ACF1
## Training set -0.02513019 39.56000 30.66490 -3.638508 14.47064 0.4670183 0.372519
## Test set     28.85524028 56.07996 42.26627  5.826222 10.95152 0.6437041       NA
plot(f2)

Comparing the accuracy between the auto.arima and ETS “MAM” models

acc1
##                       ME     RMSE      MAE        MPE     MAPE      MASE        ACF1
## Training set    2.170204 39.47170 27.03341  -1.926934 12.08717 0.4117118  0.06643286
## Test set      -43.707248 87.17862 75.84093 -17.706821 21.85138 1.1550374          NA
acc2
##                       ME     RMSE      MAE       MPE     MAPE      MASE     ACF1
## Training set -0.02513019 39.56000 30.66490 -3.638508 14.47064 0.4670183 0.372519
## Test set     28.85524028 56.07996 42.26627  5.826222 10.95152 0.6437041       NA

Conclusion: Comparing AIC and BIC, the ARIMA model has lower values than the ETS model, so in-sample the ARIMA model looks better. When forecasting, however, comparing RMSE, MAE, ME, MAPE and MASE on the test set, the ETS model has lower values and therefore gives the more accurate forecasts.

Discussion 5: Comparing GARCH model with other models
https://rpubs.com/Judo/644354
Rodrigue
7/29/2020
For this week’s discussion, I picked the data set from Bank of America on yahoo
finance: https://finance.yahoo.com/quote/BAC/history?p=BAC
#Reading our dataset
#Summary of our dataset

str(BOA)
## 'data.frame': 60 obs. of 7 variables:
## $ Date : chr "2015-08-01" "2015-09-01" "2015-10-01" "2015-11-01"
...
## $ Open : num 17.9 15.9 15.5 16.9 17.5 ...
## $ High : num 18.1 16.5 17.4 18.1 17.9 ...
## $ Low : num 14.6 15.2 14.6 16.9 16.5 ...
## $ Close : num 16.4 15.6 16.8 17.4 16.8 ...
## $ Adj.Close: num 15 14.2 15.4 16 15.4 ...
## $ Volume : num 2.33e+09 1.78e+09 1.85e+09 1.44e+09 1.77e+09 ...
head(BOA)
## Date Open High Low Close Adj.Close Volume
## 1 2015-08-01 17.91 18.07 14.60 16.44 15.00615 2325559300
## 2 2015-09-01 15.95 16.48 15.25 15.58 14.22115 1777912500
## 3 2015-10-01 15.52 17.44 14.63 16.78 15.36581 1848594500
## 4 2015-11-01 16.90 18.09 16.87 17.43 15.96102 1439390500
## 5 2015-12-01 17.52 17.89 16.50 16.83 15.41159 1771496200
## 6 2016-01-01 16.45 16.59 12.94 14.14 12.98476 2648604300
tail(BOA)
## Date Open High Low Close Adj.Close Volume
## 55 2020-02-01 33.00 35.45 27.70 28.50 28.12317 1072992600
## 56 2020-03-01 28.35 29.75 17.95 21.23 20.94929 2826499000
## 57 2020-04-01 19.93 25.32 19.51 24.05 23.88343 1636882500
## 58 2020-05-01 23.38 26.17 20.10 24.12 23.95295 1447546400
## 59 2020-06-01 24.28 29.01 23.02 23.75 23.58551 1801607100
## 60 2020-07-01 24.03 24.87 22.39 24.36 24.36000 1193407900
#Creating our time series

BOA.ts = ts(BOA$Adj.Close, frequency = 12, start = c(2015,8))


autoplot(BOA.ts) +xlab("Year") + ylab("Adjusting Closing Price in $")

#Splitting our time series into training and testing

train.ts = window(BOA.ts, end=c(2020,6))


test.ts = window(BOA.ts, start=c(2015,8))

#Building ARIMA Model and ETS models

M1.ARIMA = auto.arima(train.ts)
summary(M1.ARIMA)
## Series: train.ts
## ARIMA(0,1,0)
##
## sigma^2 estimated as 4.329: log likelihood=-124.79
## AIC=251.59 AICc=251.66 BIC=253.65
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 0.1456674 2.06294 1.582365 0.3593268 7.018769 0.3124246
## ACF1
## Training set -0.009918847

#Building ETS model

M1.ETS = ets(train.ts, model = "ZZZ")


summary(M1.ETS)
## ETS(M,N,N)
##
## Call:
## ets(y = train.ts, model = "ZZZ")
##
## Smoothing parameters:
## alpha = 0.9458
##
## Initial states:
## l = 14.848
##
## sigma: 0.0909
##
## AIC AICc BIC
## 326.8948 327.3312 333.1274
##
## Training set error measures:
##                     ME     RMSE      MAE      MPE     MAPE      MASE       ACF1
## Training set 0.1569204 2.065125 1.587724 0.397414 7.046455 0.3134828 0.03991386

#Forecasting ARIMA model

fcast.M1.ARIMA = forecast(M1.ARIMA,h=12)
plot(fcast.M1.ARIMA)
lines(test.ts, col="red")
legend("topleft",lty=1,col=c("red","blue"),c("actual values","forecast"))

#Forecasting ETS Model

fcast.M1.ETS = forecast(M1.ETS,h=12)
plot(fcast.M1.ETS)
lines(test.ts, col="red")
legend("topleft",lty=1,col=c("red","blue"),c("actual values","forecast"))

#Building GARCH model – For the first time

library(tseries)  # garch() comes from the tseries package
GBOA = garch(BOA.ts)
##
## ***** ESTIMATION WITH ANALYTICAL GRADIENT *****
##
##
## I INITIAL X(I) D(I)
##
## 1 3.486087e+01 1.000e+00
## 2 5.000000e-02 1.000e+00
## 3 5.000000e-02 1.000e+00
##
## IT NF F RELDF PRELDF RELDX STPPAR D*STEP
NPRELDF
## 0 1 3.660e+02
## 1 2 2.146e+02 4.14e-01 5.48e+00 1.4e-02 2.0e+03 1.0e+00
5.50e+03
## 2 4 2.142e+02 1.79e-03 1.76e-03 7.2e-04 2.0e+00 5.0e-02
1.40e-01
## 3 5 2.136e+02 2.57e-03 2.23e-03 1.1e-03 2.2e+00 1.0e-01
2.56e-03
## 4 7 2.136e+02 6.60e-05 6.61e-05 5.9e-05 1.1e+01 5.3e-03
3.47e-04
## 5 9 2.136e+02 1.10e-04 1.10e-04 1.2e-04 2.1e+00 1.1e-02
2.66e-04
## 6 10 2.136e+02 1.31e-04 1.32e-04 2.4e-04 2.6e+00 2.1e-02
1.54e-04
## 7 12 2.136e+02 4.15e-06 4.15e-06 1.5e-05 1.1e+01 1.3e-03
2.15e-05
## 8 14 2.136e+02 6.94e-06 6.94e-06 3.1e-05 2.1e+00 2.6e-03
1.76e-05
## 9 16 2.136e+02 1.17e-06 1.17e-06 6.3e-06 1.8e+01 5.2e-04
1.07e-05
## 10 18 2.136e+02 2.13e-06 2.13e-06 1.3e-05 3.0e+00 1.0e-03
9.72e-06
## 11 20 2.136e+02 3.41e-06 3.41e-06 2.6e-05 2.0e+00 2.1e-03
7.64e-06
## 12 23 2.136e+02 5.70e-08 5.70e-08 5.4e-07 1.3e+02 4.2e-05
4.32e-06
## 13 25 2.136e+02 1.13e-07 1.13e-07 1.1e-06 1.8e+01 8.4e-05
4.47e-06
## 14 27 2.136e+02 2.23e-08 2.23e-08 2.2e-07 3.3e+02 1.7e-05
4.37e-06
## 15 29 2.136e+02 4.44e-08 4.44e-08 4.4e-07 4.2e+01 3.3e-05
4.36e-06
## 16 31 2.136e+02 8.81e-08 8.81e-08 8.8e-07 2.1e+01 6.7e-05
4.31e-06
## 17 34 2.136e+02 1.75e-09 1.75e-09 1.8e-08 4.0e+03 1.3e-06
4.23e-06
## 18 36 2.136e+02 3.50e-09 3.50e-09 3.5e-08 5.0e+02 2.7e-06
4.24e-06
## 19 38 2.136e+02 6.99e-09 6.99e-09 7.0e-08 2.5e+02 5.4e-06
4.24e-06
## 20 40 2.136e+02 1.40e-09 1.40e-09 1.4e-08 5.0e+03 1.1e-06
4.23e-06
## 21 42 2.136e+02 2.80e-10 2.80e-10 2.8e-09 2.5e+04 2.1e-07
4.23e-06
## 22 44 2.136e+02 5.59e-11 5.59e-11 5.6e-10 1.3e+05 4.3e-08
4.23e-06
## 23 46 2.136e+02 1.12e-11 1.12e-11 1.1e-10 6.3e+05 8.6e-09
4.23e-06
## 24 48 2.136e+02 2.24e-11 2.24e-11 2.2e-10 7.8e+04 1.7e-08
4.23e-06
## 25 50 2.136e+02 4.47e-12 4.47e-12 4.5e-11 1.6e+06 3.4e-09
4.23e-06
## 26 52 2.136e+02 8.94e-12 8.94e-12 9.0e-11 2.0e+05 6.8e-09
4.23e-06
## 27 54 2.136e+02 1.79e-11 1.79e-11 1.8e-10 9.8e+04 1.4e-08
4.23e-06
## 28 56 2.136e+02 3.58e-12 3.58e-12 3.6e-11 2.0e+06 2.7e-09
4.23e-06
## 29 58 2.136e+02 7.15e-13 7.15e-13 7.2e-12 9.8e+06 5.5e-10
4.23e-06
## 30 61 2.136e+02 1.46e-14 1.43e-14 1.4e-13 4.9e+08 1.1e-11
4.23e-06
## 31 63 2.136e+02 2.40e-15 2.86e-15 2.9e-14 2.4e+09 2.2e-12
4.23e-06
## 32 66 2.136e+02 2.28e-14 2.29e-14 2.3e-13 7.6e+07 1.8e-11
4.23e-06
## 33 69 2.136e+02 5.32e-16 4.58e-16 4.6e-15 1.5e+10 3.5e-13
4.23e-06
## 34 71 2.136e+02 9.32e-16 9.16e-16 9.2e-15 1.9e+09 7.0e-13
4.23e-06
## 35 72 2.136e+02 -4.68e+07 1.83e-15 1.8e-14 3.8e+09 1.4e-12
4.23e-06
##
## ***** FALSE CONVERGENCE *****
##
## FUNCTION 2.135736e+02 RELDX 1.840e-14
## FUNC. EVALS 72 GRAD. EVALS 35
## PRELDF 1.832e-15 NPRELDF 4.228e-06
##
## I FINAL X(I) D(I) G(I)
##
## 1 3.486220e+01 1.000e+00 1.401e-03
## 2 9.537147e-01 1.000e+00 1.127e-01
## 3 8.032921e-13 1.000e+00 2.551e-01
summary(GBOA)
##
## Call:
## garch(x = BOA.ts)
##
## Model:
## GARCH(1,1)
##
## Residuals:
## Min 1Q Median 3Q Max
## 0.7457 0.9471 0.9956 1.0618 1.2193
##
## Coefficient(s):
## Estimate Std. Error t value Pr(>|t|)
## a0 3.486e+01 1.026e+03 0.034 0.973
## a1 9.537e-01 6.388e+00 0.149 0.881
## b1 8.033e-13 5.417e+00 0.000 1.000
##
## Diagnostic Tests:
## Jarque Bera Test
##
## data: Residuals
## X-squared = 1.8201, df = 2, p-value = 0.4025
##
##
## Box-Ljung test
##
## data: Squared.Residuals
## X-squared = 0.18546, df = 1, p-value = 0.6667
plot(GBOA)

library(fGarch) #for GARCH model


## Loading required package: timeDate
## Loading required package: timeSeries
##
## Attaching package: 'timeSeries'
## The following object is masked from 'package:zoo':
##
## time<-
## Loading required package: fBasics
library(rugarch)
## Loading required package: parallel
##
## Attaching package: 'rugarch'
## The following object is masked from 'package:stats':
##
## sigma
library(quantmod)
## Loading required package: TTR
##
## Attaching package: 'TTR'
## The following object is masked from 'package:fBasics':
##
## volatility
## Version 0.4-0 included new data defaults. See ?getSymbols.
library(rmgarch)
##
## Attaching package: 'rmgarch'
## The following objects are masked from 'package:xts':
##
## first, last
## The following objects are masked from 'package:dplyr':
##
## first, last
M1.GARCH = garchFit(formula = ~garch(1,1), data = train.ts)
##
## Series Initialization:
## ARMA Model: arma
## Formula Mean: ~ arma(0, 0)
## GARCH Model: garch
## Formula Variance: ~ garch(1, 1)
## ARMA Order: 0 0
## Max ARMA Order: 0
## GARCH Order: 1 1
## Max GARCH Order: 1
## Maximum Order: 1
## Conditional Dist: norm
## h.start: 2
## llh.start: 1
## Length of Series: 59
## Recursion Init: mci
## Series Scale: 6.27541
##
## Parameter Initialization:
## Initial Parameters: $params
## Limits of Transformations: $U, $V
## Which Parameters are Fixed? $includes
## Parameter Matrix:
## U V params includes
## mu -37.02908711 37.02909 3.702909 TRUE
## omega 0.00000100 100.00000 0.100000 TRUE
## alpha1 0.00000001 1.00000 0.100000 TRUE
## gamma1 -0.99999999 1.00000 0.100000 FALSE
## beta1 0.00000001 1.00000 0.800000 TRUE
## delta 0.00000000 2.00000 2.000000 FALSE
## skew 0.10000000 10.00000 1.000000 FALSE
## shape 1.00000000 10.00000 4.000000 FALSE
## Index List of Parameters to be Optimized:
## mu omega alpha1 beta1
## 1 2 3 5
## Persistence: 0.9
##
##
## --- START OF TRACE ---
## Selected Algorithm: nlminb
##
## R coded nlminb Solver:
##
## 0: 79.743710: 3.70291 0.100000 0.100000 0.800000
## 1: 77.303392: 3.81192 0.0800889 0.0947001 0.777029
## 2: 72.256050: 4.09168 0.0594709 0.121489 0.756411
## 3: 69.771178: 4.18718 0.0265619 0.130810 0.739854
## 4: 69.022880: 4.29806 0.0384650 0.153382 0.741550
## 5: 68.243546: 4.29522 0.0277395 0.154416 0.737282
## 6: 68.021533: 4.29738 0.0226070 0.156210 0.735014
## 7: 67.840062: 4.31641 0.0147524 0.167994 0.727283
## 8: 67.780420: 4.36378 0.0234580 0.173487 0.723136
## 9: 67.350435: 4.31263 0.0210322 0.180042 0.716275
## 10: 67.146102: 4.32431 0.0164987 0.192082 0.705725
## 11: 66.847463: 4.32239 0.0227089 0.204733 0.696350
## 12: 66.615504: 4.31998 0.0192022 0.216613 0.684821
## 13: 66.278567: 4.32056 0.0254987 0.241816 0.663091
## 14: 65.987920: 4.31958 0.0198269 0.261223 0.635921
## 15: 65.730118: 4.32874 0.0297725 0.272677 0.605741
## 16: 65.556880: 4.31724 0.0265267 0.305272 0.597714
## 17: 65.509834: 4.33676 0.0250803 0.306047 0.594648
## 18: 65.506776: 4.33826 0.0293260 0.308206 0.590523
## 19: 65.473951: 4.33668 0.0264988 0.308433 0.589205
## 20: 65.383790: 4.34157 0.0253648 0.319690 0.566664
## 21: 65.227224: 4.32624 0.0359681 0.337160 0.520657
## 22: 64.948348: 4.35401 0.0400496 0.424073 0.463584
## 23: 64.777615: 4.34120 0.0437692 0.490271 0.383110
## 24: 64.735483: 4.34122 0.0490963 0.536541 0.328511
## 25: 64.717027: 4.33182 0.0539522 0.571903 0.290392
## 26: 64.711803: 4.32491 0.0566985 0.585517 0.276344
## 27: 64.710991: 4.32295 0.0578448 0.588564 0.272757
## 28: 64.710917: 4.32290 0.0580753 0.588092 0.272847
## 29: 64.710911: 4.32303 0.0580771 0.587537 0.273197
## 30: 64.710911: 4.32306 0.0580653 0.587303 0.273317
## 31: 64.710910: 4.32304 0.0580635 0.587242 0.273314
## 32: 64.710910: 4.32303 0.0580659 0.587254 0.273288
## 33: 64.710910: 4.32302 0.0580668 0.587261 0.273281
##
## Final Estimate of the Negative LLH:
## LLH: 173.0726 norm LLH: 2.933434
## mu omega alpha1 beta1
## 27.1287506 2.2867152 0.5872611 0.2732807
##
## R-optimhess Difference Approximated Hessian Matrix:
## mu omega alpha1 beta1
## mu -3.0948203 -0.5466344 0.8779323 2.159859
## omega -0.5466344 -1.2524038 -1.8395149 -7.040958
## alpha1 0.8779323 -1.8395149 -50.0471588 -53.502921
## beta1 2.1598592 -7.0409581 -53.5029209 -94.621066
## attr(,"time")
## Time difference of 0.005975008 secs
##
## --- END OF TRACE ---
##
##
## Time to Estimate Parameters:
## Time difference of 0.04784489 secs
## Warning: Using formula(x) is deprecated when x is a character vector of length > 1.
## Consider formula(paste(x, collapse = " ")) instead.
summary(M1.GARCH)
##
## Title:
## GARCH Modelling
##
## Call:
## garchFit(formula = ~garch(1, 1), data = train.ts)
##
## Mean and Variance Equation:
## data ~ garch(1, 1)
## <environment: 0x000000002d61f650>
## [data = train.ts]
##
## Conditional Distribution:
## norm
##
## Coefficient(s):
## mu omega alpha1 beta1
## 27.12875 2.28672 0.58726 0.27328
##
## Std. Errors:
## based on Hessian
##
## Error Analysis:
## Estimate Std. Error t value Pr(>|t|)
## mu 27.1288 0.7210 37.627 <2e-16 ***
## omega 2.2867 1.7835 1.282 0.1998
## alpha1 0.5873 0.3014 1.948 0.0514 .
## beta1 0.2733 0.3044 0.898 0.3693
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log Likelihood:
## -173.0726 normalized: -2.933434
##
## Description:
## Wed Jul 29 21:43:23 2020 by user: student
##
##
## Standardised Residuals Tests:
## Statistic p-Value
## Jarque-Bera Test R Chi^2 4.481175 0.106396
## Shapiro-Wilk Test R W 0.924492 0.001298827
## Ljung-Box Test R Q(10) 87.95505 1.365574e-14
## Ljung-Box Test R Q(15) 96.35712 6.37268e-14
## Ljung-Box Test R Q(20) 103.9441 2.463585e-13
## Ljung-Box Test R^2 Q(10) 6.05134 0.8109315
## Ljung-Box Test R^2 Q(15) 23.84875 0.06770697
## Ljung-Box Test R^2 Q(20) 26.08246 0.1631058
## LM Arch Test R TR^2 11.88778 0.4547341
##
## Information Criterion Statistics:
## AIC BIC SIC HQIC
## 6.002461 6.143311 5.994023 6.057443

#Comparing the GARCH model’s results with the other models’ results

acc.ARIMA = accuracy(fcast.M1.ARIMA, test.ts)


acc.ETS = accuracy(fcast.M1.ETS, test.ts)

#Accuracy of ARIMA model

acc.ARIMA
## ME RMSE MAE MPE MAPE MASE
## Training set 0.1456674 2.062940 1.582365 0.3593268 7.018769 0.3124246
## Test set 0.7744870 0.774487 0.774487 3.1793389 3.179339 0.1529160
## ACF1
## Training set -0.009918847
## Test set NA

#Accuracy of ETS model

acc.ETS
## ME RMSE MAE MPE MAPE MASE
## Training set 0.1569204 2.0651246 1.5877243 0.397414 7.046455 0.3134828
## Test set 0.7551898 0.7551898 0.7551898 3.100122 3.100122 0.1491059
## ACF1
## Training set 0.03991386
## Test set NA
#It appears that in terms of AIC and BIC the GARCH model does a way better job than the ETS or ARIMA model.
#Is it enough to say that the GARCH model is the best model? Probably not, because it would be nice to see how it does graphically and how it does in terms of accuracy.
#With the current info, I would still rather use an ARIMA or ETS model over a GARCH model.

#Appendix - trying to forecast GARCH model

#M1.GARCH = garchFit(formula = ~garch(1,1), data = train.ts, cond.dist = "QMLE")
#ug_spec = ugarchspec(mean.model = list(armaOrder = c(1,1)))
#if we want to specify a particular GARCH model, this is the code.
#ugfit = ugarchfit(spec = ug_spec, data = train.ts)
#ugfore = ugarchforecast(ugfit, n.ahead = 20)
#summary(ugfore)
#plot(ugfore)
