
The DCC package

Riccardo (Jack) Lucchetti, Giulio Palomba, Luca Pedini


version 0.1

Abstract
This package deals with the estimation of the Dynamic Conditional Correlation (DCC) model introduced by Engle (2002).
Notably, this model is generally estimated by a two-step approach: the first step consists of the estimation of a battery of univariate GARCH models in order to recover the conditional variance parameters; in the second step the conditional correlation likelihood is maximised with respect to the correlation parameters, conditionally on the estimated variance parameters.
This package mainly deals with Step 2: although one convenience function for Step 1 is provided, it cannot be considered a serious tool for “production” work. Conversely, the estimation of the Step-2 parameters is carried out by numerical maximum likelihood, with the option of employing advanced numerical techniques such as the analytical score and the spherical parametrisation.

1 The model
1.1 Notation
The dynamic conditional correlation (DCC) model by Engle (2002) is one of
the most popular multivariate GARCH models. It is suitable for analysing
and predicting the covolatility dynamics of asset returns given the information
available at time t − 1.
Denote an n-dimensional vector of observable series by yt and the information set at time t − 1 by I_{t−1}; in most cases, yt are market returns for a vector of assets. We consider the mean-centred returns εt = yt − E(yt | I_{t−1}) and we assume they follow some unspecified conditional density with E(εt | I_{t−1}) = 0_n and Var(εt | I_{t−1}) = Σt. The conditional covariance matrix can be split into

    Σt = Vt^{1/2} Rt Vt^{1/2},                                        (1)

where Vt = ⟨vt⟩ is the n × n diagonal matrix of conditional variances, whose diagonal is vt = [σ²_{1,t}, σ²_{2,t}, . . . , σ²_{n,t}]′, and Rt is a dynamic correlation matrix.
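For a single period, the split in Equation (1) is trivial to apply in hansl, since Vt is diagonal; the following is a sketch only (v denotes the column vector of conditional variances and R the correlation matrix for that period):

matrix s = sqrt(v)            # conditional standard deviations
matrix Sigma = (s*s') .* R    # equals V^{1/2} R V^{1/2} with V = diag(v)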
The conditional variances are usually assumed to be described by some sort
of GARCH-like model:
    σ²_{i,t} = h_i(ε_{i,t−1}, ε_{i,t−2}, . . . , σ²_{i,t−1}, σ²_{i,t−2}, . . . ; ψ_i),        (2)

where i = 1, 2, . . . , n. While in principle the parameters ψ_i should be estimated jointly with the other parameters described below, this step is usually omitted
and the εt and vt vectors are treated as observable quantities. In practice,
a univariate GARCH-like model is customarily run for each series in the yt
vector and the GARCH residuals and estimated variances are used as observable
counterparts of εt and vt , respectively.
The dynamic correlation matrix is parametrised as
    Rt = Q̃t^{−1/2} Qt Q̃t^{−1/2},

where Q̃t = diag(Qt) and

    Qt = Γ + A ⊙ (η_{t−1} η′_{t−1} − Γ) + B ⊙ (Q_{t−1} − Γ),          (3)

in which ηt = Vt^{−1/2} εt, A and B are symmetric parameter matrices and the
symbol ⊙ is used for the Hadamard (element-by-element) product. The Γ matrix is a correlation matrix that could generally be parametrised as a positive semi-definite symmetric matrix with ones on the diagonal. However, the n(n − 1)/2 free parameters are often constrained in such a way that Γ equals some prescribed matrix, usually obtained from the data. A typical choice among practitioners is the sample correlation matrix, and this idea goes under the name of “variance targeting” (see Engle and Mezrich, 1996).
From Equation (3) some restricted forms can be obtained as special cases by
suitable restrictions on the matrices A and B. These parameter matrices can
be reparametrised as follows:
Cholesky factorisation: the A and B matrices are expressed as
    A = C_A C_A′,        B = C_B C_B′,                                (4)

where C_A and C_B are lower triangular matrices, ensuring that A and B are both symmetric and positive semi-definite. Therefore, the total number of parameters to estimate in Equation (3) is 3n(n − 1)/2.
Rank-1 specification: The A and B matrices are expressed as

    A = aa′,        B = bb′,                                          (5)

where a and b are n-dimensional column vectors. This is a special case of the previous one, in which A and B are restricted to be rank-1 matrices. Here, the total number of parameters to estimate in Equation (3) is n(n − 1)/2 + 2n.
Scalar DCC: The A and B matrices are expressed as

    A = a ι_n ι_n′,        B = b ι_n ι_n′,                            (6)

where a and b are positive scalars and ι_n = [1 1 . . . 1]′. Note that this is a restricted version of the rank-1 specification, since a = √a ι_n and b = √b ι_n. In this framework, the total number of parameters to estimate in Equation (3) reduces to n(n − 1)/2 + 2 (a stand-alone sketch of the corresponding recursion is given below).
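To fix ideas, the law of motion in Equation (3) under the scalar specification (6) with variance targeting can be written as a few lines of hansl. The following is a stand-alone illustrative sketch, not the package's internal implementation; the function name and all details are made up for the example. It takes a T × n matrix of standardised residuals eta and the scalars a and b, and returns the sequence of conditional correlation matrices Rt.

function matrices scalar_dcc_filter (matrix eta, scalar a, scalar b)
    scalar T = rows(eta)
    matrix Gamma = mcorr(eta)        # variance targeting: sample correlation
    matrix Q = Gamma                 # initialise Q_1 at the unconditional value
    matrices R = array(T)
    loop t=1..T
        if t > 1
            matrix e = eta[t-1,]'    # eta_{t-1} as a column vector
            Q = Gamma + a * (e*e' - Gamma) + b * (Q - Gamma)
        endif
        matrix s = sqrt(diag(Q))
        R[t] = Q ./ (s*s')           # R_t = Qtilde^{-1/2} Q_t Qtilde^{-1/2}
    endloop
    return R
end function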
The objects of the second step of the DCC estimation are the parameters contained in Γ, A and B, which are crucial for predicting the law of motion of the conditional covariance matrix in Equation (1). Estimation is performed via Maximum Likelihood under a joint normality assumption (other densities are in the works; stay tuned). In order to improve the numerical maximisation, this package uses analytical gradients (see Caporin, Lucchetti and Palomba, 2020) and the spherical parametrisation for dynamic correlation matrices adopted in Lucchetti and Pedini (2023).

1.2 The workflow


The typical workflow involves running a battery of univariate models to recover the estimated residuals εt and conditional variances vt. This is customarily called “Step 1” in DCC modelling, “Step 2” being the estimation of the parameters in Equation (3). In principle, these univariate models could be chosen on an ad hoc basis, although in many cases practitioners opt for relatively standard ones, such as GARCH(1,1).
With the DCC package, Step 1 is left to the user, although a convenience function is provided to automate this step in the standard case (see Section 1.3). The series with the univariate residuals εt must be collected in a list (say, E) and the estimated time-varying variances vt in another list (say, H). After this, the routine is the one common to many gretl packages, that is:
1. create a bundle with the basic model info. This is done via the DCC_Setup() function;

2. feed the bundle into the function that estimates the parameters, DCC_Estimate(). The details of the optimisation process can be tweaked by supplying a second bundle containing the options;

3. upon successful estimation, the results can be printed using DCC_Printout() and/or retrieved from the model bundle via dedicated functions (e.g. DCC_GetCorrel()).
A detailed description of these functions is provided in Section 3.
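In compact form, and assuming the lists E and H have already been built in Step 1, the core of a typical session might look like the following sketch (the scalar model with targeting is used here purely as an illustration):

bundle Model = DCC_Setup(E, H, 3, 1)   # 3 = scalar DCC, with variance targeting
DCC_Estimate(&Model)                   # default optimisation options
DCC_Printout(Model)                    # print the results
list C = DCC_GetCorrel(Model)          # retrieve the conditional correlations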

1.3 Step 1 in practice


The object of Step 1 is to recover two lists of time series, one element for each series in yt: the residuals from the conditional mean, εt, and the conditional variances, vt. In most cases, this is accomplished by running a battery of univariate GARCH-type models. For example, a very simple setup may be

    y_{i,t} = μ_i + ε_{i,t}                                           (7)
    σ²_{i,t} = ω_i + α ε²_{i,t−1} + β σ²_{i,t−1}                      (8)
    ε_{i,t} ∼ N(0, σ²_{i,t}),                                         (9)

where i = 1, 2, . . . , n. In more realistic cases, however, each of the above expressions may have to be modified: in Equation (7) one may want to add lags or other explanatory variables; Equation (8) could be enriched to take into account asymmetry effects or other features of more elaborate univariate models, such as the GJR model or the APARCH model. Finally, the distributional assumption in (9) could be generalised to other fat-tailed, possibly asymmetric distributions.
These choices may be driven by a multitude of factors, including statistical considerations or computational issues. Therefore, it is very rare that, for a realistically sized problem, the most appropriate choice for Step 1 is a battery of identical models.
That said, this package provides a function called DCC_Step1(), mostly as a convenience that may be used in teaching to abstract from the complexity of the real world. It runs essentially a slight variation of the setup above, that is, an array of GARCH(1,1) models under the normality assumption, one for each of the series in yt (which should be contained in a list). The only difference is that it is possible to endow Equation (7) with VAR-like dynamics, that is
    y_{i,t} = μ_i + ∑_{j=1}^{p} β_{i,j}′ y_{t−j} + ε_{i,t}.

Section 2 provides an example.

1.4 Analytical vs numerical score


When estimating DCC models, the choice between analytical and numerical derivatives is not as obvious as in other statistical models: while the analytical score provides unambiguous advantages in terms of accuracy, it should be noted that the formulae provided in Caporin, Lucchetti and Palomba (2020) require heavy use of matrix algebra with possibly rather large matrices.
Evidence seems to suggest that the total CPU time for evaluating the analytical score increases (roughly) quadratically with n, the dimension of yt, and is relatively insensitive to the total number of parameters p. On the other hand, the CPU time of the numerical score increases linearly with p. Therefore, for a system where n is large (say, 30–40 series) and p is small (say, a scalar model with targeting), using the analytical score is almost certain to degrade performance.
Naturally, the relative numerical efficiency of analytical versus numerical derivatives also depends on the details of the hardware and software setup of the computer you are using. Therefore, it is advisable to experiment with both solutions, as in the sketch below.
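For instance, the two options can be timed on a given machine with gretl's stopwatch facility. The following is only a sketch, assuming a model bundle DModel has already been created via DCC_Setup() as in Section 2:

set stopwatch
DCC_Estimate(&DModel, _(analytical=1, verb=0))   # analytical score
printf "analytical score: %g seconds\n", $stopwatch
DCC_Estimate(&DModel, _(analytical=0, verb=0))   # numerical score
printf "numerical score: %g seconds\n", $stopwatch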
This aspect is being actively worked on and is likely to be optimised in future
versions.

2 An example
In the example of Listing 1 (script Indices.inp) we use the daily time series of the Nasdaq, DJ Eurostoxx 50, Nikkei and Bovespa market indices. The data range from January 5, 2000 to February 19, 2019 (T = 4990 observations).
Step 1 is accomplished by estimating four GARCH(1,1) models, in which one lag of all the returns is used as an explanatory variable in the mean equation. The estimated residuals and variances are then fed into the DCC_Setup() function; the two additional arguments modeltype and has_target are used for choosing the Cholesky specification (Equation (4)) and for indicating that the matrix Γ should not be estimated: the static correlation matrix R of the standardised residuals obtained after the first step is used instead.
In this case, the DCC_Estimate() function is called with no second argument, which means that the default optimisation options will be used: analytical derivatives, QML standard errors and medium verbosity. The full script is shown in Listing 1; running it produces the output below.
set verbose off # turn off unwanted output
include DCC.gfn # include the package

# --- DATA PREPARATION -----------------------------------------------

open MktIndices.gdt --frompkg=DCC # read the dataset


list Prices = Nasdaq Eurostoxx Nikkei Bovespa
list Returns = ldiff(Prices) # compute the returns

# --- PERFORM STEP 1 --------------------------------------------------

list E = null
list H = null
loop foreach i Returns
garch 1 1 ; $i const Returns(-1) --quiet
e_$i = $uhat
E += e_$i
h_$i = $h
H += h_$i
endloop

# --- MODEL SETUP AND STEP 2 ESTIMATION -------------------------------


scalar has_target = 1
modeltype = 1 # 0 = unrestricted, 1 = Cholesky, 2 = rank-1, 3 = scalar
bundle DModel = DCC_Setup(E, H, modeltype, has_target)

DCC_Estimate(&DModel) # estimate with default options


DCC_Printout(DModel) # print out the results

Listing 1: Example script

The coefficients shown in the output are those contained in vech(A) and vech(B).

DCC model: type = Cholesky, number of series = 4


sample range = [2000-01-07, 2019-02-19], size = 4988
Variance type: Robust

BFGS iterations: 88 (total loglik evaluations: 353) (analytical)

Target correlation matrix:


R (4 x 4)

e_ld_Nasdaq e_ld_Eurosto e_ld_Nikkei e_ld_Bovespa


e_ld_Nasdaq 1.0000 0.51913 0.14915 0.52667
e_ld_Eurostoxx 0.51913 1.0000 0.24805 0.43840
e_ld_Nikkei 0.14915 0.24805 1.0000 0.16426
e_ld_Bovespa 0.52667 0.43840 0.16426 1.0000

coefficient std. error z p-value


----------------------------------------------------------
A[1,1] 0.146956 0.0337750 4.351 1.36e-05 ***
A[1,2] 0.0827310 0.0208411 3.970 7.20e-05 ***
A[1,3] 0.0825758 0.0711338 1.161 0.2457
A[1,4] 0.125893 0.0230255 5.468 4.56e-08 ***
A[2,2] 0.0102926 0.0401987 0.2560 0.7979
A[2,3] 0.299721 0.0910543 3.292 0.0010 ***
A[2,4] -0.0105606 0.0450945 -0.2342 0.8148
A[3,3] -0.0139781 0.0748129 -0.1868 0.8518
A[3,4] 0.000980987 0.00566137 0.1733 0.8624
A[4,4] 1.81181e-06 1.14489e-06 1.583 0.1135

coefficient std. error z p-value


----------------------------------------------------------
B[1,1] 0.987493 0.00624906 158.0 0.0000 ***
B[1,2] 0.996008 0.00460660 216.2 0.0000 ***
B[1,3] -0.291469 0.488855 -0.5962 0.5510
B[1,4] 0.989238 0.00414176 238.8 0.0000 ***
B[2,2] -0.00674821 0.0270262 -0.2497 0.8028
B[2,3] -0.672457 0.319808 -2.103 0.0355 **
B[2,4] -0.00744240 0.0306065 -0.2432 0.8079
B[3,3] 0.263334 0.343690 0.7662 0.4436
B[3,4] 0.000447696 0.000703126 0.6367 0.5243
B[4,4] -1.00255e-05 1.31210e-05 -0.7641 0.4448

Total loglikelihood = 79319.3

The next example introduces two changes. First, Step 1 is performed by the dedicated function DCC_Step1(), which achieves exactly the same effect as above. Note that DCC_Step1() returns a bundle containing all the main Step 1 information: the matrices of residuals and conditional variances, for instance, can be easily retrieved under the keys E and H. These must then be converted to lists by using the mat2list() gretl function. Therefore, the lines
list E = null
list H = null
loop foreach i Returns
garch 1 1 ; $i const Returns(-1) --quiet
e_$i = $uhat
E += e_$i
h_$i = $h
H += h_$i
endloop

should be replaced by
scalar VAR_lags = 1
S1 = DCC_Step1(Returns, VAR_lags)
list E = mat2list(S1.E)
list H = mat2list(S1.H)

which will prompt the following output


Step1: univariate GARCH(1,1) models with 1 lags
-------------------------------------------------------------------------
Series name: loglik AIC BIC HQC

ld_Nasdaq: 14961.552 -29905.104 -29846.471 -29884.552


ld_Eurostoxx: 15109.213 -30200.425 -30141.792 -30179.873
ld_Nikkei: 15284.504 -30551.008 -30492.375 -30530.456
ld_Bovespa: 13618.048 -27218.095 -27159.462 -27197.543
-------------------------------------------------------------------------

Additionally, by setting the modeltype to 2, the Rank-1 specification is selected, and the estimated parameters are the vectors a and b in Equation (5).
DCC model: type = rank-1, number of series = 4
sample range = [2000-01-07, 2019-02-19], size = 4988
Variance type: Robust

BFGS iterations: 43 (total loglik evaluations: 201) (analytical)


Target correlation matrix:
R (4 x 4)

e_ld_Nasdaq e_ld_Eurosto e_ld_Nikkei e_ld_Bovespa


e_ld_Nasdaq 1.0000 0.51913 0.14915 0.52667
e_ld_Eurostoxx 0.51913 1.0000 0.24805 0.43840
e_ld_Nikkei 0.14915 0.24805 1.0000 0.16426
e_ld_Bovespa 0.52667 0.43840 0.16426 1.0000

coefficient std. error z p-value


-------------------------------------------------------
A[1] 0.147478 0.0355265 4.151 3.31e-05 ***
A[2] 0.0826100 0.0209743 3.939 8.19e-05 ***
A[3] 0.0383218 0.0601622 0.6370 0.5241
A[4] 0.126265 0.0227305 5.555 2.78e-08 ***

coefficient std. error z p-value


-------------------------------------------------------
B[1] 0.987386 0.00659721 149.7 0.0000 ***
B[2] 0.995937 0.00464541 214.4 0.0000 ***
B[3] -0.360945 0.209605 -1.722 0.0851 *
B[4] 0.989194 0.00404895 244.3 0.0000 ***

Total loglikelihood = 79317

In the final example, we introduce two more changes: by setting modeltype to 3, the scalar DCC is estimated, and the estimated parameters correspond to the scalars a and b in Equation (6). Furthermore, we pass an option bundle to the DCC_Estimate() function, so as to force the usage of numerical derivatives.
The line
DCC_Estimate(&DModel)

thus changes to
bundle optim_opts = defbundle("analytical", 0)
DCC_Estimate(&DModel, optim_opts)

or, more compactly, to


DCC_Estimate(&DModel, _(analytical=0))

The output follows:

DCC model: type = scalar, number of series = 4


sample range = [2000-01-07, 2019-02-19], size = 4988
Variance type: Robust

BFGS iterations: 15 (total loglik evaluations: 86)(numerical)

Target correlation matrix:


R (4 x 4)

e_ld_Nasdaq e_ld_Eurosto e_ld_Nikkei e_ld_Bovespa


e_ld_Nasdaq 1.0000 0.51913 0.14915 0.52667
e_ld_Eurostoxx 0.51913 1.0000 0.24805 0.43840
e_ld_Nikkei 0.14915 0.24805 1.0000 0.16426
e_ld_Bovespa 0.52667 0.43840 0.16426 1.0000

Scalar DCC

coefficient std. error z p-value


--------------------------------------------------------
a 0.00728632 0.00155143 4.697 2.65e-06 ***
b 0.990368 0.00249306 397.3 0.0000 ***

Total loglikelihood = 79298

[Figure: time-series plot of the estimated conditional correlations CondCorr_01_02, CondCorr_01_03 and CondCorr_01_04, 2000–2018]

Figure 1: Estimated correlation from the DCC scalar model

Finally, the estimated conditional correlations can be retrieved via the DCC_GetCorrel() function. In this case, the command

list C = DCC_GetCorrel(DModel)

will populate the gretl workspace with the estimated conditional correlations. The series names are CondCorr_01_02, CondCorr_01_03 and so on. Plotting the estimated conditional correlations between the Nasdaq index and the other series gives a plot like the one shown in Figure 1.
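Such a plot can be reproduced, for instance, with gretl's gnuplot command; the following line is a sketch only, using the series created by DCC_GetCorrel() above:

gnuplot CondCorr_01_02 CondCorr_01_03 CondCorr_01_04 --time-series --with-lines --output=display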

2.1 The no-targeting case


In the previous section, setting the has_target option to 1 had the effect of keeping the Γ matrix fixed at the sample correlation matrix of the standardised residuals, which is printed before the estimation output. By setting has_target to 0, the elements of Γ are estimated jointly with A and B. For example, the scalar DCC without targeting produces the estimation output reported after the following sketch.
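A minimal sketch of the corresponding calls (assuming the lists E and H from Step 1 are still defined, and keeping the numerical derivatives used in the previous run):

scalar has_target = 0
bundle DModel = DCC_Setup(E, H, 3, has_target)   # 3 = scalar DCC, no targeting
DCC_Estimate(&DModel, _(analytical=0))
DCC_Printout(DModel)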
DCC model: type = scalar, number of series = 4
sample range = [2000-01-07, 2019-02-19], size = 4988
Variance type: Robust

BFGS iterations: 26 (total loglik evaluations: 131) (numerical)


Intercept parametrisation: Traditional

coefficient std. error z p-value


-------------------------------------------------------
Γ[1,2] 0.533227 0.0420416 12.68 7.32e-37 ***
Γ[1,3] 0.184273 0.0565266 3.260 0.0011 ***
Γ[1,4] 0.534908 0.0459936 11.63 2.90e-31 ***
Γ[2,3] 0.267401 0.0548988 4.871 1.11e-06 ***
Γ[2,4] 0.398188 0.0462162 8.616 6.95e-18 ***
Γ[3,4] 0.141740 0.0508388 2.788 0.0053 ***

Scalar DCC

coefficient std. error z p-value


--------------------------------------------------------
a 0.00725836 0.00156637 4.634 3.59e-06 ***
b 0.990167 0.00261731 378.3 0.0000 ***

Total loglikelihood = 79299.4

For values of Γ that make it badly conditioned, numerical problems may arise; in order to mitigate them, it may be helpful to use alternative parametrisations. These can be selected using the reparm key in the option bundle (for details, see Section 3).
With reparm = 0, the traditional parametrisation involving correlations is used, as shown in the example above. However, when reparm is set to 1 or 2, the spherical approach described in Lucchetti and Pedini (2023) is employed.
The spherical parametrisation is a computational technique that may enhance the quality of the numerical optimisation, especially when large and ill-conditioned correlation matrices are involved. Correlations are turned into angles (reparm = 2) starting from the Cholesky factor of Γ = GG′ as follows:

    G = \begin{bmatrix}
          1 & 0 & 0 & \cdots & 0 \\
          \cos\omega_{2,1} & \sin\omega_{2,1} & 0 & \cdots & 0 \\
          \cos\omega_{3,1} & \cos\omega_{3,2}\,\sin\omega_{3,1} & \sin\omega_{3,2}\,\sin\omega_{3,1} & \cdots & 0 \\
          \vdots & \vdots & \vdots & \ddots & \vdots \\
          \cos\omega_{n,1} & \cos\omega_{n,2}\,\sin\omega_{n,1} & \cdots & \cdots & \prod_{i=1}^{n-1}\sin\omega_{n,i}
        \end{bmatrix}
where the n × n matrix G is now characterised via n(n − 1)/2 angles ω_{i,j} defined between 0 and π. The transformation is invertible, so recovering the correlations is immediate. To further improve the convergence of the maximisation, the ω_{i,j} can be re-expressed as unconstrained parameters θ_{i,j} via a logit-type transformation (reparm = 1):

    θ_{i,j} = log(ω_{i,j}) − log(π − ω_{i,j}).

This transformation has the effect that the parameter space for θ_{i,j} is the entire real line; this may be advantageous to overcome possible numerical problems when sin(ω_{i,j}) ≃ 0.
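In practice, the reparametrisation is chosen at estimation time via the reparm key of the option bundle; for example (a sketch, assuming the model bundle DModel was set up with has_target = 0):

DCC_Estimate(&DModel, _(reparm=2))   # spherical parametrisation via angles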

2.2 The GUI


The GUI hook to the package can be found under the Model>Multivariate
Time Series heading. Note that, in the light of the arguments presented in
Section 1.3, the user is assumed to have already run Step 1 before using the
graphical interface.
You’ll be presented with a window similar to the one shown in Figure 2. The meaning of the various elements should be rather clear.

Figure 2: GUI hook for the DCC package

The 1st stage residuals list obtained after the 1st step (list E, see section 1.2).
The variances list estimated via the 1st step (list H, see section 1.2).
Model type Here the choice is between 3 DCC models, namely the Cholesky, the Rank-1 and the scalar model (default).
Standard errors Here the choice is between 3 algorithms for computing the
covariance matrix of the estimated parameters, hence the standard errors.
Quasi Maximum Likelihood (also known as QMLE, see Bollerslev and
Wooldridge, 1992) is the default, but Hessian and OPG are also available.

Verbosity An integer, ranging from 0 to 2: the default is 1, which means that BFGS iterations will be printed. If 0, no output is printed (all results can be retrieved later); if 2, the BFGS iterations are shown verbosely, which may be helpful in some cases, especially when the algorithm fails to converge (see also Section 3).

The Boolean flags control the use of analytical derivatives in the score computation and matrix targeting in the second-step estimation. They both default to 1, as already mentioned in Section 3.

Note that, if some lists are already defined, you can pick them from the
drop-down list; alternatively, you can create lists on the fly by using the “+”
button.

3 List of public functions (in alphabetical order)

DCC_Estimate(bundle *mod, bundle useropts[null])

Return type : scalar


Description : estimates the DCC model and returns an error code.

• mod: model bundle from DCC_Setup, in pointer form;


• useropts: a bundle containing the following keys as options:

– analytical: Boolean switch for analytical derivatives (default=1)


– verb: scalar for the verbosity level: 0: estimate silently, 1 (default):
print basic iteration info, 2: print detailed iteration info
– reparm: scalar, reparametrisation type for the intercept Γ in the no-target case: 0 for the traditional parametrisation, 1 for the spherical parametrisation using unrestricted parameters, 2 for the spherical parametrisation using angles. Default: 1.
– vcv_opt: scalar, parameter covariance matrix type. The options are OPG (0), Hessian (1) or Robust (QML) (2). Default: 2.

If useropts is omitted, all the default choices are used.
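As a usage sketch, several options can be combined in a single bundle (all keys as documented above; DModel is a model bundle previously created via DCC_Setup()):

bundle opts = _(analytical=1, verb=0, vcv_opt=1)   # silent run, Hessian-based standard errors
scalar err = DCC_Estimate(&DModel, opts)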

DCC_GetCorrel(bundle mod)

Return type : list

Description : returns a list with the estimated historical conditional correlations

• mod: model bundle

DCC_Printout(bundle mod)

Return type : none


Description : prints out the estimation results for model mod.

• mod: model bundle

DCC_Setup(list E, list H, scalar DCCtype, bool has_target)

Return type : bundle


Description : sets up the model and returns an initialised bundle

• E: list of GARCH residuals obtained in the 1st step estimation


• H: conditional variance list obtained in the 1st step estimation
• DCCtype: scalar indicating the model type (0=unrestricted, 1=Cholesky,
2=rank-1, 3=scalar)
• has_target: Boolean flag for matrix targeting (default=1)

DCC_Step1(list R, scalar p[0], bool verbose[1])

Return type : bundle


Description : this is a convenience function that takes an input list and computes a battery of univariate GARCH models (possibly, with lags) using all the variables in the list R as dependent variables, in turn. The univariate GARCH models contain a constant and p lags of the input series in the conditional mean equation (default p=0).

• R, a list of suitable series (e.g., asset returns);


• p, scalar giving the number of lags of the series to use;
• verbose, Boolean flag. If 1 (default), the results are printed as in Section 2.

Note that the function returns a bundle with the following keys: n, the number of series; vn, the series names; E, the GARCH unstandardised residuals; H, the estimated variances; coeff, the (np + 4) × n matrix of estimated coefficients; llik, the n-dimensional row vector containing the loglikelihoods of the univariate models; crit, the n × 3 matrix containing the information criteria.
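For instance, the contents of the returned bundle could be inspected as follows (a sketch, reusing the Returns list from Section 2):

bundle S1 = DCC_Step1(Returns, 1, 0)   # one lag in the mean equation, silent
eval S1.llik    # loglikelihoods of the univariate models
eval S1.crit    # information criteria (n x 3)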

4 Changelog
v 0.1 Initial release

References
Bollerslev, T. and J. M. Wooldridge (1992) ‘Quasi maximum likelihood estimation and inference in dynamic models with time varying covariances’, Econometric Reviews 11: 143–172.

Caporin, M., R. Lucchetti and G. Palomba (2020) ‘Analytical gradients of dynamic conditional correlation models’, Journal of Risk and Financial Management 13(3): 1–21.
Engle, R. (2002) ‘Dynamic conditional correlation: A simple class of multivariate
generalized autoregressive conditional heteroskedasticity models’, Journal of
Business & Economic Statistics 20(3): 339–350.
Engle, R. F. and J. Mezrich (1996) ‘GARCH for groups’, Risk 9: 36–40.

Lucchetti, R. and L. Pedini (2023) ‘The spherical parametrisation for correlation matrices and its computational advantages’, mimeo.

