Lecture Notes: Financial Time Series, ARCH and GARCH Models
Piotr Fryzlewicz
Department of Mathematics
University of Bristol
Bristol BS8 1TW
UK
p.z.fryzlewicz@bristol.ac.uk
http://www.maths.bris.ac.uk/~mapzf/
Let P_k denote the price at time k of a financial asset, for example a share, stock index, currency exchange rate or a commodity. Instead of analysing P_k, which often displays unit-root behaviour and thus cannot be modelled as stationary, we often analyse the series of log-returns

y_k = log P_k − log P_{k−1} = log( P_k / P_{k−1} ) = log( 1 + (P_k − P_{k−1}) / P_{k−1} ).

By Taylor-expanding the above, we can see that y_k is almost equivalent to the relative return (P_k − P_{k−1}) / P_{k−1}. The reason we typically consider log-returns instead of relative returns is the additivity property of log-returns, which is not shared by relative returns.
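As a quick numerical illustration of the relationship between the two types of return and of the additivity property, here is a minimal Python sketch (the price values are made up):

```python
import numpy as np

# Hypothetical daily closing prices P_k of some asset.
prices = np.array([100.0, 101.5, 99.8, 100.2, 102.0])

# Log-returns y_k = log(P_k) - log(P_{k-1}).
log_returns = np.diff(np.log(prices))

# Relative returns (P_k - P_{k-1}) / P_{k-1}, for comparison.
rel_returns = np.diff(prices) / prices[:-1]

# For small movements the two are almost identical (Taylor expansion of log(1+x)),
# but only log-returns add up over time: their sum telescopes to log(P_n / P_0).
total_log = log_returns.sum()
assert np.isclose(total_log, np.log(prices[-1] / prices[0]))
```
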
As an illustration, consider the time series of daily closing values of the 'WIG' index, which is the main summary index of the Warsaw Stock Exchange, running from 16 April 1991 to 5 January (page in Polish). The top left plot in Figure 1 shows the actual series P_k. The top right plot shows y_k.
The series y_k displays many of the typical 'stylised facts' present in financial log-return series. As shown in the middle left plot of Figure 1, the series y_k is uncorrelated, here with the exception of lag 1 (typically, log-return series are uncorrelated with the exception of the first few lags). However, as shown in the middle right plot, the squared series y_k^2 is strongly auto-correlated even at very large lags; in fact, in this example it is not obvious that the autocorrelation decays to zero at all. Another stylised fact is heavy tails, visible in the bottom left plot of Figure 1, which shows the Q-Q plot of the marginal distribution of y_k. Finally, the bottom right plot illustrates the so-called leverage effect: the series y_k responds differently to its own positive and negative movements, or in other words the conditional distribution of |y_k| given {y_{k−1} > 0} is different from that of |y_k| given {y_{k−1} < 0}. The bottom right plot of Figure 1 shows the sample quantiles of the two conditional distributions plotted against each other. The explanation is that the market responds differently to "good" and "bad" news.

These features persist across the entire available data set. However, to propose a stationary model for y_k which captures the above "stylised facts" is not easy, as the series does not "look stationary": its local variability visibly changes over time. On the other hand, if we fitted a linear time series model (such as ARMA) to y_k, the estimated parameters would come out
Figure 1: Top left: the WIG index series P_k. Top right: the log-return series y_k. Middle left: sample ACF of y_k (lags 0 to 100). Middle right: sample ACF of y_k^2 (lags 0 to 100). Bottom left: Q-Q plot of y_k. Bottom right: sample quantiles of the conditional distributions of |y_k| given the sign of y_{k−1}, plotted against each other.
as zero because of the lack of serial correlation in y_k, which clearly would not be what we wanted.
2 (G)ARCH models
2.1 Introduction
Noting the above difficulties, Engle (1982) was the first to propose a stationary non-linear model for log-returns capturing the above stylised facts (the ARCH process). Bollerslev (1986) and Taylor (1986) independently generalised Engle's model to make
it more realistic; the generalisation was called “GARCH”. GARCH is probably the most
commonly used financial time series model and has inspired dozens of more sophisticated
models.
Literature. The literature on GARCH is massive. My favourites are: Giraitis et al. (2005), Bera and Higgins (1993), Berkes et al. (2003), and the book by Straumann (2005). This chapter draws on these sources.
The GARCH(p, q) model is defined by the pair of equations

y_k = σ_k ε_k,
σ_k^2 = ω + Σ_{i=1}^p α_i y_{k−i}^2 + Σ_{j=1}^q β_j σ_{k−j}^2,   (1)

where ω > 0, α_i ≥ 0, β_j ≥ 0, and {ε_k}_k is a sequence of i.i.d. variables with mean zero and unit variance, with ε_k independent of {y_j, j < k}.
The main idea is that σ_k^2, the conditional variance of y_k given the information available up to time k − 1, has an autoregressive structure and is positively correlated with its own recent past and with recent values of the squared returns y^2. This captures the idea of volatility (= conditional variance) being "persistent": large (small) values of y_k^2 are likely to be followed by further large (small) values.
The answer to the question of whether and when the system of equations (1) admits a
unique and stationary solution is not straightforward and involves the so-called top Lya-
punov exponent associated with a certain sequence of random matrices. This is beyond the
scope of this course. See Bougerol and Picard (1992) for a theorem which gives a necessary
and sufficient condition for (1) to have a unique and stationary solution. A simpler, sufficient-only condition (Proposition 3.3.3 of Straumann (2005)) is

Σ_{i=1}^p α_i + Σ_{j=1}^q β_j < 1.   (2)
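To get a feel for the model, one can simulate a GARCH(1,1) path and check the stylised facts on the simulated data. The following sketch uses illustrative parameter values, chosen only so that condition (2) holds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative GARCH(1,1) parameters; alpha1 + beta1 < 1, so condition (2) holds.
omega, alpha1, beta1 = 0.1, 0.1, 0.8
assert alpha1 + beta1 < 1

n = 20_000
y = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = omega / (1 - alpha1 - beta1)  # start from the stationary variance
y[0] = np.sqrt(sigma2[0]) * rng.standard_normal()

for k in range(1, n):
    sigma2[k] = omega + alpha1 * y[k - 1] ** 2 + beta1 * sigma2[k - 1]
    y[k] = np.sqrt(sigma2[k]) * rng.standard_normal()

def acf(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# y should be (nearly) uncorrelated, while y^2 is positively autocorrelated.
print(acf(y, 1), acf(y ** 2, 1))
```
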
2.2.2 Mean zero

In any model in which σ_k is measurable with respect to F_{k−1} (which is the case in the GARCH model), y_k is a martingale difference, and in particular has mean zero:

E(y_k) = E(σ_k ε_k) = E[E(σ_k ε_k | F_{k−1})] = E[σ_k E(ε_k | F_{k−1})] = E[σ_k E(ε_k)] = E[σ_k · 0] = 0.
2.2.3 Lack of serial correlation
In the same way, we can show that y_k is uncorrelated with y_{k+h} for h > 0:

E(y_k y_{k+h}) = E(y_k σ_{k+h} ε_{k+h}) = E[E(y_k σ_{k+h} ε_{k+h} | F_{k+h−1})] = E[y_k σ_{k+h} E(ε_{k+h} | F_{k+h−1})] = 0.
Define Z_k = y_k^2 − σ_k^2. Proceeding exactly as in Sections 2.2.2 and 2.2.3, we can show that Z_k is a martingale difference and therefore has mean zero (the corresponding "lack of serial correlation" property is more tricky, as it would require E(y_k^4) < ∞, which is not always true). The main point of this definition is that for many purposes Z_k can be treated as a "white noise" sequence: there are a lot of results about martingale difference sequences which extend results on independent random variables. Using Z_k, we can rewrite (1) as follows:
y_k^2 = σ_k^2 + Z_k
      = ω + Σ_{i=1}^p α_i y_{k−i}^2 + Σ_{j=1}^q β_j σ_{k−j}^2 + Z_k
      = ω + Σ_{i=1}^p α_i y_{k−i}^2 + Σ_{j=1}^q β_j y_{k−j}^2 − Σ_{j=1}^q β_j Z_{k−j} + Z_k.

If we denote R = max(p, q) and set α_i = 0 for i > p and β_j = 0 for j > q, then the above can be written as

y_k^2 = ω + Σ_{i=1}^R (α_i + β_i) y_{k−i}^2 − Σ_{j=1}^q β_j Z_{k−j} + Z_k.   (3)

In other words, y_k^2 is an ARMA process with martingale difference innovations.
Assuming stationarity and E(y_k^2) < ∞, we can take expectations of both sides of (3) to obtain

E(y_k^2) = ω + Σ_{i=1}^R (α_i + β_i) E(y_{k−i}^2) − Σ_{j=1}^q β_j E(Z_{k−j}) + E(Z_k)
         = ω + E(y_k^2) Σ_{i=1}^R (α_i + β_i),

which gives

E(y_k^2) = ω / ( 1 − Σ_{i=1}^R (α_i + β_i) ).
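The formula for E(y_k^2) can be sanity-checked numerically: iterating the recursion m_h = ω + Σ_i (α_i + β_i) m_{h−i} from an arbitrary starting point converges to ω / (1 − Σ_i (α_i + β_i)) whenever (2) holds. A sketch with made-up GARCH(2,1) parameters:

```python
# Made-up GARCH(2,1) parameters satisfying condition (2).
omega = 0.05
alpha = [0.05, 0.10]  # alpha_1, alpha_2
beta = [0.60]         # beta_1

R = max(len(alpha), len(beta))
# phi_i = alpha_i + beta_i, with missing coefficients treated as zero.
phi = [(alpha[i] if i < len(alpha) else 0.0) +
       (beta[i] if i < len(beta) else 0.0) for i in range(R)]
assert sum(phi) < 1  # condition (2)

# Iterate m_h = omega + sum_i phi_i * m_{h-i} from an arbitrary start.
m = [1.0] * R
for _ in range(500):
    m.append(omega + sum(phi[i] * m[-1 - i] for i in range(R)))

closed_form = omega / (1 - sum(phi))
# The iteration is a contraction (factor sum(phi) < 1), so it converges
# to the closed-form unconditional second moment.
assert abs(m[-1] - closed_form) < 1e-10
```
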
In this section, we argue that the GARCH model (1) can easily be heavy-tailed. For ease of presentation, we only show it for the GARCH(1,1) model. We first assume the following condition:

E(α_1 ε_0^2 + β_1)^{q/2} > 1   (4)

for some q > 0. This condition is, for example, satisfied if ε_k ∼ N(0, 1) (but not only in this case). We write

σ_{k+1}^2 = ω + α_1 y_k^2 + β_1 σ_k^2 = ω + (α_1 ε_k^2 + β_1) σ_k^2,

so that, by the independence of ε_k and σ_k,

E(σ_{k+1}^q) = E[ω + (α_1 ε_k^2 + β_1) σ_k^2]^{q/2} ≥ E[(α_1 ε_k^2 + β_1) σ_k^2]^{q/2} = E(α_1 ε_k^2 + β_1)^{q/2} E(σ_k^q).

If E(σ_k^q) were finite, then by stationarity E(σ_{k+1}^q) = E(σ_k^q), and dividing both sides by E(σ_k^q) we would obtain

1 ≥ E(α_1 ε_0^2 + β_1)^{q/2},

which contradicts assumption (4). Thus E(σ_k^q) is infinite, which also implies that E|y_k|^q = E|ε_0|^q E(σ_k^q) is infinite: the marginal distribution of y_k is heavy-tailed.
2.3 Estimation
We now discuss the estimation of the parameter vector

θ = (ω, α_1, . . . , α_p, β_1, . . . , β_q).
We assume that there is a unique stationary solution to the set of equations (1). By recursively expanding the σ_{k−j}^2 terms in (1), σ_k^2 can be represented as

σ_k^2 = c_0 + Σ_{i=1}^∞ c_i y_{k−i}^2,   (5)

(given that E(log σ_0^2) < ∞, which is a kind of a "stability condition", but this is not important for now). How do we compute the coefficients c_i in (5)? They obviously depend on the parameters: for a generic parameter value u = (x, s_1, . . . , s_p, t_1, . . . , t_q), we write c_i(u) for the corresponding coefficients, so that c_i = c_i(θ).
If q ≥ p, then

c_0(u) = x / (1 − t_1 − . . . − t_q),
c_1(u) = s_1,
c_2(u) = s_2 + t_1 c_1(u),
. . .
c_p(u) = s_p + t_1 c_{p−1}(u) + . . . + t_{p−1} c_1(u),
. . .

with analogous formulae in the case q < p. In both cases, for j > max(p, q) the coefficients satisfy the recursion

c_j(u) = t_1 c_{j−1}(u) + . . . + t_q c_{j−q}(u).   (6)

The parameter space U is taken to be the set of vectors u = (x, s_1, . . . , s_p, t_1, . . . , t_q) whose components all lie in an interval [a, b] and which satisfy t_1 + . . . + t_q ≤ ρ_0, where ρ_0, a, b are arbitrary but such that 0 < a < b, 0 < ρ_0 < 1 and qa < ρ_0 (so that U is non-empty). Clearly, U is a compact set.
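For GARCH(1,1) the recursion above collapses to a closed form: c_0(u) = x/(1 − t_1) and c_i(u) = s_1 t_1^{i−1} for i ≥ 1. A quick numerical check (the parameter values are arbitrary):

```python
# GARCH(1,1) with generic parameter u = (x, s1, t1):
# sigma_k^2 = x + s1 * y_{k-1}^2 + t1 * sigma_{k-1}^2.
x, s1, t1 = 0.1, 0.15, 0.8

# Recursion from the text: c_0 = x/(1 - t1), c_1 = s1, c_j = t1 * c_{j-1} for j > 1.
c = [x / (1 - t1), s1]
for j in range(2, 20):
    c.append(t1 * c[-1])

# Closed form for GARCH(1,1): c_i = s1 * t1**(i-1) for i >= 1.
for i in range(1, 20):
    assert abs(c[i] - s1 * t1 ** (i - 1)) < 1e-12
```

Note how the coefficients decay geometrically, in line with the exponential bounds used in the lemmas below.
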
The quasi-likelihood for GARCH(p, q) is defined as

L_n(u) = −(1/2) Σ_{1≤k≤n} { log w_k(u) + y_k^2 / w_k(u) },

where

w_k(u) = c_0(u) + Σ_{1≤i<∞} c_i(u) y_{k−i}^2

(in practice we truncate this sum, but let us not worry about that for now). Note that w_k(θ) = σ_k^2.

If ε_k is standard normal, then conditionally on F_{k−1}, y_k is normal with mean zero and variance σ_k^2, whose log-density at x is

−(1/2) log σ^2 − x^2 / (2σ^2) + C ∼ −(1/2) { log σ^2 + x^2 / σ^2 }.

Thus, if the ε_k are N(0, 1) and the parameter is a general u, then the conditional log-likelihood of y_k given F_{k−1} is

−(1/2) { log w_k(u) + y_k^2 / w_k(u) }.

Summing over k, we get exactly L_n(u). The quasi-likelihood estimator θ̂_n is defined as the maximiser of the quasi-likelihood over the parameter space:

θ̂_n = argmax_{u∈U} L_n(u).
We now impose a set of regularity conditions, jointly referred to as Assumptions 2.1.

(ii) The polynomials

A(x) = α_1 x + α_2 x^2 + . . . + α_p x^p,
B(x) = 1 − β_1 x − β_2 x^2 − . . . − β_q x^q

have no common roots.

(iii) θ lies in the interior of U. (This implies, in particular, that there are no zero coefficients among the α_i and β_j.)

(iv) E{ ε_0^{2(1+δ)} } < ∞ for some δ > 0.

(v) E(ε_0^2) = 1.

(vii) E|y_0^2|^δ < ∞ for some δ > 0.
We are now ready to formulate a consistency theorem for the quasi-likelihood estimator θ̂_n.

Theorem 2.1 Suppose Assumptions 2.1 hold. Then

θ̂_n → θ   a.s.

as n → ∞.
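In practice, θ̂_n is computed by numerical optimisation. The sketch below simulates GARCH(1,1) data and maximises a truncated quasi-likelihood with scipy; the initialisation of w_k, the optimiser and its bounds are ad hoc choices for illustration, not part of the theory above:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate GARCH(1,1) data; the "true" parameters are made up for this demo.
omega_true, alpha_true, beta_true = 0.1, 0.1, 0.8
n = 5000
y = np.zeros(n)
s2 = omega_true / (1 - alpha_true - beta_true)  # start at the stationary variance
for k in range(n):
    y[k] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega_true + alpha_true * y[k] ** 2 + beta_true * s2

def neg_quasi_loglik(u, y):
    """-(1/n) L_n(u), with w_k computed by the GARCH recursion
    (one way of truncating the infinite sum defining w_k(u))."""
    x, s1, t1 = u
    w = np.empty_like(y)
    w[0] = y.var()  # ad hoc initialisation of the recursion
    for k in range(1, len(y)):
        w[k] = x + s1 * y[k - 1] ** 2 + t1 * w[k - 1]
    return 0.5 * np.mean(np.log(w) + y ** 2 / w)

res = minimize(neg_quasi_loglik, x0=[0.05, 0.05, 0.5], args=(y,),
               method="L-BFGS-B",
               bounds=[(1e-6, 10.0), (1e-6, 1.0), (1e-6, 0.999)])
omega_hat, alpha_hat, beta_hat = res.x
print(res.x)  # estimates should be roughly in the vicinity of the true values
```
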
Before we prove Theorem 2.1, we need a number of technical lemmas. These lemmas use, in an ingenious and "beautiful" way, a number of simple but extremely useful mathematical tools: implications of the form p ⇒ q proved via their contrapositives, the Bonferroni inequality, the Markov inequality, the mean-value theorem, and others.

Proof. By induction. (7) and (8) hold for 0 ≤ i ≤ max(p, q) if C_1 is chosen small enough and C_2 large enough. Let j > R and assume that (7) and (8) hold for all i < j. By (6),
Also by (6),

c_j(u) ≤ (t_1 + . . . + t_q) max_{k=1,...,q} c_{j−k}(u) ≤ ρ_0 · C_2 ρ_0^{(j−q)/q} = C_2 ρ_0^{j/q},

which proves the inductive step for (8).
Lemma 2.2 Suppose Assumptions 2.1 (iii), (v) and (vii) hold. Then for any 0 < ν < γ we have

E sup_{u∈U} ( σ_k^2 / w_k(u) )^ν < ∞.

Proof. For any M ≥ 1, since all the terms in w_k(u) are nonnegative,

σ_k^2 / w_k(u) ≤ σ_k^2 / ( Σ_{i=1}^M c_i(u) y_{k−i}^2 ) = σ_k^2 / ( Σ_{i=1}^M c_i(u) ε_{k−i}^2 σ_{k−i}^2 ).   (9)

By the definition of σ_k^2,

σ_{k−1}^2 > β_i σ_{k−i−1}^2,   1 ≤ i ≤ q,
σ_{k−1}^2 > α_i y_{k−i−1}^2,   1 ≤ i ≤ p,
σ_{k−1}^2 > ω.

Hence

σ_k^2 / σ_{k−1}^2 = ( ω + α_1 y_{k−1}^2 + . . . + α_p y_{k−p}^2 + β_1 σ_{k−1}^2 + . . . + β_q σ_{k−q}^2 ) / σ_{k−1}^2
≤ 1 + α_1 ε_{k−1}^2 + α_2/α_1 + α_3/α_2 + . . . + α_p/α_{p−1} + β_1 + β_2/β_1 + . . . + β_q/β_{q−1}
≤ K_1 (1 + ε_{k−1}^2)

for some K_1 > 1. This leads to

σ_k^2 / σ_{k−i}^2 ≤ K_1^M Π_{1≤j≤M} (1 + ε_{k−j}^2),   1 ≤ i ≤ M,

and hence, from (9) and the fact that, for each fixed i, c_i(u) is bounded away from zero on U,

σ_k^2 / w_k(u) ≤ K_1^M Π_{1≤j≤M} (1 + ε_{k−j}^2) / ( Σ_{i=1}^M c_i(u) ε_{k−i}^2 ) ≤ K_2^M Π_{1≤j≤M} (1 + ε_{k−j}^2) / ( Σ_{i=1}^M ε_{k−i}^2 ),

for some K_2 > 0, which is now independent of u (a uniform bound)! Thus, using the Hölder inequality,

E sup_{u∈U} ( σ_k^2 / w_k(u) )^ν ≤ E{ K_2^{Mν} ( Π_{1≤j≤M} (1 + ε_{k−j}^2) / Σ_{i=1}^M ε_{k−i}^2 )^ν }
≤ K_2^{Mν} { E Π_{1≤j≤M} (1 + ε_{k−j}^2)^γ }^{ν/γ} { E ( Σ_{i=1}^M ε_{k−i}^2 )^{−νγ/(γ−ν)} }^{(γ−ν)/γ}.

By the independence of the ε_{k−j},

E Π_{1≤j≤M} (1 + ε_{k−j}^2)^γ = { E(1 + ε_0^2)^γ }^M < ∞.
Dealing with (negative) moments of sums can be difficult, but we will use a "trick" to avoid having to do so. From 1st-year probability, we know that for a nonnegative random variable Y,

E(Y) = ∫_0^∞ P(Y > t) dt.

Thus, if P(Y > t) ≤ K_3 t^{−2} for t large enough, then E(Y) < ∞. We will show that

P( ( Σ_{i=1}^M ε_{k−i}^2 )^{−νγ/(γ−ν)} > t ) ≤ K_3 t^{−2}

for t large enough. First,

P( ( Σ_{i=1}^M ε_{k−i}^2 )^{−νγ/(γ−ν)} > t ) = P( Σ_{i=1}^M ε_{k−i}^2 ≤ t^{−(γ−ν)/(νγ)} ).   (10)

Since

Σ_{i=1}^M ε_{k−i}^2 ≤ t^{−(γ−ν)/(νγ)} ⇒ ∀ i, ε_{k−i}^2 ≤ t^{−(γ−ν)/(νγ)},

the probability in (10) is bounded, by independence, by

{ P( ε_0^2 ≤ t^{−(γ−ν)/(νγ)} ) }^M.   (11)

We now use the assumption that the distribution of ε_0^2 satisfies

P(ε_0^2 ≤ s) ≤ C̃ s^µ

for all s, for some C̃ and µ > 0. Thus, we bound (11) from above by

C̃^M t^{−µ(γ−ν)M/(γν)} ≤ K_3 t^{−2},

provided M is chosen large enough. This shows that the second expectation in the Hölder bound above is finite, which completes the proof.
Lemma 2.3 Suppose Assumptions 2.1 (iii) and (viii) hold. Then for any ν > 0,

E sup_{u∈U} ( Σ_{i=1}^∞ i c_i(u) y_{k−i}^2 / ( 1 + Σ_{i=1}^∞ c_i(u) y_{k−i}^2 ) )^ν < ∞.
Proof. For any M ≥ 1, we have

Σ_{i=1}^∞ i c_i(u) y_{k−i}^2 / ( 1 + Σ_{i=1}^∞ c_i(u) y_{k−i}^2 )
= Σ_{i=1}^M i c_i(u) y_{k−i}^2 / ( 1 + Σ_{i=1}^∞ c_i(u) y_{k−i}^2 ) + Σ_{i=M+1}^∞ i c_i(u) y_{k−i}^2 / ( 1 + Σ_{i=1}^∞ c_i(u) y_{k−i}^2 )
≤ Σ_{i=1}^M i c_i(u) y_{k−i}^2 / Σ_{i=1}^M c_i(u) y_{k−i}^2 + Σ_{i=M+1}^∞ i c_i(u) y_{k−i}^2
≤ M + Σ_{i=M+1}^∞ i c_i(u) y_{k−i}^2.   (12)
We now recall another basic fact of probability. For a nonnegative variable Y, if P(Y > t) ∼ e^{−βt^α}, then all moments of Y are finite, i.e. E|Y|^ν < ∞ for all ν. Explanation:

E|Y|^ν = ∫ P(Y^ν > t) dt = ∫ P(Y > t^{1/ν}) dt ∼ ∫ e^{−β t^{α/ν}} dt < ∞.

We will show that

P( sup_{u∈U} Σ_{i=M+1}^∞ i c_i(u) y_{k−i}^2 > t ) ∼ e^{−βt^α}.
Choose constants ρ_0^{1/q} < ρ_* < 1 and ρ_{**} > 1 such that ρ_* ρ_{**} < 1, and take M ≥ M_0(C_2, ρ_*, ρ_{**}), where M_0 is large enough that C_2 i ρ_0^{i/q} ≤ ρ_*^i for all i > M_0. Then, by (8),

P( sup_{u∈U} Σ_{i=M+1}^∞ i c_i(u) y_{k−i}^2 > t ) ≤ P( Σ_{i=M+1}^∞ C_2 i ρ_0^{i/q} y_{k−i}^2 > t ) ≤ P( Σ_{i=M+1}^∞ ρ_*^i y_{k−i}^2 > t ).   (13)
Now,

Σ_{i=M+1}^∞ ρ_*^i y_{k−i}^2 > t ⇒ ∃ i ∈ {M+1, M+2, . . .}: y_{k−i}^2 > t ρ_*^{−i} ρ_{**}^{−i} (ρ_{**} − 1)/ρ_{**}.

To see that this implication of the form p ⇒ q is true, it is easy to show that ¬q ⇒ ¬p: if y_{k−i}^2 ≤ t ρ_*^{−i} ρ_{**}^{−i} (ρ_{**} − 1)/ρ_{**} for all i > M, then Σ_{i=M+1}^∞ ρ_*^i y_{k−i}^2 ≤ t (ρ_{**} − 1)/ρ_{**} Σ_{i=M+1}^∞ ρ_{**}^{−i} ≤ t. Hence, by the Bonferroni inequality,

P( Σ_{i=M+1}^∞ ρ_*^i y_{k−i}^2 > t ) ≤ Σ_{i=M+1}^∞ P( y_{k−i}^2 > t ρ_*^{−i} ρ_{**}^{−i} (ρ_{**} − 1)/ρ_{**} )
= Σ_{i=M+1}^∞ P( |y_{k−i}^2|^δ > t^δ (ρ_* ρ_{**})^{−iδ} { (ρ_{**} − 1)/ρ_{**} }^δ ).
Now using the Markov inequality, we bound the above by

Σ_{i=M+1}^∞ E|y_0^2|^δ t^{−δ} { ρ_{**}/(ρ_{**} − 1) }^δ (ρ_* ρ_{**})^{iδ} ≤ E|y_0^2|^δ t^{−δ} { ρ_{**}/(ρ_{**} − 1) }^δ (ρ_* ρ_{**})^{Mδ} / ( 1 − (ρ_* ρ_{**})^δ ).   (14)
We now take t > 2 max(M_0, 1) (it is enough to prove the bound for "large" t) and M = t/2. Then

P( M + sup_{u∈U} Σ_{i=M+1}^∞ i c_i(u) y_{k−i}^2 > t ) = P( sup_{u∈U} Σ_{i=M+1}^∞ i c_i(u) y_{k−i}^2 > t/2 )
≤ E|y_0^2|^δ (t/2)^{−δ} { ρ_{**}/(ρ_{**} − 1) }^δ (ρ_* ρ_{**})^{tδ/2} / ( 1 − (ρ_* ρ_{**})^δ ) ≤ K_4 e^{−K_5 t}

for some K_4, K_5 > 0. In view of (12) and the fact recalled above, this completes the proof.
Lemma 2.4 Let | · | denote the maximum norm of a vector. Suppose that Assumptions 2.1 hold. Then

E sup_{u,v∈U} | log w_k(u) − log w_k(v) | / |u − v| < ∞,
E sup_{u,v∈U} | y_k^2 / w_k(u) − y_k^2 / w_k(v) | / |u − v| < ∞.

Proof. The mean-value theorem says that for a differentiable function f we have

|f(u) − f(v)| / |u − v| = |f′(ξ)|,

where max(|ξ − u|, |ξ − v|) ≤ |u − v|. Applying it to f(u) = y_k^2 / w_k(u), we get

| y_k^2 / w_k(u) − y_k^2 / w_k(v) | = |u − v| · ( y_k^2 / w_k(ξ) ) · ( |w_k′(ξ)| / w_k(ξ) ).
Clearly,

w_k′(u) = c_0′(u) + Σ_{i=1}^∞ c_i′(u) y_{k−i}^2.
We now use a fact which we accept without proof; the proof is easy but long, and is again done by induction. If you are interested in the details, see Berkes et al. (2003), Lemma 3.2:

sup_{u∈U} |w_k′(u)| / w_k(u) ≤ K_6 sup_{u∈U} ( 1 + Σ_{i=1}^∞ i c_i(u) y_{k−i}^2 ) / ( 1 + Σ_{i=1}^∞ c_i(u) y_{k−i}^2 ).

By Lemma 2.3, this implies

E sup_{u∈U} ( |w_k′(u)| / w_k(u) )^{(2+δ)/δ} < ∞.

On the other hand, by Lemma 2.2 and the assumptions of this lemma,

E sup_{u∈U} ( y_k^2 / w_k(u) )^{1+δ/2} = E(ε_k^2)^{1+δ/2} · E sup_{u∈U} ( σ_k^2 / w_k(u) )^{1+δ/2} < ∞.

The Hölder inequality (with exponents (2+δ)/2 and (2+δ)/δ) completes the proof. The proof for the log term is very similar.
Lemma 2.5 Suppose Assumptions 2.1 (iii), (iv) and (v) hold. Then

sup_{u∈U} | (1/n) L_n(u) − L(u) | → 0   almost surely,

where

L(u) = −(1/2) E{ log w_0(u) + y_0^2 / w_0(u) }.
Proof. We start by stating the fact that if E|ε_0^2|^δ < ∞ for some δ > 0, then there exists a δ* > 0 such that E|y_0^2|^{δ*} < ∞ and E|σ_0^2|^{δ*} < ∞; the proof of this fact is beyond the scope of this course. In particular,

E|y_0^2|^{δ*} < ∞,   (15)

which implies E|L(u)| < ∞. Indeed, by the bound (8) on the coefficients,

| log w_0(u) | ≤ log C_2 + log( 1 + Σ_{1≤i<∞} ρ_0^{i/q} y_{−i}^2 ) ≤ A + B Σ_{1≤i<∞} ρ_0^{iδ*/q} |y_{−i}^2|^{δ*}

for some constants A and B, and the right-hand side has finite expectation by (15). Moreover, by Lemma 2.2,

E( y_0^2 / w_0(u) ) = E(ε_0^2) E( σ_0^2 / w_0(u) ) < ∞.
The solution y_k of (1) is a measurable function of {ε_j, j ≤ k}, and therefore y_k is stationary and ergodic by Theorem 3.5.8 of Stout (1974) (since {ε_k}_k is i.i.d.). As E|L(u)| < ∞, we can use the ergodic theorem, which says that for any u ∈ U,

(1/n) L_n(u) → L(u)

almost surely.
Also,

sup_{u,v∈U} |L_n(u) − L_n(v)| / |u − v| ≤ (1/2) Σ_{1≤k≤n} η_k,

where

η_k = sup_{u,v∈U} (1/|u − v|) { | log w_k(u) − log w_k(v) | + | y_k^2 / w_k(u) − y_k^2 / w_k(v) | }.

Again by Theorem 3.5.8 of Stout (1974), η_k is stationary and ergodic, and by Lemma 2.4 it has finite mean. By the ergodic theorem,

(1/n) Σ_{i=1}^n η_i = O(1)   almost surely,

whence

sup_{u,v∈U} | (1/n) L_n(u) − (1/n) L_n(v) | / |u − v| = O(1).

Thus the sequence of functions L_n(u)/n is equicontinuous. Also, as shown earlier, it converges almost surely to L(u) for all u ∈ U. This, along with the fact that U is a compact set, implies that the convergence is uniform, which completes the proof. (Recall the well-known fact from analysis: if an equicontinuous sequence of functions converges pointwise on a compact set, then it converges uniformly.)
Lemma 2.6 Suppose the conditions of Theorem 2.1 are satisfied. Then L(u) has a unique maximum at θ.

Proof. Since ε_0 is independent of σ_0^2 and w_0(u), and E(ε_0^2) = 1, we have

E( y_0^2 / w_0(u) ) = E( σ_0^2 / w_0(u) ).
We have

L(θ) − L(u) = −(1/2) E{ log σ_0^2 + y_0^2 / σ_0^2 } + (1/2) E{ log w_0(u) + y_0^2 / w_0(u) }
= −(1/2) { E(log σ_0^2) + 1 − E(log w_0(u)) − E( σ_0^2 / w_0(u) ) }
= −(1/2) E{ log( σ_0^2 / w_0(u) ) − σ_0^2 / w_0(u) + 1 }
= −(1/2) + (1/2) E{ σ_0^2 / w_0(u) − log( σ_0^2 / w_0(u) ) }.

The function x − log x is positive for all x > 0 and attains its minimum value (of 1) at x = 1. Hence L(θ) − L(u) ≥ −1/2 + 1/2 = 0 for all u ∈ U, so θ is a maximiser of L. Now suppose u* is any maximiser of L. Then

0 = L(θ) − L(u*) = −(1/2) + (1/2) E{ σ_0^2 / w_0(u*) − log( σ_0^2 / w_0(u*) ) },

so the random variable X = σ_0^2 / w_0(u*) satisfies E(X − log X) = 1, which, since x − log x ≥ 1 with equality only at x = 1, forces X = 1 almost surely. Thus σ_0^2 = w_0(u*) almost surely, so we must also have c_i(θ) = c_i(u*) for all i (we accept this "intuitively obvious" statement here without proof; see Berkes et al. (2003) for details).
On the other hand, the coefficients c_i determine the GARCH parameters: equating coefficients (using the "uniqueness of the GARCH representation", also without proof; see Berkes et al. (2003) for details), we have u* = θ, which completes the proof.
Proof of Theorem 2.1. L_n(u)/n converges to L(u) uniformly over U with probability one (Lemma 2.5), and L has a unique maximum at u = θ (Lemma 2.6). Thus, by standard arguments (best seen graphically!), the locations of the maxima of L_n/n converge almost surely to θ, i.e. θ̂_n → θ a.s.

Exercise: try to think why we need uniform convergence for this reasoning to be valid.
2.4 Forecasting
By standard Hilbert space theory, the best point forecasts of y_{k+h} under the L_2 norm are given by E(y_{k+h} | F_k), and are equal to zero for h > 0 by the martingale difference property of y_k.

Equation (3) is a convenient starting point for the analysis of the optimal forecasts of the squared returns. Note that the L_2-optimal forecast E(y_{k+h}^2 | F_k) only makes sense if E(y_k^4) < ∞, which is not always the case. However, many authors take the above as their forecasting statistic of choice. It might be more correct (and interesting) to consider Median(y_{k+h}^2 | F_k), which is the optimal forecast under the L_1 norm. This always makes sense, as E(y_k^2) < ∞, as we saw before. However, it is mathematically far more tractable to look at E(y_{k+h}^2 | F_k), which is what we are going to do below.
Take h > 0. Recall that E(Z_{k+h} | F_k) = 0. From (3), we get

y_{k+h}^2 = ω + Σ_{i=1}^R (α_i + β_i) y_{k+h−i}^2 − Σ_{j=1}^q β_j Z_{k+h−j} + Z_{k+h},

and, taking conditional expectations given F_k,

E(y_{k+h}^2 | F_k) = ω + Σ_{i=1}^R (α_i + β_i) E(y_{k+h−i}^2 | F_k) − Σ_{j=1}^q β_j E(Z_{k+h−j} | F_k).   (16)

The recursive formula (16) is used to compute the forecasts, with the following boundary conditions:

E(y_{k+h−i}^2 | F_k) is given recursively by (16) if h > i,
E(y_{k+h−i}^2 | F_k) = y_{k+h−i}^2 if h ≤ i,
E(Z_{k+h−j} | F_k) = 0 if h > j, and E(Z_{k+h−j} | F_k) = Z_{k+h−j} if h ≤ j.

For h > q, all the Z terms in (16) vanish and we obtain

E(y_{k+h}^2 | F_k) = ω + Σ_{i=1}^R (α_i + β_i) E(y_{k+h−i}^2 | F_k),   (17)

which is a difference equation for the sequence {E(y_{k+h}^2 | F_k)}_{h=q+1}^∞. Standard theory of difference equations implies that if all roots of the associated characteristic polynomial
lie outside the unit circle (which holds under the stationarity condition (2)), then the solution of (17) converges to

ω / ( 1 − Σ_{i=1}^R (α_i + β_i) ),

i.e. to the unconditional variance E(y_k^2). In other words, as the forecasting horizon gets longer and longer, the conditioning set F_k has less and less impact on the forecast, and the forecast approaches the unconditional variance of the series.
In this section, we obtain explicit formulae for forecasts in the GARCH(1,1) model. Using (16) and the boundary conditions, we get

E(y_{k+1}^2 | F_k) = ω + (α_1 + β_1) y_k^2 − β_1 Z_k = ω + α_1 y_k^2 + β_1 σ_k^2,
E(y_{k+2}^2 | F_k) = ω[1 + (α_1 + β_1)] + α_1 (α_1 + β_1) y_k^2 + β_1 (α_1 + β_1) σ_k^2,
E(y_{k+3}^2 | F_k) = ω[1 + (α_1 + β_1) + (α_1 + β_1)^2] + α_1 (α_1 + β_1)^2 y_k^2 + β_1 (α_1 + β_1)^2 σ_k^2,
. . .
E(y_{k+h}^2 | F_k) = ω Σ_{i=0}^{h−1} (α_1 + β_1)^i + α_1 (α_1 + β_1)^{h−1} y_k^2 + β_1 (α_1 + β_1)^{h−1} σ_k^2,

for h ≥ 1.
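The GARCH(1,1) forecast formulae can be coded directly; in the sketch below the parameter values and the state (y_k^2, σ_k^2) are made up for illustration:

```python
# Illustrative GARCH(1,1) parameters and current state at the forecast origin.
omega, alpha1, beta1 = 0.1, 0.1, 0.8
y_k2, sigma_k2 = 1.5, 1.2  # y_k^2 and sigma_k^2, assumed known

def forecast_y2(h):
    """E(y_{k+h}^2 | F_k) via the closed-form GARCH(1,1) formula."""
    a = alpha1 + beta1
    return (omega * sum(a ** i for i in range(h))
            + (alpha1 * y_k2 + beta1 * sigma_k2) * a ** (h - 1))

# Equivalent recursion: one step of (16), then repeated applications of (17).
f = omega + alpha1 * y_k2 + beta1 * sigma_k2
for h in range(1, 6):
    assert abs(forecast_y2(h) - f) < 1e-12
    f = omega + (alpha1 + beta1) * f

# As h grows, the forecast approaches the unconditional variance
# omega / (1 - alpha1 - beta1).
```
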
There are many extensions of the GARCH model. Two of them, EGARCH and IGARCH, are probably the most popular, and both are covered in Straumann (2005). The Exponential GARCH (EGARCH) model replaces the equation for σ_k^2 in (1) by an autoregressive equation for log σ_k^2, in which the innovations enter in a way that permits an asymmetric response to positive and negative returns (and hence can capture the leverage effect). The Integrated GARCH (IGARCH) process is a GARCH process for which Σ_{i=1}^R (α_i + β_i) = 1.
Both S-Plus and R have their own packages containing routines for fitting and forecasting GARCH models. The S-Plus module is described at

http://www.insightful.com/products/finmetrics/

and is a commercial product. Sadly, it is much better than its (free) R counterpart, the tseries package, described at

http://cran.r-project.org/src/contrib/Descriptions/tseries.html

The R package is only able to fit GARCH models, while the S-Plus module can fit GARCH and a number of its extensions.
Are GARCH models really used in practice? The answer is YES. Only recently, a big
UK bank was looking for a time series analyst to work on portfolio construction (risk
management). One of the job requirements was familiarity with GARCH models!
References

A. K. Bera and M. L. Higgins. ARCH models: properties, estimation and testing. Journal of Economic Surveys, 7:305–366, 1993.

I. Berkes, L. Horváth, and P. Kokoszka. GARCH processes: structure and estimation. Bernoulli, 9:201–227, 2003.

T. Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31:307–327, 1986.

P. Bougerol and N. Picard. Stationarity of GARCH processes and of some nonnegative time series. Journal of Econometrics, 52:115–127, 1992.

R. F. Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50:987–1007, 1982.

W. F. Stout. Almost Sure Convergence. Academic Press, New York-London, 1974.

D. Straumann. Estimation in Conditionally Heteroscedastic Time Series Models. Lecture Notes in Statistics 181. Springer, Berlin, 2005.

S. J. Taylor. Modelling Financial Time Series. Wiley, Chichester, 1986.