Professional Documents
Culture Documents
Uniwersytet Wroclawski, Uniwersytet Wroclawski, University of Copenhagen and Uniwersytet Wroclawski
Uniwersytet Wroclawski, Uniwersytet Wroclawski, University of Copenhagen and Uniwersytet Wroclawski
Yn = An Yn1 + Bn ,
n Z,
and its stationary solution have attracted much attention. Here (Ai , Bi ),
i Z, is an i.i.d. sequence, Ai > 0 a.s., and Bi assumes real values. [In what
Received May 2011; revised January 2012.
Supported by NCN Grant UMO-2011/01/M/ST1/04604.
2
Supported in part by the Danish Natural Science Research Council (FNU) Grants
09-072331 Point process modeling and statistical inference and 10-084172 Heavy tail
phenomena: Modeling and estimation.
AMS 2000 subject classifications. Primary 60F10; secondary 91B30, 60G70.
Key words and phrases. Stochastic recurrence equation, large deviations, ruin probability.
1
(1.2)
EA = 1.
Then Y is regularly varying with index > 0. In particular, there exist con+
stants c+
, c 0 such that c + c > 0 and
and
P{Y x} c
x
c+
as x .
c := E[(1 + Y ) Y ]/().
Goldie [10] also showed that similar results remain valid for the stationary
solution to stochastic recurrence equations of the type Yn = f (Yn1 , An , Bn )
for suitable functions f satisfying some contraction condition.
The power law tails (1.3) stimulated research on the extremes of the
sequence (Yi ). Indeed, if (Yi ) were i.i.d. with tail (1.3) and c+
> 0, then the
maximum sequence Mn = max(Y1 , . . . , Yn ) would satisfy the limit relation
(1.4)
1/
lim P{(c+
Mn x} = ex
n)
= (x),
x > 0,
where denotes the Frechet distribution, that is, one of the classical extreme value distributions; see Gnedenko [9]; cf. Embrechts et al. [6], Chapter 3. However, the stationary solution (Yi ) to (1.1) is not i.i.d., and therefore
one needs to modify (1.4) as follows: the limit has to be replaced by for
LARGE DEVIATIONS
some constant (0, 1), the so-called extremal index of the sequence (Yi );
see de Haan et al. [4]; cf. [6], Section 8.4.
The main objective of this paper is to derive another result which is a consequence of the power law tails of the marginal distribution of the sequence
(Yi ): we will prove large deviation results for the partial sum sequence
Sn = Y1 + + Yn ,
n 1,
S0 = 0.
This means we will derive exact asymptotic results for the left and right
tails of the partial sums Sn . Since we want to compare these results with
those for an i.i.d. sequence, we recall the corresponding classical results due
to A. V. and S. V. Nagaev [19, 20] and Cline and Hsing [2].
Theorem 1.2. Assume that (Yi ) is an i.i.d. sequence with a regularly
varying distribution, that is, there exists an > 0, constants p, q 0 with
p + q = 1 and a slowly varying function L such that
L(x)
L(x)
and P{Y x} q
as x .
x
x
Then the following relations hold for > 1 and suitable sequences bn :
P{Sn ESn > x}
lim sup
p = 0
(1.6)
n
nP{|Y | > x}
(1.5) P{Y > x} p
xbn
and
P{Sn ESn x}
q = 0.
(1.7)
lim sup
n xbn
nP{|Y | > x}
If > 2 one can choose bn = an log n, where a > 2, and for (1, 2],
bn = n+1/ for any > 0.
For (0, 1], (1.6) and (1.7) remain valid if the centering ESn is replaced
by 0 and bn = n+1/ for any > 0.
For (0, 2] one can choose a smaller bound bn if one knows the slowly
varying function L appearing in (1.5). A functional version of Theorem 1.2
with multivariate regularly varying summands was proved in Hult et al. [11]
and the results were used to prove asymptotic results about multivariate
ruin probabilities. Large deviation results for i.i.d. heavy-tailed summands
are also known when the distribution of the summands is subexponential,
including the case of regularly varying tails; see the recent paper by Denisov
et al. [5] and the references therein. In this case, the regions where the
large deviations hold very much depend on the decay rate of the tails of
the summands. For semi-exponential tails (such as for the log-normal and
the heavy-tailed Weibull distributions) the large deviation regions (bn , )
are much smaller than those for summands with regularly varying tails. In
particular, x = n is not necessarily contained in (bn , ).
The aim of this paper is to study large deviation probabilities for a particular dependent sequence (Yn ) as described in Kestens Theorem 1.1. For
dependent sequences (Yn ) much less is known about the large deviation
probabilities for the partial sum process (Sn ). Gantert [8] proved large deviation results of logarithmic type for mixing subexponential random variables. Davis and Hsing [3] and Jakubowski [12, 13] proved large deviation
results of the following type: there exist sequences sn such that
P{Sn > an sn }
c
nP{Y > an sn }
LARGE DEVIATIONS
we apply the large deviation results to get precise asymptotic bounds for
the ruin probability related to the random walk (Sn ). This ruin bound is
an analog of the celebrated result by Embrechts and Veraverbeke [7] in the
case of a random walk with i.i.d. step sizes.
2. Main result. The following is the main result of this paper. It is an
analog of the well-known large deviation result of Theorem 1.2.
Theorem 2.1. Assume that the conditions of Theorem 1.1 are satisfied
and additionally there exists > 0 such that EA+ and E|B|+ are finite.
Then the following relations hold:
(1) For (0, 2], M > 2,
(2.1)
sup
n
P{Sn dn > x}
< .
n1/ (log n)M x nP{|Y | > x}
sup
(2.2)
lim
sup
= 0,
nP{|Y | > x}
n 1/
c+
+ c
n
(log n)M xesn
where dn = 0 or dn = ESn according as (0, 1] or (1, 2].
(2) For > 2 and any cn ,
(2.3)
sup
n cn
sup
n0.5 log nx
n c n0.5 log nxesn
nP{|Y | > x}
c + c
n
and
Yei = 2i B1 + 3i B2 + + ii Bi1 + Bi ,
i 1.
Since Yi = i Y0 + Yei , the following decomposition is straightforward:
n
n
X
X
(3.1)
Yei =: Y0 n + Sen ,
i +
Sn = Y0
i=1
i=1
where
(3.2)
and
n = 1 + + n ,
n 1.
In view of (3.1) and Lemma 3.1 below it suffices to bound the ratios
P{Sen den > x}
nP{|Y | > x}
uniformly for the considered x-regions, where den = ESen for > 1 and den = 0
for 1.
The proof of the following bound is given at the end of this subsection.
Lemma 3.1. Let (sn ) be a sequence such that sn /n 0. Then for any
sequence (bn ) with bn the following relations hold:
P{|Y0 |n > x}
P{|Y0 |n > x}
= 0 and lim sup sup
< .
lim
sup
n bn xesn nP{|Y | > x}
n bn x nP{|Y | > x}
Before we further decompose Sen we introduce some notation to be used
throughout the proof. For any x in the considered large deviation regions:
m = [(log x)0.5+ ] for some positive number < 1/4, where [] denotes the
integer part.
n0 = [1 log x], where = E(A log A).
n1 = n0 m and n2 = n0 + m.
LARGE DEVIATIONS
For > 1, let D be the smallest integer such that D log EA > 1.
Notice that the latter inequality makes sense since EA < 1 due to (1.2)
and the convexity of the function (h) = EAh , h > 0.
For 1, fix some < , and let D be the smallest integer such that
D log EA > where, by the same remark as above, EA < 1.
Let n3 be the smallest integer satisfying
(3.3)
D log x n3 ,
x > 1.
Notice that since the function (h) = log (h) is convex, putting = 1 if
1
> 1, by the choice of D we have D
< ()()
< () = ; therefore
ej + W
fj = Bj + jj Bj+1 + + j,n 1 Bn + W
fj .
Yej = U
4
4
(3.5)
For any small > 0, there exists a constant c > 0 such that
)
( n
X
fj cj ) > x cnx ,
x > 1,
P (W
j=1
fj according as 1 or > 1.
where cj = 0 or cj = EW
ei = X
ei + Sei + Z
ei ,
U
where for i n n3 ,
ei = Bi + ii Bi+1 + + i,i+n
X
(3.7)
1 2
Bi+n1 1 ,
ei , Sei , Z
ei as follows: for n2 < n i < n3 choose X
ei , Sei
For i > n n3 , define X
as above and
ei = i,i+n Bi+n +1 + + i,n1 Bn .
Z
2
ei as before and
For n1 n i n2 , choose Zei = 0, X
Sei = i,i+n1 1 Bi+n1 + + i,n1 Bn .
jn1
X
i=(j1)n1 +1
ei ,
X
Sj =
jn1
X
i=(j1)n1 +1
Sei ,
Zj =
jn1
X
i=(j1)n1 +1
Zei ,
(3.9)
Since sn /n 0 as n we have
sup
bn xesn
x .
P{|Y0 |n > x}
P{|Y0 | > x}
sup
0.
s
nP{|Y | > x}
nP{|Y | > x}
n
bn xe
There exist c0 , x0 > 0 such that P {|Y0 | > y} c0 y for y > x0 . Therefore
P{|Y0 |n > x} P{x/n x0 } + c0 x En 1{x/n >x0 } cx En .
cx En
P{|Y0 |n > x}
sup
< .
bn x nP{|Y | > x}
bn x nP{|Y | > x}
In = sup
LARGE DEVIATIONS
fj is finite,
Proof of Lemma 3.2. Assume first that > 1. Since EW
D log EA > 1 and D log x n3 , we have for some positive
n3
fj | (EA) E|B| ceD log x log EA cx(1) ,
E|W
(3.10)
1 EA
and hence by Markovs inequality
( n
)
n
X
X
fj | cnx .
fj EW
fj ) > x 2x1
P (W
E|W
j=1
j=1
n3
X
X
fj | x nE|B| (EA )
fj > x x
P
E|W
W
(1 EA )
j=1
j=1
In the last step we used the fact that D log EA > . This concludes
the proof of the lemma.
3.2. Bounds for P{Xj > x} and P{Zj > x}. We will now study the tail
behavior of the single block sums X1 , Z1 defined in (3.8). We start with a
useful auxiliary result.
Lemma 3.3. Assume ( + ) = EA+ < for some > 0. Then there
is a constant C = C() > 0 such that ( + ) Ce for || /2, where
= E(A log A).
Proof. By a Taylor expansion and since () = 1, () = , we have
for some (0, 1),
( + ) = 1 + + 0.5 ( + ) 2 .
(3.11)
( + ) 1 + + c 2 = elog(1++c ) Ce .
The following lemma ensures that the Xi s do not contribute to the considered large deviation probabilities.
Lemma 3.4.
where
x > 1,
n1
X
(|Bi | + ii |Bi+1 | + + i,i+n1 2 |Bi+n1 1 |).
i=1
10
Proof. We have X 1 =
Pn 0
k=m+1 Rk ,
x/k3 , m
n0
[
k=m+1
X
x X 1
x
Rk
c
< x.
m+1
k2
(log x)0.5+
k=m+1
k=1
k=m+1
Ik c(x/k3 )(+) n+
e(n0 k) cx k3(+) n+
ek .
0
0
k=m+1
The following lemma ensures that the Zi s do not contribute to the considered large deviation probabilities.
Lemma 3.5.
where
n1
X
i=1
x > 1,
11
LARGE DEVIATIONS
Proof. We have Z 1 =
Pn3 n2 e
k=1 Rk , where
k=1
Jk cx n
1
nX
3 n2
k=1
C6
If
c+
= 0, then
(3.13)
P{S1 > x}
= 0.
n xbn n1 P{|Y | > x}
lim sup
The proof depends on the following auxiliary result whose proof is given
in Appendix B.
Lemma 3.7. Assume that Y and k [defined in (3.2)] are independent and ( + ) = EA+ < for some > 0. Then for n1 = n0 m =
[1 log x] [(log x)0.5+ ] for some < 1/4 and any sequences bn and
12
provided
c+
> 0. If
c+
= 0, then
lim
sup
n rn kn1 ,bn x
P{k Y > x}
= 0.
kP{|Y | > x}
For i n1 , consider
Sei + Si = i,n1 Bn1 +1 + + i,i+n1 2 Bi+n1 1 + Sei + i,i+n2 Bi+n2 +1 +
+ i,n2 +n1 1 Bn2 +n1
Notice that
Therefore and by virtue of Lemmas 3.4 and 3.5, there exist positive constants
C7 , C8 , C9 such that
C9
We observe that
S1 +
n1
X
Si =: U T1
and T1 + T2 = Y,
i=1
where
provided c+
> 0 or
13
LARGE DEVIATIONS
if c+
= 0. Thus to prove the proposition it suffices to justify the existence
of some positive constants C10 , C11 , C12 such that
C12
(3.14)
x > 1.
For this purpose we use the same argument as in the proof of Lemma 3.4.
First we write
X
P{|U T2 | > x}
P{U n1 +1,n1 +n2 +k |Bn1 +n2 +k+1 | > x/(log x + k)3 }.
k=0
Write = (log x + k)0.5 . Then by Lemma 3.3, Markovs inequality and since
n2 = n0 + m,
P{U n1 +1,n1 +n2 +k |Bn1 +n2 +k+1 | > x/(log x + k)3 }
k=0
/2
cx e(log x)
sup
i1,|ij|2
x > 1.
=: U1 T1 ,
14
=: U2 T2 ,
|S3 | (2n1 +1,3n1 + + 3n1 ,3n1 )
=: U3 T3 .
1/(2)
+ P{T1 xn1
, U1 T1 > x, U2 T2 > x}
1/(2)
cn0.5
+ P{n1
1 x
1/(2)
cn0.5
+ P{U1 > n1
1 x
cn0.5
.
1 x
U1 > 1, U2 T2 > x}
}P{U2 T2 > x}
In the same way we can bound P{|S1 | > t, |S3 | > t}. We omit details.
3.4. Semi-final steps in the proof of the main theorem. In the following
proposition, we combine the various tail bounds derived in the previous
sections. For this reason, recall the definitions of Xi , Si and Zi from (3.8) and
that p1 , p, p3 are the largest integers such that p1 n1 n n1 + 1, pn1 n n2
and p3 n1 n n3 , respectively.
Proposition 3.9. Assume the conditions of Theorem 2.1. In particular,
consider the following x-regions:
1/
(n (log n)M , ),
for (0, 2], M > 2,
n =
0.5
(cn n log n, ),
for > 2, cn ,
and introduce a sequence sn such that esn n and sn = o(n). Then
the following relations hold:
P
P{ pj=1 (Sj cj ) > x}
c+
c
(3.15) +
,
lim sup sup
nP{|Y | > x}
c + c
n xn
Pp
P{ j=1 (Sj cj ) > x}
c+
c
0 = lim
sup
(3.16)
,
n xn ,log xsn
nP{|Y | > x}
c+
+ c
P1
P{| pj=1
(Xj ej )| > x}
0 = lim sup
(3.17)
,
n xn
nP{|Y | > x}
P3
P{| pj=1
(Zj zj )| > x}
(3.18)
,
0 = lim sup
n xn
nP{|Y | > x}
where cj = ej = zj = 0 for 1 and cj = ESj , ej = EXj , zj = EZj for > 1.
LARGE DEVIATIONS
15
2 + 4 < M
and
< 1/(4),
p
\
{|Sj | y},
j=1
2 =
1i<kp
3 =
p
[
k=1
P
Then for A = { pj=1 (Sj cj ) > x},
(3.21)
n xn
We have
I2
1i<kp
Summarizing the above estimates and observing that (3.19) holds, we obtain
for x n ,
16
(3.22)
n xn
For this purpose, we write Sjy = Sj 1{|Sj |y} and notice that ESj = ESjy +
ESj 1{|Sj |>y} . Elementary computations show that
max(,1)
|S1 | n1
(3.23)
Let now > 1/ and n1/ (log n)M x n . Since pn1 n and (3.19) holds,
(3.24)
If x > n , then
and
Hence
(3.25)
Using the bounds (3.24) and (3.25), we see that for x sufficiently large,
)
( p
X
y
y
I1 P (Sj ESj ) > 0.5x
j=1
X
X
+
= P
1jp,j{1,4,7,...}
1jp,j{2,5,8,...}
(3.26)
1jp,j{3,6,9,...}
3P
(Sjy
1jp,j{1,4,7,...}
(Sjy
ESjy ) > x/6
ESjy ) > 0.5x
In the last step, for the ease of presentation, we slightly abused notation
since the number of summands in the 3 partial sums differs by a bounded
number of terms which, however, do not contribute to the asymptotic tail
behavior of I1 . Since the summands S1y , S4y , . . . are i.i.d. and bounded, we
may apply Prokhorovs inequality (A.1) to the random variables Rk = Sky
ES1y in (3.26) with y = x/(log n)2 and Bp = p var(S1y ). Then an = x/(2y) =
LARGE DEVIATIONS
17
Therefore, for x n ,
(3.28)
X
Vk = P A (Sj cj ) z, |Sk | > y, |Si | y, i 6= k
.
P
j6=k
Applying the Prokhorov inequality (A.1) in the same way as in step 1b, we
see that
P{|S| > 0.5z, |Si | y, i 6= k} cnz c(log n)(M )
P{|S1 | > y} c
n1 (y)
n1
c .
y
y
Therefore
sup (x /n1 )Jk c(log n)3M 0.
xn
18
By (3.12), for every small > 0, n sufficiently large and uniformly for x n ,
we have
(3.31) (1 )c
(1 + )c ,
n1 P{Y > x}
n1 P{Y > x}
where we also used that limn (x + z)/x = 1. Then the following upper
bound is immediate from (3.30):
Vk
P{Sk ck > x z}
(1 + )c .
n1 P{Y > y}
n1 P{Y > x}
From (3.29) we have
Vk
P{Sk ck > x + z}
Lk .
n1 P{Y > y}
n1 P{Y > x}
In view of the lower bound in (3.31), the first term on the right-hand side
yields the desired lower bound if we can show that Lk is negligible. Indeed,
we have
[
n1 P{Y > x}Lk = P {Sk ck > x + z} {|S| > z 8y} {|Si | > y}
i6=k
Similar bounds as in the proofs above yield that the right-hand side is of the
order o(n1 /x ), hence Lk = o(1). We omit details. Thus we obtain (3.27).
Step 1d: Final bounds. Now we are ready for the final steps in the proof
of (3.16) and (3.15). Suppose first c+
> 0 and log x sn . In view of the
decomposition (3.20) and steps 1a and 1b we have as n and uniformly
for x n ,
P
P{ pj=1 (Sj cj ) > x}
nP{Y > x}
I3
nP{Y > x}
Pp
Pp
n1 k=1 P{ j=1 (Sj ESj ) > x, |Sk | > y, |Sj | y, j 6= k}
n
n1 P{Y > x}
19
LARGE DEVIATIONS
In view of step 1c, in particular (3.27), the last expression is dominated from
above by (pn1 /n)(1 + )c (1 + )c and from below by
n n2 n1
3sn
n1 p
(1 )c
(1 )c (1 )c 1
.
n
n
n
Letting first n and then 0, (3.15) follows and (3.16) is also satisfied
provided the additional condition limn sn /n = 0 holds.
If c+
= 0, then I3 = o(nP{|Y | > x}). Let now x n and recall the definition of Vk from (3.28). Then for every small and sufficiently large x,
P
P{ pj=1 (Sj cj ) > x}
I3
nP{|Y | > x}
nP{|Y | > x}
Pp
n1
k=1 Vk
n n1 P{|Y | > x}
P{S1 > x(1 ) |c1 |}
,
n1 P{|Y | > x}
xn
sup
By Lemma 3.4,
C3
= o(nx ).
For the estimation of P2 consider the random variables Xjy = Xj 1{|Xj |y}
and proceed as in step 1b.
The case > 2.
The proof is analogous to (1, 2]. We indicate differences in the main
steps.
Step 1b. First we bound the large deviation probabilities of the truncated
sums (it is an analog of step 1b of Proposition 3.9). We assume without loss
,
of generality that cn log n. Our aim now is to prove that for y = xc0.5
n
( p
)
X
x
P (Sj ESj ) > x, |Sj | y for all j p = 0.
(3.32) lim sup
n xn n
j=1
We proceed as in the proof of (3.22) with the same notation Sjy = Sj 1{|Sj |y} .
As in the proof mentioned, p|ESj 1{|Sj |>y} | = o(x) and so we estimate the
20
probability of interest by
(3.33)
I := 3P
(Sjy
1jp,j{1,4,7,...}
ES1y ) > x/6
The summands in the latter sum are independent and therefore one can apply the two-sided version of the FukNagaev inequality (A.3) to the random
y
2
variables in (3.33): with an = x/y = c0.5
n and p var(S1 ) cpn1 cnn1 ,
(1.5+) an
(1 )2 x2
pn1
+ exp
(3.34)
.
I c
xy 1
2e cnn1
We suppose first that x n0.75 . Then the first quantity in (3.34) multiplied
by x /n is dominated by
(c(log n)(1.5+) c0.5(1)
)an (n/x )an 1
n
cn0.5an (1+)+
n
2e cn1
If x > n0.75 then xn0.5 > x for an appropriately small . Then the first
quantity in (3.34) is dominated by
(c(log x)(1.5+) )an cn0.5an (1) (n/x )an 1
cn0.5an (1+)+
exp
x ecx 0,
n
2e cn1
j6=k
21
LARGE DEVIATIONS
In the remainder of the proof one can follow the arguments of the proof in
step 1c for (1, 2].
)
( n
X
ei ) > x
P (Zei EZ
i=1
i=1
(3.36)
)
( n
X
ei ) > x .
(Zei EZ
+P
i=1
i=1
Divide the last two probabilities in the first and last lines by nP{|Y | > x}.
Then these ratios converge to zero for x n , in view of (3.17), (3.18) and
Lemmas 3.4 and 3.5. Now
)
( n
X
(Sei ESei ) > x(1 2)
P
i=pn1 +1
=P
nn
X1
i=pn1 +1
nn
X1
i=pn1 +1
nn
X1
i=pn1 +1
|ESei | ,
22
nn
X1
i=pn1 +1
|ESei | 2n1 C
(n
1
X
i=1
+P
nn
X1
i=pn1 +1
|ESei |
x(1 2) 2n1 C
Sei >
2
( 2n
X1
i=n1
)
x(1 2) 2n1 C
Sei >
2
)
4. Ruin probabilities. In this section we study the ruin probability related to the centered partial sum process Tn = Sn ESn , n 0, that is, for
given u > 0 and > 0 we consider the probability
n
o
(u) = P sup[Tn n] > u .
n1
23
LARGE DEVIATIONS
beke [7] for a special case of subexponential step distribution and Mikosch
and Samorodnitsky [17] for a general regularly varying step distribution)
that
uP{Y > u}
ind (u)
(4.1)
,
u .
( 1)
(We write ind to indicate that we are dealing with i.i.d. steps.) If the step
distribution has exponential moments the ruin probability ind (u) decays
to zero at an exponential rate; see, for example, the celebrated Cramer
Lundberg bound in Embrechts et al. [6], Chapter 2.
It is the main aim of this section to prove the following analog of the
classical ruin bound (4.1):
Theorem 4.1. Assume that the conditions of Theorem 1.1 are satisfied
and additionally B 0 a.s. and there exists > 0 such that EA+ and
EB + are finite, > 1 and c+
> 0. The following asymptotic result for the
ruin probability holds for fixed > 0, as u :
n
o
c
P sup(Sn ESn n) > u
u+1
(
1)
n1
(4.2)
c uP{Y > u}
+
.
c ( 1)
Remark 4.2. We notice that the dependence in the sequence (Yt ) manifests in the constant c /c+
in relation (4.2) which appears in contrast to
the i.i.d. case; see (4.1).
To prove our result we proceed similarly as in the proof of Theorem 2.1.
First notice that in view of (3.9),
n
o
P sup(Y0 n E(Y0 n )) > u P{Y0 > u} = o(u1 ).
n1
c
( 1)
for Sen defined in (3.2). Next we change indices as indicated after (3.3).
However, this time we cannot fix n and therefore we will proceed carefully;
the details will be explained below. Then we further decompose Sen into
smaller pieces and study their asymptotic behavior.
Proof of Theorem 4.1. The following lemma shows that the centered
sums (Sen ESen )nuM for large M do not contribute to the asymptotic
behavior of the ruin probability as u .
24
Lemma 4.3.
M u
n>uM
n
o X
e
e
P sup (Sn ESn n) > u
pl ,
n>uM
l=0
where pl = P{maxn[Nl ,Nl+1 ) (Sen ESen n) > u}. For every fixed l, in the
events above we make the change of indices i j = Nl+1 i and write, again
abusing notation,
Yej = Bj + jj Bj+1 + + j,Nl+12 BNl+1 1 .
max
n(0,Nl ]
Nl+1 1
X
i=n
P
P
(Nl+1 1
X
i=Nl
(Nl+1 1
X
i=1
i=n
fi EW
fi ) +
(W
N
l 1
X
i=1
fi > u/4 + Nl /4
W
fi EW
fi ) > u/4 + Nl (/4 E W
f1 )
(W
cNl+1 Nl c(uM 2l )1 .
We conclude that for every M > 0,
X
l=0
pl1 = o(u1 )
as u .
ei = X
ei + Sei + Zei , making the definiAs in (3.7) we further decompose U
tions precise in what follows. Let p be the smallest integer such that pn1
Nl+1 1 for n1 = n1 (u). For i = 1, . . . , p 1 define Xi as in (3.8), and
25
LARGE DEVIATIONS
Xp =
PNl+1 1
e Now consider
i=(p1)n1 +1 Xi .
pl2 = P
max
n(0,Nl ]
(Nl+1 1
X
i=Nl
i=n
n(0,Nl ]
P max
kp/2
ei EX
ei /4) > u/4
(X
ei EX
ei )
(X
+ max
(
Nl+1 1
p
X
i=k
+ P max
kp/2
N
l 1
X
i=n
ei EX
ei ) > u/4 + Nl /4
(X
)
p
X
i=k
max max
kp/2 1j<n1
= pl21 + pl22 .
kn1
X
i=kn1 j
ei EX
ei ) > u/8 + Nl /8
(X
The second quantity is estimated by using Lemma 3.4 as follows for fixed
M >0
(n
)
1
X
ei > u/8 + Nl /8 C1 pN eC2 (log Nl )C3
pl22 cpP
X
l
i=1
= o(u
)2(1)l ,
X
pl22 = o(u1 )
as u .
l=0
Next we treat pl21 . We observe that Xi and Xj are independent for |ij| > 1.
Splitting summation in pl21 into the subsets of even and odd integers, we
obtain an estimate of the following type
X
pl21 c1 P max
(X2i EX2i ) > c2 (u + Nl ) ,
kp/2
k2ip
where the summands are now independent. By the law of large numbers, for
any (0, 1), k p/2, large l,
X
(X2i EX2i ) > c2 (u + Nl ) 1/2.
P
2ik
26
Using the latter bound and summarizing the above estimates, we finally
proved that
lim lim sup u1
M u
pl2 = 0.
l=0
Similar arguments show that the sums involving the Sei s and Zei s are negligible as well. This proves the lemma.
In view of Lemma 4.3 it suffices to study the following probabilities for
sufficiently large M > 0:
n
o
P max (Sen ESen n) > u .
nM u
Then we decompose Yej as in the proof of Lemma 4.3. Reasoning in the same
fi ,
way as above, one proves that the probabilities related to the quantities W
1
e
e
Xi and Zi are of lower order than u
as u and, thus, it remains to
study the probabilities
)
(
N0
X
(4.3)
(Sei ESei ) > u ,
P max
nN0
i=n
(p+1)n1
(Sei ESei )
i=(k1)n1 +1
p
X
(Si ESi n1 )
i=k1
27
LARGE DEVIATIONS
and
pn1
N0
X
X
(Sei ESei ) 2n1 (ESe1 + ) +
(Sei ESei )
i=n
i=kn1
2n1 (ESe1 + ) +
p1
X
(Si ESi n1 ).
i=k
Choose q
variables
= [M/+1
]+1
1
Rk =
i=n
kq3
X
Si ,
i=(k1)q
k = 1, . . . , r = p/q,
(Si ESi n1 ) > u(1 31 )
niqr
i6=kq2,kq1
)
r
X
+ P max
(Skq2 ESkq2 n1 ) > 1 u
(
jr
k=j
)
r
X
+ P max
(Skq1 ESkq1 n1 ) > 1 u
+P
jr
k=j
max
qr<n<p
(i)
p (u),
p
X
i=n
=:
4
X
p(i) (u).
i=1
The quantities
i = 2, 3, can be estimated in the same way; we fo(2)
cus on p (u). Applying Petrovs inequality (A.4) and Proposition 3.9, we
obtain for some constant c not depending on 1 ,
(
)
r
X
(2)
p (u) P max
(Skq2 ESkq2 ) > 1 u
jr
k=j
( r
)
X
cP
(Skq2 ESkq2 ) > 1 u/2
k=1
crn1 (1 u) c1 u+1 .
28
(1 u)
r(1 u)
Since r q > M/+1
for large u we conclude for such u that r 1 M 1 +1
1
1
(4)
(4)
and therefore p (u) c1 u1 and lim1 0 lim supu u1 p (u) = 0.
Since A and B are nonnegative we have for large u with 0 = (q 2),
)
(
j
X
(Ri ERi 0 n1 ) > u(1 31 ) qn1 (ES1 + )
p(1) (u) P max
jr
i=1
)
j
X
(Ri ERi 0 n1 ) > u(1 41 ) .
P max
(
jr
i=1
Combining the bounds above we proved for large u, small 1 and some
constant c > 0 independent of 1 that
)
(
j
X
(Ri ERi 0 n1 ) > u(1 41 ) + c1 u+1 .
p (u) P max
jr
i=1
i=1
Thus we reduced the problem to study an expression consisting of independent random variables Ri and the proof of the theorem is finished if we can
show the following result. Write
)
(
j
X
(Ri ERi n1 0 ) > u .
r = max
jr
Lemma 4.4.
i=1
(4.4)
1 + 0 .
1 0 u
n1 (k j)C0
Choose , > 0 small to be determined later in dependence on 0 . Eventually,
both , > 0 will become arbitrarily small when 0 converges to zero. Define
29
LARGE DEVIATIONS
i=kl +1
c
Dm
,
l = 0, . . . , l0 1.
m6=l
(4.5)
),
P{W
}
r
l o(u
l=0
Sl0 1
l=0
Dl . Indeed, on
u .
Tl0 1
l=0
Dlc we have
j
X
(Ri ERi n1 0 ) l0 2u 0 u,
max
jr
i=1
Then (4.5) will follow. First apply Petrovs inequality (A.4) to P{Dl } with
q0 arbitrarily close to one and power p0 (1, ). Notice that E|Ri |p0 is of the
order qn1 , hence mp0 is of the order qu cM 1
u. Next apply (4.4).
1
Then one obtains for sufficiently large u, and small , , and some constant
c depending on , , 0 , 1 ,
)
( kl+1
X
1
(Ri ERi ) > u
P{Dl } q0 P
i=kl +1
q01 n1 (kl+1
S
Hence P{ m6=l (Dl Dm )} = O(u2(1) ) as desired for (4.6) if all the parameters , , 0 , 1 are fixed.
Thus we showed (4.5) and it remains to find suitable bounds for the
probabilities P{Wl }. On the set Wl we have
max
jkl
j
X
i=1
j
X
(Ri ERi ) 2lu 0 u,
i=1
30
max
kl+1 <jr
i=kl +1
j
X
i=1
(Ri ERi 0 n1 )
j
X
= max
kl <jr
i=1
kl
X
i=1
(Ri ERi 0 n1 )
(Ri ERi 0 n1 ) +
j
X
+ max
kl+1 <jr
max
kl <jkl+1
j
X
(Ri ERi )
i=kl +1
(Ri ERi )
i=kl+1 +1
20 u kl 0 n1 +
max
kl <jkl+1
j
X
i=kl +1
(Ri ERi ).
Petrovs inequality (A.4) and (4.4) imply for l 1 and large u that
(
)
j
X
P{Wl } P
max
(Ri ERi ) (1 20 )u + 0 n1 kl
kl <jkl+1
q01 P
q01
= q01
i=kl +1
kl+1
i=kl +1
(Ri ERi ) (1 30 )u + 0 n1 kl
(1 + )l1 (1 + 0 )C0
u1 .
((1 30 ) + 0 (1 + )l1 )
For l = 0 we use exactly the same arguments, but in this case (k1 k0 )n1 = u
and k0 = 0. Thus we arrive at the upper bound
lX
0 1
l=0
P{Wl } q01 (1 + 0 )
C0
(4.7)
=
!
lX
0 1
(1 + )l1
+
u1
(1 30 )
((1 30 ) + 0 (1 + )l1 )
i=1
1
q0 (1 + 0 )A(, , 0 , l0 )u1 .
31
LARGE DEVIATIONS
i=1
i=1
kl+1
i=kl +1
kl+1
i=kl +1
m6=l
(kl+1 1
)
X
[
P
(Ri ERi ) > (1 + 0 )u + 0 n1 kl+1 P Dl
Dm
i=kl
m6=l
(1 20 )C0 (1 + )l1 1
(kl+1 kl )n1 (1 0 )C0
1
o(u
)
u
.
((1 + 0 )u + 0 kl+1 n1 )
((1 + 0 ) + 0 (1 + )l )
Hence
lX
0 1
l=0
P{Wl } (1 20 )
C0
!
lX
0 1
(1 + )l1
+
u1
(1 + 0 + 0 )
((1 + 0 ) + 0 (1 + )l )
l=1
(1 20 )B(, , 0 , l0 )
(4.8)
lX
0 1
l=0
P{Wl }
lX
0 1
l=0
P{Wl }
q01 (1 + 0 )A(, , 0 , l0 ).
Finally, we will justify that the upper and lower bounds are close for small
, , 0 , large M and q0 close to 1. For a real number s which is small in
32
on [, M ].
lX
0 1
l=1
f30 (x) dx =
F30 () F30 (M ) + 0
.
0 ( 1)
q0 1 0 0 M 0
In this section, we consider a sequence (Xn ) of independent random variables and their partial
P sums Rn = X1 + + Xn . We always write Bn =
var(Rn ) and mp = nj=1 E|Xj |p for p > 0. First, we collect some of the classical tail estimates for Rn .
Lemma A.1. The following inequalities hold.
Prokhorovs inequality (cf. Petrov [21], page 77): Assume that the Xn s
are centered, |Xn | y for all n 1 and some y > 0. Then
xy
x
(A.1)
,
x > 0.
P{Rn x} exp arsinh
2y
2Bn
S. V. Nagaevs inequality (see [20]): Assume mp < for some p > 0.
Then
n
X
emp x/y
(A.2)
,
x, y > 0.
P{Xj > y} +
P{Rn > x}
xy p1
j=1
LARGE DEVIATIONS
33
FukNagaev inequality (cf. Petrov [21], page 78): Assume that the Xn s
are centered, p > 2, = p/(p + 2) and mp < . Then
x/y
n
X
mp
P{Xj > y} +
P{Rn > x}
xy p1
j=1
(A.3)
(1 )2 x2
,
x, y > 0.
+ exp
2ep Bn
Petrovs inequality (cf. Petrov [21], page 81): Assume that the Xn s are
centered and mp < for some p (1, 2]. Then for every q0 < 1, with L = 1
for p = 2 and L = 2 for p (1, 2),
n
o
(A.4) P max Ri > x q01 P{Rn > x [(L/(1 q0 ))1 mp ]1/p },
x R.
in
34
lim
sup
n rn kn ,bn x
1
k1 Ek 1{k >x} = 0.
k
X
+
(EA+ )ki+1 (E(1 + i1 )+ E(i1
))
i=1
k
X
(EA+ )k
.
(EA+ )ki+1 c
EA+ 1
i=1
Indeed, we will prove that if < 1 then there is a constant c such that for
i 1,
E(1 + i )+ Ei+ c.
en1 / klog x/
(EA+ )k
x
c
Ek 1{k >x} c
EA+ 1
EA+ 1
em/ k
c
.
EA+ 1
In the last step we used that k n1 = n0 m, where n0 = [1 log x]. Moreover, since m = [(log x)0.5+ ], m/ k 2c1 (log x) for some c1 > 0. On the
other hand, setting = = k0.5 in (3.11), we obtain EA+ 1 k0.5 /2.
Combining the bounds above, we finally arrive at
sup
rn kn1 ,bn x
for constants c, c1 > 0. This estimate yields the desired relation (B.1) and
thus completes the proof of the first part of the lemma when c+
> 0.
LARGE DEVIATIONS
35
If c+
= 0 we proceed in the same way, observing that for any , z > 0 and
sufficiently large x,
P{Y > x/z}
< z
P{|Y | > x}
and hence I1 converges to 0 as n goes to infinity. This proves the lemma.
Acknowledgments. We would like to thank the referee whose comments
helped us to improve the presentation of this paper.
REFERENCES
[1] Bartkiewicz, K., Jakubowski, A., Mikosch, T. and Wintenberger, O. (2011).
Stable limits for sums of dependent infinite variance random variables. Probab.
Theory Related Fields 150 337372. MR2824860
[2] Cline, D. B. H. and Hsing, T. (1998). Large deviation probabilities for sums of
random variables with heavy or subexponential tails. Technical report, Texas
A&M Univ, College Station, TX.
[3] Davis, R. A. and Hsing, T. (1995). Point process and partial sum convergence
for weakly dependent random variables with infinite variance. Ann. Probab. 23
879917. MR1334176
[4] de Haan, L., Resnick, S. I., Rootz
en, H. and de Vries, C. G. (1989). Extremal
behaviour of solutions to a stochastic difference equation with applications to
ARCH processes. Stochastic Process. Appl. 32 213224. MR1014450
[5] Denisov, D., Dieker, A. B. and Shneer, V. (2008). Large deviations for random
walks under subexponentiality: The big-jump domain. Ann. Probab. 36 1946
1991. MR2440928
ppelberg, C. and Mikosch, T. (1997). Modelling Extremal
[6] Embrechts, P., Klu
Events: For Insurance and Finance. Applications of Mathematics (New York)
33. Springer, Berlin. MR1458613
[7] Embrechts, P. and Veraverbeke, N. (1982). Estimates for the probability of
ruin with special emphasis on the possibility of large claims. Insurance Math.
Econom. 1 5572. MR0652832
[8] Gantert, N. (2000). A note on logarithmic tail asymptotics and mixing. Statist.
Probab. Lett. 49 113118. MR1790159
[9] Gnedenko, B. (1943). Sur la distribution limite du terme maximum dune serie
aleatoire. Ann. of Math. (2) 44 423453. MR0008655
[10] Goldie, C. M. (1991). Implicit renewal theory and tails of solutions of random
equations. Ann. Appl. Probab. 1 126166. MR1097468
[11] Hult, H., Lindskog, F., Mikosch, T. and Samorodnitsky, G. (2005). Functional
large deviations for multivariate regularly varying random walks. Ann. Appl.
Probab. 15 26512680. MR2187307
[12] Jakubowski, A. (1993). Minimal conditions in p-stable limit theorems. Stochastic
Process. Appl. 44 291327. MR1200412
[13] Jakubowski, A. (1997). Minimal conditions in p-stable limit theorems. II. Stochastic
Process. Appl. 68 120. MR1454576
[14] Jessen, A. H. and Mikosch, T. (2006). Regularly varying functions. Publ. Inst.
Math. (Beograd) (N.S.) 80 171192. MR2281913
36
[15] Kesten, H. (1973). Random difference equations and renewal theory for products of
random matrices. Acta Math. 131 207248. MR0440724
[16] Konstantinides, D. G. and Mikosch, T. (2005). Large deviations and ruin probabilities for solutions to stochastic recurrence equations with heavy-tailed innovations. Ann. Probab. 33 19922035. MR2165585
[17] Mikosch, T. and Samorodnitsky, G. (2000). The supremum of a negative drift
random walk with dependent heavy-tailed steps. Ann. Appl. Probab. 10 1025
1064. MR1789987
[18] Mikosch, T. and Wintenberger, O. (2011). Precise large deviations for dependent
regularly varying sequences. Unpublished manuscript.
[19] Nagaev, A. V. (1969). Integral limit theorems for large deviations when Cramers
condition is not fulfilled I, II. Theory Probab. Appl. 14 5164; 193208.
[20] Nagaev, S. V. (1979). Large deviations of sums of independent random variables.
Ann. Probab. 7 745789. MR0542129
[21] Petrov, V. V. (1995). Limit Theorems of Probability Theory: Sequences of Independent Random Variables. Oxford Studies in Probability 4. Oxford Univ. Press,
New York. MR1353441
D. Buraczewski
E. Damek
J. Zienkiewicz
Instytut Matematyczny
Uniwersytet Wroclawski
50-384 Wroclaw
pl. Grunwaldzki 2/4
Poland
E-mail: dbura@math.uni.wroc.pl
edamek@math.uni.wroc.pl
zenek@math.uni.wroc.pl
T. Mikosch
University of Copenhagen
Universitetsparken 5
DK-2100 Copenhagen
Denmark
E-mail: mikosch@math.ku.dk