Quantization of Gaussian measures with Rényi-α-entropy constraints

arXiv:1212.6530v2 [math.PR] 21 Jun 2013

Wolfgang Kreitmeier ∗
Abstract

We consider the optimal quantization problem with Rényi-α-entropy constraints for centered Gaussian measures on a separable Banach space. For α = ∞ we can compute the optimal quantization error by a moment on a ball. For α ∈ ]1, ∞] and large entropy bound we derive sharp asymptotics for the optimal quantization error in terms of the small ball probability of the Gaussian measure. We apply our results to several classes of Gaussian measures. The asymptotic order of the optimal quantization error for α > 1 differs from the well-known cases α = 0 and α = 1.

We use the conventions 0 · log(0) := 0 and 0^x := 0 for all real x. The logarithm log is taken to base e.
∗ Department of Informatics and Mathematics,
if card denotes cardinality. Using l'Hospital's rule it is easy to see that
and α ∈ {0, 1}. In the case α = 0 the reader is referred to [24, Section 5.2]. To
the author's knowledge, the uniform and the exponential distribution are the
only examples for α = 1 where an exact determination of the optimal quantization
error has been carried out so far. György and Linder [28] have determined a
parametric representation of (1) for the uniform distribution and a large class of
distance functions ρ which includes (2). Berger [6] has derived, in the case α = 1
and r = 2, an analytical representation for the optimal quantization error of
the exponential distribution. For the class (2) of distance functions the author
[29] was able to generalize the results of György and Linder [28] to the case
α ∈ [0, ∞].
Due to the difficulty of determining the optimal quantization error exactly, one
is interested in asymptotics of the error for large entropy bounds. In the case
α ∈ {0, 1} and finite dimension, the asymptotic behaviour of the optimal
quantization error is well known for a large class of distributions, see e.g. [24,
27]. Kreitmeier and Linder [32] have also derived sharp asymptotics for a large
class of one-dimensional distributions and α ∈ [0, ∞]. Moreover, the author
[30] has determined first-order asymptotics for the optimal quantization error
in arbitrary finite dimension and α ∈ [0, ∞], where the class of distributions is
larger than the one in [32].
This paper aims to determine asymptotics for the optimal quantization error
(1) in the infinite dimensional case. To this end we will assume for the rest of
this paper that (E, ‖·‖) is of infinite dimension. Moreover, we restrict ourselves
to Gaussian measures. In more detail we will assume from now on that µ is
a non-atomic centered Gaussian measure on E and the support of µ coincides
with E. The restriction to Gaussian measures is motivated by different reasons.
First, this class of distributions has been extensively studied in the past. In the
proofs of this paper we especially use concentration inequalities (cf. [8]) and
small ball asymptotics (see e.g. [5, 9, 10, 21, 34, 39, 43, 44]). Secondly, for
distance functions of type (2) and α ∈ {0, 1}, the asymptotic order of D^α_{µ,ρ}(R)
for large R has already been determined for several classes of Gaussian measures.
Dereich et al. [13] have determined asymptotics for (1) in case of α = 0 and
for distance functions of type (2). Their results require weak conditions on
the regular variation of the small ball asymptotics of the Gaussian measure.
Graf, Luschgy and Pagès [25] have additionally shown for α = 0 and restriction
(2) that one can determine the small ball asymptotics from the asymptotics
of the optimal quantization error (1) if the asymptotics of (1) satisfy certain
regularity conditions. Luschgy and Pagès [36] have determined sharp error
asymptotics for α = 0 and distance function ρ(x) = x^2. They imposed a
condition on the regularity of the eigenvalues of the covariance operator of µ.
In this situation, the sharp error asymptotics for α = 0 and α = 1 also coincide,
cf. [26]. Dereich and Scheutzow [14] have shown for fractional Brownian motion
that sharp asymptotics of (1) for large R exist and also coincide for α ∈ {0, 1}.
According to these cited works, the asymptotics for α = 0 and α = 1 are of the
same order and, in view of Remark 1.2, even for all α ∈ [0, 1].
The objective of this paper is to analyze the optimal quantization error for
α > 1. In Section 2 we determine in the case α = ∞ ('mass-constrained quantization')
a representation of the optimal quantization error by a moment on a
ball (cf. Proposition 2.3). The proof of this result is a straightforward general-
ization of the techniques used in the proof of [30, Proposition 2.1.]. In Section
3, for a large class of Gaussian measures where sharp asymptotics for the small
ball probability are known, we can determine sharp asymptotics for the optimal
quantization error with entropy parameter α > 1 (cf. Corollary 3.12, Theorem
3.15). The cornerstone of our approach is covered by Proposition 3.10. For dis-
tance functions of type (2) we obtain a representation of the sharp asymptotics
for D^∞_{µ,ρ} in terms of the inverse of the small ball function (cf. definition (6)).
The condition imposed (cf. (10)) on the small ball asymptotics is satisfied by
most prevalent Gaussian measures. For those distributions we are then able
to also derive sharp asymptotics for all α > 1, cf. Corollary 3.16. In Section
4 we discuss several examples of Gaussian processes in order to determine the
asymptotic order of the optimal quantization error for large entropy bound
and α > 1. The asymptotics of the optimal quantization error for α > 1 turn
out to be of a different order compared to the case α ≤ 1.
R^+ = {x ∈ R : x > 0} and R_0^+ = {x ∈ R : x ≥ 0}.
As a key tool we will use Anderson’s inequality [2] as stated in reference [8].
Theorem 2.1 ([8, Corollary 4.2.3]). If A is a convex, symmetric and Borel-
measurable subset of E, then for every a ∈ E
Proof.
1. ∫_A ρ(‖x − a‖) dµ(x) ≥ ∫_{B(a,l)} ρ(‖x − a‖) dµ(x) with µ(B(a, l)) = µ(A).
By the properties of the mapping Fa an l > 0 exists with µ(B(a, l)) = µ(A).
The remaining part of the proof can be taken from the proof of [24, Lemma 2.8].
Although [24, Lemma 2.8] covers only the special case ρ(x) = xr , the argument
works also for general ρ.
2. ∫_{B(a,l)} ρ(‖x − a‖) dµ(x) ≥ ∫_{B(0,s)} ρ(‖x‖) dµ(x).
Let
E ∋ x ↦ f(x) = 1_{B(0,s)}(x) ρ(‖x‖).
By Theorem 2.1 an s > 0 exists such that
µ(B(0, s)) = µ(B(a, l)) ≤ µ(B(0, l)),
which yields s ≤ l. For every c ∈ [0, ∞[ the set
B(0, s) ∩ {f ≤ c} = {x ∈ B(0, s) : ρ(‖x‖) ≤ c} = B(0, min(s, ρ^{-1}(c)))
is symmetric and convex. Moreover f (· + t(−a)) is µ-integrable for every t ≥ 0.
Because the support of f is a subset of B(0, s) we obtain from Theorem 2.1 that
∫_{B(a,l)} ρ(‖x − a‖) dµ(x) ≥ ∫_{B(a,s)} ρ(‖x − a‖) dµ(x)
  = ∫_E f(x − a) dµ(x) ≥ ∫_E f(x) dµ(x)
  = ∫_{B(0,s)} ρ(‖x‖) dµ(x).
Proof. Let R > 0. From the definition (1) of D^∞_{µ,ρ}(R) we obtain
D^∞_{µ,ρ}(R) = inf{ ∫ ρ(‖x − f(x)‖) dµ(x) : f ∈ F, max_{a∈f(E)} µ(f^{-1}(a)) ≥ e^{-R} }.
Moreover, let
D(R) = inf{ ∫_A ρ(‖x − a‖) dµ(x) : a ∈ E, A measurable, µ(A) ≥ e^{-R} }.  (5)
1. D^∞_{µ,ρ}(R) ≥ D(R).
Let f ∈ F with max_{b∈f(E)} µ(f^{-1}(b)) ≥ e^{-R}. Then an a ∈ f(E) exists with
µ(f^{-1}(a)) ≥ e^{-R}. Let A = f^{-1}(a). We obtain
∫ ρ(‖x − f(x)‖) dµ(x) = Σ_{b∈f(E)} ∫_{f^{-1}(b)} ρ(‖x − b‖) dµ(x)
  ≥ ∫_{f^{-1}(a)} ρ(‖x − a‖) dµ(x)
  = ∫_A ρ(‖x − a‖) dµ(x) ≥ D(R),
which yields D^∞_{µ,ρ}(R) ≥ D(R).
2. D^∞_{µ,ρ}(R) ≤ D(R).
Let A ⊂ E be measurable with µ(A) ≥ e−R and choose a ∈ E. Let ε > 0.
Because ρ(0) = 0 and ρ is continuous, a δ > 0 exists such that for every t ∈ [0, δ]
we have ρ(t) ≤ ε. Let (xn )n∈N be dense in E. Then (B(xn , δ))n∈N is an open
cover of E. Hence a Borel-measurable partition (An )n∈N of E\A exists, such
that An ⊂ B(xn , δ) for every n ∈ N. Now we define the mapping f : E → E by
f(x) = a  if x ∈ A,  and  f(x) = x_n  if x ∈ A_n.
be the small ball function of µ. Note that bµ (·) is continuous, surjective, strictly
decreasing and, therefore, invertible (see e.g. [12, Lemma 2.3.5]). Thus we
obtain as an immediate consequence of Proposition 2.3 the following result.
Corollary 2.4. Let R > 0. Then D^∞_{µ,ρ}(R) ≤ ρ(b_µ^{-1}(R)) e^{-R}.
and f ≳ g as x → a if
1 ≤ lim inf_{x→a} f(x)/g(x).
We write f ⪯ g as x → a if lim sup_{x→a} f(x)/g(x) < ∞, and f ≲ g as x → a if
lim sup_{x→a} f(x)/g(x) ≤ 1.
then
b_µ^{-1}(R) ∼ c^{1/a} a^{-b/a} R^{-1/a} (log(R))^{b/a}  as R → ∞.
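Lemma 3.3 can be sanity-checked numerically: under the model hypothesis b_µ(s) = c s^{-a} (log(1/s))^b (the parameters c, a, b below are arbitrary illustrative choices, not values from the paper), one inverts b_µ by bisection and compares with the asymptotic formula. The sketch works in logarithms to avoid overflow; since the convergence is only logarithmic in R, the ratio approaches 1 very slowly.

```python
import math

# Model small ball function b_mu(s) = c * s**(-a) * log(1/s)**b on ]0, 1[;
# c, a, b are arbitrary illustrative parameters, not values from the paper.
c, a, b = 2.0, 2.0, 3.0

def log_b_mu(u):
    """log(b_mu(e^u)) for u < 0; working in logarithms avoids overflow."""
    return math.log(c) - a * u + b * math.log(-u)

def b_mu_inverse(R, iters=200):
    """Solve b_mu(s) = R by bisection on u = log(s); b_mu is strictly
    decreasing on ]0, 1[, so log_b_mu is strictly decreasing in u."""
    lo, hi = -700.0, -1e-9
    target = math.log(R)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if log_b_mu(mid) > target:
            lo = mid      # s still too small
        else:
            hi = mid
    return math.exp(0.5 * (lo + hi))

def lemma_3_3(R):
    # predicted b_mu^{-1}(R) ~ c^{1/a} a^{-b/a} R^{-1/a} (log R)^{b/a}
    return c**(1 / a) * a**(-b / a) * R**(-1 / a) * math.log(R)**(b / a)

R = 1e300   # convergence is only logarithmic in R, hence the huge value
print(lemma_3_3(R) / b_mu_inverse(R))   # close to 1 (about 1.04 here)
```

With b = 0 the inversion is exact and the ratio is 1 for every R; the logarithmic factor is what makes the approach to 1 so slow.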
Remark 3.4. It is also easy to check that
implies that
b_µ^{-1}(R) ≈ R^{-1/a} (log(R))^{b/a}  as R → ∞.
Remark 3.5. According to the separability of the support of µ and the finite
r-th moment (cf. Remark 3.2) we have
lim_{R→∞} D^0_{µ,r}(R) = 0.

lim_{R→∞} D_{µ,r}(f_R) / D^α_{µ,r}(R) = 1.  (7)
Remark 3.7. An asymptotically α-optimal family (f_R)_{R>0} always exists. Moreover,
we can assume w.l.o.g. that card(f_R(E)) < ∞ for every R > 0. Let us
justify this. First, we note that D^α_{µ,r}(R) > 0 for every R > 0, because µ is a
non-degenerate Gaussian measure. Hence, the left-hand side of (7) is well-defined.
Clearly, for every R ≥ 0 we can define a quantizer f ∈ F with H^α_µ(f) = 0 ≤ R
(choose a ∈ E and let f(x) = a for every x ∈ E). Thus for every R ≥ 0 a
sequence (f^n_R)_{n∈N} of quantizers exists with H^α_µ(f^n_R) ≤ R and D_{µ,r}(f^n_R) → D^α_{µ,r}(R)
as n → ∞. For every R > 0 choose n_0(R) such that
|D_{µ,r}(f^{n_0(R)}_R) − D^α_{µ,r}(R)| ≤ ε_R  with  ε_R = R^{-1} D^α_{µ,r}(R).
Consequently, (f^{n_0(R)}_R)_{R>0} is an asymptotically α-optimal family, i.e. such a
family always exists.
Now let (f_R)_{R>0} be an asymptotically α-optimal family, let ε_R be as above and
a_0 ∈ f_R(E). Choose A ⊂ f_R(E)\{a_0} such that card(f_R(E)\A) < ∞,
Σ_{a∈A} ∫_{f_R^{-1}(a)} ‖x − a‖^r dµ(x) ≤ ε_R/2  and  ∫_{∪_{a∈A} f_R^{-1}(a)} ‖x − a_0‖^r dµ(x) ≤ ε_R/2.
we obtain
|D_{µ,r}(f_R) − D_{µ,r}(g_R)| ≤ ε_R.
Moreover, H^α_µ(g_R) ≤ H^α_µ(f_R) according to [31, Proposition 4.2] and [31, Definition 2.1(b)].
Thus, (g_R)_{R>0} is also asymptotically α-optimal with card(g_R(E)) < ∞
for every R > 0.
Lemma 3.8. Let α ∈ ]1, ∞[ and (f_R)_{R>0} ⊂ F be an asymptotically α-optimal
family. Then lim_{R→∞} H^α_µ(f_R) = ∞, or, equivalently,
lim_{R→∞} ( Σ_{a∈f_R(E)} µ(f_R^{-1}(a))^α )^{1/α} = 0.
Proof. Let C > 0 and (R_n)_{n∈N} be a sequence with lim_{n→∞} R_n = ∞. We will
show that lim inf_{n→∞} H^α_µ(f_{R_n}) ≥ C. Let us assume the contrary. Hence there
exists a subsequence of (f_{R_n})_{n∈N}, which we will also denote by (f_{R_n})_{n∈N}, such
that H^α_µ(f_{R_n}) < C for every n ∈ N. By definition (1) of the optimal quantization
error and because the support of µ is infinite we obtain
lim inf_{n→∞} D_{µ,r}(f_{R_n}) ≥ D^α_{µ,r}(C) > 0,  (8)
which contradicts (8). Thus we obtain lim inf_{n→∞} H^α_µ(f_{R_n}) ≥ C. Because
C > 0 and (R_n)_{n∈N} was arbitrary, we get lim_{R→∞} H^α_µ(f_R) = ∞. Now the
assertion follows immediately from the definition of H^α_µ(f_R).
Proof. Let η ∈ ]0, 1[ and s ∈ ]0, ∞[ with s = b_µ^{-1}(R), i.e. µ(B(0, s)) = e^{-R}.
Proposition 2.3 implies
D^∞_{µ,r}(R) / ((b_µ^{-1}(R))^r e^{-R}) = (1/(µ(B(0, s)) s^r)) ∫_{B(0,s)} ‖x‖^r dµ(x)
  ≥ (1/(µ(B(0, s)) s^r)) ∫_{B(0,s)\B(0,ηs)} ‖x‖^r dµ(x)
  ≥ η^r (1 − µ(B(0, ηs))/µ(B(0, s))).
then
D^∞_{µ,r}(R) ∼ (b_µ^{-1}(R))^r e^{-R} ∼ (c^{1/a} a^{-b/a} R^{-1/a} (log(R))^{b/a})^r e^{-R}  as R → ∞.
Proof. From [7, p. 16] we obtain that [1, ∞[ ∋ x ↦ c(log(x))^b is a slowly varying
function. Let η ∈ ]0, 1[. Applying [7, Theorem 1.4.1] we deduce from [7,
Definition (1.4.3)] that
1 < η^{-a} = lim_{x→∞} b_µ(1/(η^{-1}x)) / b_µ(1/x) = lim_{s→0} b_µ(ηs) / b_µ(s).
Thus we obtain
lim_{s→0} µ(B(0, ηs)) / µ(B(0, s)) = lim_{s→0} e^{b_µ(s)(1 − b_µ(ηs)/b_µ(s))} = 0.
Hence the first part of the assertion follows from Proposition 3.10. The second
part is a direct consequence of Lemma 3.3.
Before we can state and prove our main result (Theorem 3.15) we need two
more technical lemmas.
Lemma 3.13. Let α > 1, A > 0 and B ≥ 0. Let f : [0, 1/e[ → R with
f(x) = 0 if x = 0, and f(x) = x (log(1/x))^{-A} (log(log(1/x)))^B if x ∈ ]0, 1/e[.
Then f is continuous on [0, 1/e[, continuously differentiable on ]0, 1/e[ and
monotone increasing on [0, e^{-e}[. Moreover, an x_0 ∈ ]0, 1/e[ exists such that the
mapping ]0, x_0[ ∋ x ↦ F(x) = x^{1-α} f'(x) is monotone decreasing.
Proof. Let z > 1 and g(z) = z f(1/z). Thus g is slowly varying (cf. [7, Examples
p. 16]). Applying [7, Proposition 1.3.6 (v)] we obtain
lim_{x→0} f(x) = lim_{z→∞} (1/z) g(z) = 0.
Thus f is continuous on [0, 1[. Now let x ∈ ]0, 1[ and z = 1/x. We calculate
x^{1-α} f'(x) = z^{α-1} h_1(z) · h_2(z)
with
h_1(z) = (log(log(z)))^{B-1} (log(z))^{-A-1}
and
h_2(z) = log(log(z)) log(z) + A log(log(z)) − B.
Because f'(x) = h_1(1/x) h_2(1/x) > 0 for every x ∈ ]0, e^{-e}[, we obtain that f is
monotone increasing on [0, e^{-e}[.
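The monotonicity claims of Lemma 3.13 can be checked numerically on a grid, here with arbitrarily chosen parameters A, B and α and a central finite-difference approximation of f'; this is a sketch, not a proof.

```python
import math

# Lemma 3.13's function; A, B, alpha are arbitrary illustrative choices.
A, B, alpha = 1.5, 2.0, 3.0

def f(x):
    if x == 0.0:
        return 0.0
    L = math.log(1.0 / x)              # L > 1 on ]0, 1/e[
    return x * L**(-A) * math.log(L)**B

def fprime(x, h=1e-9):
    # central finite difference, accurate enough at this scale
    return (f(x + h) - f(x - h)) / (2.0 * h)

xs = [i * 1e-4 for i in range(1, 650)]  # grid inside ]0, e^{-e}[ (e^{-e} ≈ 0.066)

# f is monotone increasing on [0, e^{-e}[ ...
assert all(f(x1) < f(x2) for x1, x2 in zip(xs, xs[1:]))

# ... and F(x) = x^{1-alpha} f'(x) is monotone decreasing near 0.
Fs = [x**(1.0 - alpha) * fprime(x) for x in xs]
assert all(F1 > F2 for F1, F2 in zip(Fs, Fs[1:]))

print("Lemma 3.13 monotonicity claims hold on the test grid")
```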
Lemma 3.14. Let α ∈ ]1, ∞[. Let x_0 > 0 and f : [0, x_0] → R_0^+ be continuous
with f(0) = 0. If f is continuously differentiable on ]0, x_0[ and the mapping
]0, x_0[ ∋ x ↦ F(x) = x^{1-α} f'(x) is monotone decreasing, then
f(x_0) ≤ inf{ Σ_{i=1}^n f(x_i) : n ∈ N, (x_1, .., x_n) ∈ [0, x_0]^n, Σ_{i=1}^n x_i = 1, Σ_{i=1}^n x_i^α ≥ x_0^α }.
Thus we obtain
inf{ Σ_{i=1}^n f(x_i) : n ∈ N, (x_1, .., x_n) ∈ [0, x_0]^n, Σ_{i=1}^n x_i = 1, Σ_{i=1}^n x_i^α ≥ x_0^α }
  ≥ inf{ Σ_{i=1}^n g(x_i) : n ∈ N, (x_1, .., x_n) ∈ A_{n,α}(x_0) }.
A_{n,α}(x_0) is a convex set for every n ∈ N. Moreover, g is differentiable on ]0, x_0^α[
and for x ∈ ]0, x_0[ we calculate g'(x^α) = α^{-1} x^{1-α} f'(x). Thus g' is monotone
decreasing on ]0, x_0^α[ and g is, therefore, concave on ]0, x_0^α[. For n ∈ N and x ∈
A_{n,α}(x_0) let G(x) = Σ_{i=1}^n g(x_i). Obviously G is continuous and concave on the
convex compact set A_{n,α}(x_0). Applying [42, Theorem 3.4.7.] we obtain that
G attains its global minimum at an extreme point of A_{n,α}(x_0). Note that the
extreme points of A_{n,α}(x_0) consist of the set
Thus we get
inf{ Σ_{i=1}^n g(x_i) : n ∈ N, (x_1, .., x_n) ∈ A_{n,α}(x_0) } ≥ g(x_0^α) = f(x_0),
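Lemma 3.14 can likewise be probed numerically: for a concrete f satisfying its hypotheses (the choice below corresponds to Lemma 3.13 with A = 1, B = 0; α and x_0 are arbitrary), randomly sampled feasible vectors never push the sum below f(x_0). A sketch, not a proof.

```python
import math
import random

# Hypothetical instance of Lemma 3.14: alpha and x0 are arbitrary choices,
# and f(x) = x / log(1/x) satisfies the hypotheses (cf. Lemma 3.13 with
# A = 1, B = 0): f(0) = 0, f continuous, x^{1-alpha} f'(x) decreasing.
alpha, x0 = 2.0, 0.2

def f(x):
    return 0.0 if x == 0.0 else x / math.log(1.0 / x)

random.seed(0)
best = float("inf")
for _ in range(10000):
    n = random.randint(5, 25)
    raw = [random.random() for _ in range(n)]
    s = sum(raw)
    x = [v / s for v in raw]                      # sum of the x_i is 1
    if max(x) > x0:                               # need x_i in [0, x0]
        continue
    if sum(v**alpha for v in x) < x0**alpha:      # need sum x_i^alpha >= x0^alpha
        continue
    best = min(best, sum(f(v) for v in x))

# Lemma 3.14 asserts f(x0) is a lower bound over all feasible vectors.
print(best >= f(x0))   # prints True
```

Note that for n ≤ 25 the constraint Σ x_i^α ≥ x_0^α is automatic here, since Σ x_i^2 ≥ 1/n ≥ 0.04 whenever Σ x_i = 1; the explicit check merely documents the feasible set.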
then
D^α_{µ,r}(R) ∼ (b_µ^{-1}((α−1)R/α))^r e^{-((α−1)/α)R} ∼ D^∞_{µ,r}((α−1)R/α)
as R → ∞.
Proof. 1. D^α_{µ,r}(R) ≲ (b_µ^{-1}((α−1)R/α))^r e^{-((α−1)/α)R} as R → ∞.
This follows from Lemma 3.1 and Corollary 3.12.
2. D^α_{µ,r}(R) ≳ (b_µ^{-1}((α−1)R/α))^r e^{-((α−1)/α)R} as R → ∞.
Let (f_R)_{R>0} be an asymptotically α-optimal family of quantizers. According to
Remark 3.7 let us assume w.l.o.g. that card(f_R(E)) < ∞ for every R > 0. By
definition we have
D_{µ,r}(f_R) = Σ_{a∈f_R(E)} ∫_{f_R^{-1}(a)} ‖x − a‖^r dµ(x).
For every a ∈ f_R(E) let s_a(R) > 0 be such that µ(B(0, s_a)) = µ(f_R^{-1}(a)). Applying
Lemma 2.2 and Proposition 2.3 we deduce
∫_{f_R^{-1}(a)} ‖x − a‖^r dµ(x) ≥ ∫_{B(0,s_a)} ‖x‖^r dµ(x) = D^∞_{µ,r}(− log(µ(f_R^{-1}(a)))).
Now let ε ∈ ]0, 1[. According to Corollary 3.12 a δ ∈ ]0, 1[ exists such that for
every x ∈ ]0, δ[
D^∞_{µ,r}(− log(x)) ≥ (1 − ε) C · g(x)
with C = (c^{1/a} a^{-b/a})^r and g(x) = x (− log(x))^{-r/a} (log(− log(x)))^{rb/a}. From
Remark 3.9 we get an R_1 > 0 such that for every R ≥ R_1 and for every a ∈ f_R(E)
we have
D^∞_{µ,r}(− log(µ(f_R^{-1}(a)))) ≥ (1 − ε) C · g(µ(f_R^{-1}(a)))
and, therefore,
D_{µ,r}(f_R) ≥ (1 − ε) C Σ_{a∈f_R(E)} g(µ(f_R^{-1}(a))).  (12)
3.8 an R_2 > 0 exists such that x_0 < z_0 for every R ≥ R_2. Now let R ≥ max(R_1, R_2).
From H^α_µ(f_R) ≤ R we obtain z_0 > x_0 ≥ e^{-((α−1)/α)R}. Hence we can apply Lemma
3.14 and deduce, together with the monotonicity of g, that
Σ_{a∈f_R(E)} g(µ(f_R^{-1}(a))) ≥ g(x_0) ≥ g(e^{-((α−1)/α)R}).  (13)
then
D^α_{µ,r}(R) ∼ (c^{1/a} a^{-b/a} ((α−1)R/α)^{-1/a} (log((α−1)R/α))^{b/a})^r e^{-((α−1)/α)R}
as R → ∞.
Proof. Immediate consequence of Theorem 3.15 and Corollary 3.12.
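Written out, the "immediate consequence" is the substitution R ↦ ((α−1)/α)R of Lemma 3.3 into Theorem 3.15; a sketch of the one-line computation:

```latex
% Theorem 3.15 gives, as R -> infinity,
%   D^alpha_{mu,r}(R) ~ ( b_mu^{-1}( ((alpha-1)/alpha) R ) )^r e^{-((alpha-1)/alpha) R},
% and Lemma 3.3 evaluates the inverse small ball function at ((alpha-1)/alpha) R:
D^{\alpha}_{\mu,r}(R)
  \sim \Bigl( b_{\mu}^{-1}\bigl(\tfrac{\alpha-1}{\alpha}R\bigr) \Bigr)^{r}
       \, e^{-\frac{\alpha-1}{\alpha}R}
  \sim \Bigl( c^{1/a}\, a^{-b/a}
       \bigl(\tfrac{\alpha-1}{\alpha}R\bigr)^{-1/a}
       \bigl(\log\bigl(\tfrac{\alpha-1}{\alpha}R\bigr)\bigr)^{b/a} \Bigr)^{r}
       \, e^{-\frac{\alpha-1}{\alpha}R},
\qquad R \to \infty .
```

Since log(((α−1)/α)R) ∼ log(R), the logarithmic factor may equivalently be written with log(R).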
Remark 3.17. Unfortunately, the author was unable to prove the assertion of
Theorem 3.15 under the weaker condition (10) of Proposition 3.10. In general it
would be of interest to characterize those Gaussian measures which satisfy
condition (10). Finally, it is also an open question whether the asymptotics in Theorem
3.15 resp. Corollary 3.12 remain valid in a weak sense if we only require weak
asymptotics for the small ball function, i.e. if we replace ∼ by ≈.
4 Examples
Let d ∈ N and f : [0, 1]^d → R. We denote by ‖·‖_∞ the sup-norm, i.e.
‖f‖_∞ = sup_{t∈[0,1]^d} |f(t)|.
with s, t ∈ [0, 1]^d. Fractional Brownian motion is covered by the special case
d = 1. Moreover, we obtain the classical Brownian sheet by letting d = 2 and
H_1 = H_2 = 1/2. Let γ = min(H_1, .., H_d) > 0.
Case 1: there is a unique minimum among H_1, .., H_d.
In this case we know that a c = c(H) ∈ ]0, ∞[ exists such that
(cf. [39]; see also [33] and [34] for d = 1). From Lemma 3.3 we deduce
b_{µ,‖·‖_∞}^{-1}(R) ∼ (R/c)^{-γ}  as R → ∞.
Sharp asymptotics for b_{µ,‖·‖_p}(·) are known if p = 2 (cf. [9]). Thus a c_2 > 0
exists such that
b_{µ,‖·‖_2}(s) ∼ c_2 s^{-1/γ}  as s → 0.
As above, Lemma 3.3 and Corollary 3.12 yield
D^∞_{µ,r,‖·‖_2}(R) ∼ (c_2/R)^{γr} e^{-R}  as R → ∞.
(cf. [5, Theorem 5.2.]; see also [44] for the case H = 1/2). Remark 3.4 implies
b_{µ,‖·‖_∞}^{-1}(R) ≈ R^{-H} (log(R))^{H+1}  as R → ∞.
b_{µ,‖·‖_2}^{-1}(R) ∼ c_d^{1/2} 2^{-(d−1)} R^{-1/2} (log(R))^{d−1}  as R → ∞.
X^β = (X^β_t)_{t∈[0,1]} by
X^β_t = (1/Γ(β)) ∫_0^t (t − s)^{β−1} B_s ds,  t ∈ [0, 1],
where B_s denotes Brownian motion. Since a c(β) ∈ ]0, ∞[ exists such that
b_{µ,‖·‖_∞}^{-1}(R) ∼ (R/c(β))^{-(β+1/2)}  as R → ∞
(cf. [10, Theorem 1.1]). Again, Lemma 3.3 and Corollary 3.12 imply that
D^∞_{µ,r,‖·‖_2}(R) ∼ (c(m)/R)^{(m+1/2)r} e^{-R}  as R → ∞.
for s = (s_1, .., s_d) ∈ [0, 1]^d and t = (t_1, .., t_d) ∈ [0, 1]^d. The m-integrated
Brownian sheet (X_t)_{t∈R^d} is now defined by
X_m(t) = ∫_0^{t_1} ··· ∫_0^{t_d} Π_{j=1}^d ((t_j − u_j)^m / m!) B(du_1, .., du_d).
(cf. [25, p. 1061]) we deduce from Remark 3.4 and Corollary 2.4, together with
Lemma 3.1, for α > 1 that
D^α_{µ,r,‖·‖_∞}((α/(α−1)) R) ≤ D^∞_{µ,r,‖·‖_∞}(R) ⪯ R^{-Hr/2} e^{-R}  as R → ∞
and
D^α_{µ,r,‖·‖_p}((α/(α−1)) R) ≤ D^∞_{µ,r,‖·‖_p}(R) ⪯ R^{-Hr/2} e^{-R}  as R → ∞.
and
D^0_{µ,r,‖·‖_p}(R) ≈ R^{-Hr/2}  as R → ∞.
E(X_t X_s) = Π_{i=1}^d max(0, a_i − |s_i − t_i|).
(cf. [23, Theorem 1.1]). Thus we obtain from Remark 3.4 and Corollary 2.4, together
with Lemma 3.1, for α > 1 that
D^α_{µ,r,‖·‖_∞}((α/(α−1)) R) ≤ D^∞_{µ,r,‖·‖_∞}(R) ⪯ R^{-r/2} (log(R))^{3r/2} e^{-R}  as R → ∞
and
D^α_{µ,r,‖·‖_2}((α/(α−1)) R) ≤ D^∞_{µ,r,‖·‖_2}(R) ⪯ R^{-r/2} (log(R))^{(d−1)r} e^{-R}  as R → ∞.
and
D^0_{µ,r,‖·‖_2}(R) ≈ R^{-r/2} (log(R))^{(d−1)r}  as R → ∞.
Finally, this class of Gaussian processes also shows the change in the optimal
quantization error asymptotics as α increases.
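The contrast can be made concrete with toy rates: for α ≤ 1 the error is of polynomial order in R, while for α > 1 it carries the additional factor e^{-((α−1)/α)R}. A small numerical sketch (H, r and α below are arbitrary illustrative values, and all multiplicative constants are dropped, since only the order in R matters):

```python
import math

# Toy decay rates; H, r, alpha are arbitrary illustrative values and all
# multiplicative constants are dropped, since only the order in R matters.
H, r, alpha = 0.5, 2.0, 2.0

def rate_alpha_le_1(R):
    # polynomial order, as found here for alpha in [0, 1]
    return R ** (-H * r / 2.0)

def rate_alpha_gt_1(R):
    # same polynomial factor times the extra factor e^{-((alpha-1)/alpha) R}
    return R ** (-H * r / 2.0) * math.exp(-(alpha - 1.0) / alpha * R)

for R in (10.0, 50.0, 100.0):
    print(R, rate_alpha_gt_1(R) / rate_alpha_le_1(R))
# the ratio is e^{-R/2} for alpha = 2: roughly 6.7e-3, 1.4e-11, 1.9e-22
```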
References
[1] Aczél, J., Daróczy, Z.: On Measures of Information and Their Characteri-
zations, Mathematics in Science and Engineering, vol. 115. Academic Press,
London (1975)
[2] Anderson, T.W.: The integral of a symmetric convex set and some proba-
bility inequalities. Proc. Amer. Math. Soc. 6, 170-176 (1955)
[3] Beck, C., Schlögl, F.: Thermodynamics of chaotic systems. Cambridge Uni-
versity Press, Cambridge (1993)
[4] Behara, M.: Additive and nonadditive measures of entropy. John Wiley,
New Delhi (1990)
[5] Belinsky, E., Linde, W.: Small Ball Probabilities of Fractional Brownian
Sheets via Fractional Integration Operators. J. Theoret. Probab. 15, 589-
612 (2002)
[6] Berger, T.: Optimum quantizers and permutation codes. IEEE Trans. In-
form. Theory 18, 759–765 (1972)
[7] Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular variation. Encyclope-
dia of Mathematics and its applications, Cambridge University Press, Cam-
bridge (1987)
[8] Bogachev, V.I.: Gaussian Measures. AMS (1998)
[9] Bronski, J.C.: Small Ball Constants and Tight Eigenvalue Asymptotics for
Fractional Brownian Motions. J. Theoret. Probab. 16, 87-100 (2003)
[10] Chen, X., Li, W.V.: Quadratic functionals and small ball probabilities for
the m−fold integrated Brownian Motion. Ann. Probab. 31, 1052-1077 (2003)
[11] Csáki, E.: On small values of the square integral of a multiparameter
Wiener process. Statistics and Probability. Proc. 3rd Pannonian Symp.,
Visegrád/Hung. 1982, 19-26 (1984)
[12] Dereich, S.: High resolution coding of stochastic processes and small ball
probabilities. Ph.D. Dissertation (2003)
[13] Dereich, S., Fehringer, F., Matoussi, A., Scheutzow, M.: On the link be-
tween small ball probabilities and the quantization problem for Gaussian
measures on Banach spaces. J. Theor. Probab. 16, 249-265 (2003)
[14] Dereich, S., Scheutzow, M.: High-resolution quantization and entropy cod-
ing for fractional Brownian motion. Electron. J. Probab. 11, 700-722 (2006)
[15] Dereich, S.: The coding complexity of diffusion processes under supremum
norm distortion. Stochastic Process. Appl. 118, 917-937 (2008)
[16] Dereich, S.: The coding complexity of diffusion processes under Lp [0, 1]-
norm distortion. Stochastic Process. Appl. 118, 938-951 (2008)
[17] Dunker, T.: Estimates for the Small Ball Probabilities of the Fractional
Brownian Sheet. J. Theor. Probab. 13, 357-382 (2000)
[18] El-Nouty, C.: The fractional mixed fractional Brownian motion and frac-
tional Brownian sheet. ESAIM, Probab. Stat. 11, 448-465 (2007)
[19] El-Nouty, C.: On the lower classes of some mixed fractional Gaussian pro-
cesses with two logarithmic factors. J. Appl. Math. Stochastic Anal. (2008)
doi:10.1155/2008/160303
[36] Luschgy, H., Pagès, G.: Sharp asymptotics of the functional quantization
problem for Gaussian processes. Ann. Probab. 32, 1574-1599 (2004)
[37] Luschgy, H., Pagès, G.: Functional quantization of a class of Brownian
diffusions: a constructive approach. Stochastic Processes Appl. 116, 310-
336 (2006)
[38] Luschgy, H., Pagès, G.: Functional quantization rate and mean pathwise
regularity of processes with an application to Lévy processes. Ann. Appl.
Probab. 18, 427-469 (2008)
[39] Mason, D.M., Shi, Z.: Small Deviations for Some Multi-Parameter Gaus-
sian Processes. J. Theor. Probab. 14, 213-239 (2001)
[40] Monrad, D., Rootzén, H.: Small values of Gaussian processes and func-
tional laws of the iterated logarithm. Probab. Theory Relat. Fields 101,
173-192 (1995)
[41] Nazarov, A.I.: On the Sharp Constant in the Small Ball Asymptotics of
Some Gaussian Processes under L2 −Norm. J. Math. Sci., New York 117,
4185-4210 (2005)
[42] Niculescu, C., Persson, L.: Convex functions and their applications. A
contemporary approach, CMS Books in Mathematics, Springer, New York
(2005)
[43] Shao, Q.M., Wang, D.: Small ball probabilities of Gaussian fields. Probab.
Theory Relat. Fields 102, 511-517 (1995)
[44] Talagrand, M.: The small ball problem for the Brownian sheet. Ann.
Probab. 22, 1331-1354 (1994)