
Sampling and Quantization Theory for Image Processing

G.A. GIRALDI
LNCC - National Laboratory for Scientific Computing
Av. Getulio Vargas, 333, 25651-070, Petrópolis, RJ, Brazil
{gilson}@lncc.br
Abstract.
1 Introduction
The basic requirement for computer processing of an image is that the image is available in digital form. A digital image can be represented as a two-dimensional matrix. Its elements are called pixels, and each pixel is associated with a color intensity or a grey-level value. In the case of color images, a usual color system is the RGB one.
Grey-level images are, in general, represented by 8 bits, so we can assign an intensity $I \in \{0, 1, \ldots, 255\}$ to each pixel.
The process of digitization, that is, the conversion of a continuous signal (function) into digital form, involves sampling on a regular grid and quantization. In this chapter, we review basic elements of these fields.
Basic references for this chapter are [1], [2], [3].
2 One-Dimensional Sampling Theory
Sampling Theorem: If the Fourier transform $\widehat{f}(\xi)$ of a signal function $f(x)$ is zero for all frequencies outside the interval $-\xi_c \le \xi \le \xi_c$ (bandlimited function), then $f(x)$ can be uniquely determined from its sampled values:

$$f_n = f(nT), \quad n \in \mathbb{Z}, \qquad (1)$$

if:

$$T = \frac{1}{2\xi_c}. \qquad (2)$$

In this case, $f(x)$ is recovered by:

$$f(x) = \sum_{n=-\infty}^{+\infty} f(nT)\, \frac{\sin[2\pi\xi_c(x - nT)]}{2\pi\xi_c(x - nT)}. \qquad (3)$$
Proof: If we use the inverse Fourier transform, we obtain the signal $f(x)$ by:

$$f(x) = \int_{-\xi_c}^{\xi_c} \widehat{f}(\xi) \exp(2\pi j \xi x)\, d\xi. \qquad (4)$$

So, the sampled values $f_n$ can be written as:

$$f_n = f(nT) = \int_{-\xi_c}^{\xi_c} \widehat{f}(\xi) \exp(2\pi j \xi n T)\, d\xi. \qquad (5)$$
By expressing $\widehat{f}(\xi)$ as a Fourier series in the interval $-\xi_c \le \xi \le \xi_c$, we obtain:

$$\widehat{f}(\xi) = \sum_{n=-\infty}^{+\infty} c_n \exp\left(-2\pi j \frac{n}{2\xi_c}\,\xi\right) = \sum_{n=-\infty}^{+\infty} c_n \exp\left(-\pi j \frac{n}{\xi_c}\,\xi\right),$$

where the coefficients $c_n$ are computed by:

$$c_n = \frac{1}{2\xi_c} \int_{-\xi_c}^{\xi_c} \widehat{f}(\xi) \exp\left(2\pi j n \left(\frac{1}{2\xi_c}\right)\xi\right) d\xi;$$
therefore, by comparing this result with expression (5), we observe that:

$$c_n = \frac{1}{2\xi_c}\, f\left(n \frac{1}{2\xi_c}\right), \qquad (6)$$

if $T = 1/2\xi_c$. So, by substituting this result in the Fourier series, we obtain:
$$\widehat{f}(\xi) = \sum_{n=-\infty}^{+\infty} \frac{1}{2\xi_c}\, f\left(n \frac{1}{2\xi_c}\right) \exp\left(-\pi j \frac{n}{\xi_c}\,\xi\right), \qquad (7)$$

where $-\xi_c \le \xi \le \xi_c$. Therefore, by substituting this result in expression (4), we get
$$f(x) = \int_{-\xi_c}^{\xi_c} \left[\sum_{n=-\infty}^{+\infty} \frac{1}{2\xi_c}\, f\left(n \frac{1}{2\xi_c}\right) \exp\left(-\pi j \frac{n}{\xi_c}\,\xi\right)\right] \exp(2\pi j \xi x)\, d\xi. \qquad (8)$$
We can interchange the summation and integration operations to obtain:

$$f(x) = \sum_{n=-\infty}^{+\infty} \frac{1}{2\xi_c}\, f\left(n \frac{1}{2\xi_c}\right) \int_{-\xi_c}^{\xi_c} \exp\left(2\pi j \left(x - \frac{n}{2\xi_c}\right)\xi\right) d\xi. \qquad (9)$$
We can compute the integral to get:

$$f(x) = \frac{1}{2\xi_c} \sum_{n=-\infty}^{+\infty} f_n\, \frac{\sin[2\pi\xi_c(x - nT)]}{\pi(x - nT)} = \sum_{n=-\infty}^{+\infty} f_n\, \frac{\sin[2\pi\xi_c(x - nT)]}{2\pi\xi_c(x - nT)}, \qquad (10)$$

where $T = 1/2\xi_c$. (End of Proof.)
The function:

$$f(x) = \frac{\sin x}{x}$$

is called the sinc function. Expression (10) shows that, if $T = 1/2\xi_c$, we can recover the input signal $f$ through an infinite-order interpolation using sinc-like functions. The sampling rate $2\xi_c$ is called the Nyquist rate (and $\xi_c$ the Nyquist frequency).
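Expressions (1)-(3) can be illustrated numerically. The Python sketch below (the test signal, the cutoff $\xi_c = 2$, and the window size are illustrative assumptions, not from the text) samples a bandlimited signal at the rate (2) and reconstructs it with a truncated version of the series (3):

```python
import numpy as np

# Bandlimited test signal: two sinusoids with frequencies below xi_c = 2.
def f(x):
    return np.sin(2 * np.pi * 1.0 * x) + 0.5 * np.cos(2 * np.pi * 1.5 * x)

xi_c = 2.0               # cutoff frequency (bandlimit)
T = 1.0 / (2.0 * xi_c)   # Nyquist-rate sampling interval, expression (2)

# Sample on a regular grid; a wide finite window stands in for the infinite sum.
n = np.arange(-200, 201)
samples = f(n * T)

def reconstruct(x):
    """Truncated interpolation formula (3) at a scalar point x.

    np.sinc(t) = sin(pi t) / (pi t), so sinc(2*xi_c*(x - nT)) equals the
    kernel sin[2*pi*xi_c*(x - nT)] / [2*pi*xi_c*(x - nT)] of expression (3).
    """
    return np.sum(samples * np.sinc(2 * xi_c * (x - n * T)))

x = 3.21  # a point well inside the sampled window
print(abs(reconstruct(x) - f(x)))  # small truncation error
```

Since the sum in (3) is infinite, a finite window only approximates $f(x)$; the approximation degrades near the borders of the sampled interval.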
3 Quantization Theory
After the sampling process described in Section 2, the continuous scalar field $f$ is represented on a mesh with resolution $T$ through an array $\{f(n),\; n \in \mathbb{Z}\}$, where each value $f(n)$ is a real variable. The next step in order to get a digital version of $f$ is quantization.
The input of a quantizer is the sampled function $f(n)$, and the output is the sequence $f^{*}(n) \in \{r_1, r_2, \ldots, r_L\}$. The values $r_i$, $i = 1, 2, \ldots, L$, are called the reconstruction levels.
So, let us consider a continuous variable $u$, defined in the interval $a \le u \le b$. The process of mapping the continuous variable $u$ to a variable $u^{*}$ which takes values from the finite set $\{r_1, r_2, \ldots, r_L\}$ is called quantization.
In the quantization process, we must define a set of transition levels $\{t_k,\; k = 1, 2, \ldots, L+1\}$, with $t_1 = a$ and $t_{L+1} = b$, such that $r_k \in [t_k, t_{k+1})$. Then, the discrete variable $u^{*}$ can be defined as follows:

$$u^{*}(u) = r_k, \quad \text{if } u \in [t_k, t_{k+1}). \qquad (11)$$
Such a mapping can be represented by the staircase function of Figure 1, which also pictures the quantization error.
Figure 1: Quantization and the corresponding error.
For instance, if $0 \le u \le 10$ and the samples are uniformly quantized to 256 levels, then the transition levels $t_k$ and reconstruction levels $r_k$ are, respectively, given by:

$$t_k = \frac{10\,(k-1)}{256}, \quad k = 1, 2, \ldots, 257, \qquad (12)$$

$$r_k = t_k + \frac{5}{256}, \quad k = 1, 2, \ldots, 256. \qquad (13)$$
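The quantizer (11) with the levels (12)-(13) can be sketched in a few lines of Python (the implementation below is an illustrative reading of those expressions, not code from the text):

```python
import numpy as np

# Uniform quantizer on [0, 10] with L = 256 levels, following (12)-(13):
#   t_k = 10 (k - 1) / 256,  k = 1, ..., 257   (transition levels)
#   r_k = t_k + 5 / 256,     k = 1, ..., 256   (reconstruction levels)
L = 256
a, b = 0.0, 10.0
q = (b - a) / L                  # quantization interval
t = a + q * np.arange(L + 1)     # t_1, ..., t_{L+1} (0-indexed here)
r = t[:-1] + q / 2.0             # r_k = t_k + q/2 = t_k + 5/256

def quantize(u):
    """Map u in [a, b] to its reconstruction level, expression (11)."""
    k = np.clip(np.floor((u - a) / q).astype(int), 0, L - 1)
    return r[k]

u = np.array([0.0, 3.14159, 9.999])
print(quantize(u))                       # midpoints of the cells containing u
print(np.max(np.abs(u - quantize(u))))   # error never exceeds q/2
```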
In this case, the interval $q = t_k - t_{k-1} = r_k - r_{k-1}$ is constant for different values of $k$; it is called the quantization interval.
Obviously, in this process there is loss of information. So, a good quantizer is one which represents the original signal with minimum loss or distortion. Therefore, some optimization criterion must be considered in order to design a suitable quantizer, which may be more efficient than the simple choice given by expressions like (12)-(13).
3.1 The Optimum Mean Square or LLOYD-MAX Quantizer
This quantizer is obtained by minimizing the mean square error for a given number of quantization levels $L$. So, let $u$ be a real scalar random variable with a continuous probability density function $p(u)$. It is desired to find the decision levels $t_k$ and the reconstruction levels $r_k$ that minimize the mean square error:
$$\varepsilon = E\left[(u - u^{*})^2\right] = \int_{t_1}^{t_{L+1}} (u - u^{*})^2\, p(u)\, du, \qquad (14)$$
which can be rewritten as:

$$\varepsilon = \sum_{i=1}^{L} \int_{t_i}^{t_{i+1}} (u - r_i)^2\, p(u)\, du. \qquad (15)$$
The necessary conditions for minimizing $\varepsilon$ are obtained by solving:

$$\frac{\partial \varepsilon}{\partial t_k} = (t_k - r_{k-1})^2\, p(t_k) - (t_k - r_k)^2\, p(t_k) = 0, \quad 2 \le k \le L, \qquad (16)$$

$$\frac{\partial \varepsilon}{\partial r_k} = -2 \int_{t_k}^{t_{k+1}} (u - r_k)\, p(u)\, du = 0, \quad 1 \le k \le L. \qquad (17)$$
Using the fact that $t_{k-1} \le t_k$, we can simplify the preceding expressions to obtain:
$$t_k = \frac{r_k + r_{k-1}}{2}, \qquad (18)$$

$$r_k = \frac{\int_{t_k}^{t_{k+1}} u\, p(u)\, du}{\int_{t_k}^{t_{k+1}} p(u)\, du} = E\left[u \,\middle|\, u \in [t_k, t_{k+1})\right]. \qquad (19)$$
Properties:
1. The quantizer output is an unbiased estimate of the input, that is: $E[u^{*}] = E[u]$.
2. The quantization error is orthogonal to the quantizer output, that is: $E[(u - u^{*})\, u^{*}] = 0$.
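The necessary conditions (18)-(19) suggest a fixed-point scheme (Lloyd's algorithm): alternately set each interior $t_k$ to the midpoint of adjacent reconstruction levels and each $r_k$ to the centroid of its cell. A minimal Python sketch on a discretized density (the grid, the uniform initialization, and the iteration count are illustrative assumptions):

```python
import numpy as np

def lloyd_max(u_grid, p, L, iters=100):
    """Lloyd iteration for conditions (18)-(19), with p sampled on u_grid."""
    a, b = u_grid[0], u_grid[-1]
    t = np.linspace(a, b, L + 1)          # start from a uniform quantizer
    r = np.empty(L)
    for _ in range(iters):
        for k in range(L):
            # grid points of the k-th cell [t_k, t_{k+1}) (last cell keeps b)
            hi = u_grid < t[k + 1] if k < L - 1 else u_grid <= t[k + 1]
            m = (u_grid >= t[k]) & hi
            # centroid condition (19): r_k = E[u | u in [t_k, t_{k+1})]
            r[k] = (u_grid[m] * p[m]).sum() / p[m].sum()
        # midpoint condition (18): interior t_k = (r_k + r_{k-1}) / 2
        t[1:-1] = 0.5 * (r[1:] + r[:-1])
    return t, r

# For a uniform density the optimum reduces to the uniform quantizer (Section 3.2).
u = np.linspace(0.0, 10.0, 2001)
t, r = lloyd_max(u, np.ones_like(u), L=4)
print(t)  # approximately [0, 2.5, 5, 7.5, 10]
print(r)  # approximately [1.25, 3.75, 6.25, 8.75]
```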
3.2 The Uniform Optimal Quantizer
Let us consider a continuous variable $u$, defined in the interval $t_1 \le u \le t_{L+1}$. If the probability density function $p(u)$ is uniform, that is:

$$p(u) = \frac{1}{t_{L+1} - t_1}, \quad \text{if } t_1 \le u \le t_{L+1}, \qquad (20)$$

$$p(u) = 0, \quad \text{otherwise},$$
then, from (19) we obtain:

$$r_k = \frac{t_{k+1} + t_k}{2}. \qquad (21)$$
Now, using this expression and equation (18), we get:

$$t_k - t_{k-1} = t_{k+1} - t_k = \text{constant} = q. \qquad (22)$$
Besides, we can show that:

$$q = \frac{t_{L+1} - t_1}{L}, \qquad r_k = t_k + \frac{q}{2}. \qquad (23)$$
So, all transition as well as reconstruction levels are equally spaced. The quantization error $e = u - u^{*}$ is uniformly distributed over the interval $(-q/2, q/2)$. Hence, the mean square error is given by:

$$\varepsilon = \frac{1}{q} \int_{-q/2}^{q/2} u^2\, du = \frac{q^2}{12}. \qquad (24)$$
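Expression (24) can also be checked empirically: quantize uniform random samples with a uniform quantizer and compare the measured mean square error with $q^2/12$. The parameters below ($L = 16$ levels on $[0, 10]$, sample count, seed) are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16
a, b = 0.0, 10.0
q = (b - a) / L                       # quantization interval, expression (23)
u = rng.uniform(a, b, 200_000)        # samples from the uniform density (20)
k = np.minimum(((u - a) / q).astype(int), L - 1)
u_star = a + k * q + q / 2            # reconstruction levels r_k = t_k + q/2
mse = np.mean((u - u_star) ** 2)
print(mse, q ** 2 / 12)               # empirical MSE is close to q^2/12
```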
4 Exercises
1. Consider the function $z = x^2 + y^2$, for $0 \le x \le 10$ and $0 \le y \le 10$. Take a regular discretization with steps $\Delta x = \Delta y = 0.1$ and its histogram as the probability distribution. Then, apply the LLOYD-MAX quantizer (Section 3.1) to get an 8-bit grey-level image.
2. Generalize the Sampling Theorem of Section 2 for the cases $T < \frac{1}{2\xi_c}$ and $T > \frac{1}{2\xi_c}$. Suggestion: apply the Poisson sum formula of reference [3].
3. Exercise 4.11, page 126 of Jain [2] (Moiré effect).
4. Prove expressions (23).
5. Demonstrate expressions (16)-(19).
6. Prove properties 1 and 2 of Section 3.1.
7. Take an image and apply a Gaussian low-pass filter. Use the uniform quantizer in order to convert the filter output into an 8-bit grey-level image.
8. Generalize the Sampling Theorem of Section 2 to 2D.
9. Prove expression (24).
References
[1] R.C. Gonzalez. Digital Image Processing. Addison-Wesley, Reading, MA, 1992.
[2] Anil K. Jain. Fundamentals of Digital Image Processing. Prentice-Hall, Inc., 1989.
[3] An Introduction to the Sampling Theorem. Application Note AN-236, National Semiconductor, 1980. www.national.com/an/AN/AN-236.pdf.
