
Lossy Compression

• Compromise accuracy of reconstruction for increased compression.
• The reconstruction is usually visually indistinguishable from the original image.
• Typically, one can get up to 20:1 compression with almost (visually) indistinguishable error.
• The main difference between the lossless and lossy compression methods is the presence of the quantizer.

[Figure: Lossy predictive source encoder. The prediction error e(m, n) = f(m, n) − f̂(m, n) between the input image and the predictor output is quantized to ė(m, n) and passed to the symbol encoder, which produces the compressed image. The predictor input is the reconstructed value ḟ(m, n) = ė(m, n) + f̂(m, n).]

Source Encoder

• Notice that the prediction is based on the quantized (reconstructed) pixel values ḟ(m, n), and not on the original input pixel values.
• This prevents accumulation of the prediction error in the reconstructed image and has a stabilizing effect.
[Figure: Lossy predictive source decoder. The symbol decoder recovers the quantized error ė(m, n) from the compressed image, and the decompressed image is formed as ḟ(m, n) = ė(m, n) + f̂(m, n), with the predictor driven by previously decoded values.]

Source Decoder
Lossy Compression: Delta Modulation
• The design of a lossy predictive coding scheme involves two main
steps: (a) Predictor and (b) Quantizer.
• Delta Modulation is a very simple scheme in which (we look at a
1-D case for simplicity):

• Predictor: $\hat{f}(n) = \alpha \dot{f}(n-1)$

• Quantizer (1-bit): $\dot{e}(n) = \begin{cases} +\zeta & \text{for } e(n) > 0 \\ -\zeta & \text{otherwise} \end{cases}$

• Note that the quantized errors can be represented by a single bit. Naturally, the symbol encoder can use a simple 1-bit fixed-length code.
• Example: Consider an input sequence of pixels:
f(n) = {14, 15, 14, 15, 13, 15, 15, 14, 20, 26, 27, 28, 27, 27, 29, 37, 47, 62, 75, 77, 78, 79, 80, 81, 81, 82, 82}
with α = 1 and ζ = 6.5.

• The first pixel is transmitted error-free. Therefore, $\dot{f}(0) = f(0) = 14$.


• The remaining outputs are sequentially computed using the above predictor and quantizer expressions:

$\hat{f}(n) = \dot{f}(n-1)$
$e(n) = f(n) - \hat{f}(n)$
$\dot{e}(n) = \begin{cases} +6.5 & \text{for } e(n) > 0 \\ -6.5 & \text{otherwise} \end{cases}$
$\dot{f}(n) = \dot{e}(n) + \hat{f}(n)$
  n   Input        Encoder                    Decoder       Error
      f       f̂      e      ė      ḟ        f̂      ḟ      f − ḟ
  0   14      -      -      -      14        -      14       0
  1   15      14     1      6.5    20.5      14     20.5    -5.5
  2   14      20.5  -6.5   -6.5    14        20.5   14       0
  3   15      14     1      6.5    20.5      14     20.5    -5.5
  ⋮
 14   29      20.5   8.5    6.5    27        20.5   27       2
 15   37      27     10     6.5    33.5      27     33.5     3.5
 16   47      33.5   13.5   6.5    40        33.5   40       7
 17   62      40     22     6.5    46.5      40     46.5    15.5
 18   75      46.5   28.5   6.5    53        46.5   53      22
 19   77      53     24     6.5    59.5      53     59.5    17.5
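The recursion is easy to verify in code. Below is a minimal Python sketch (ours, not part of the notes) that reproduces the encoder/decoder columns of the table; note that e(n) = 0 falls into the "otherwise" branch of the quantizer.

```python
# Delta-modulation encoder/decoder with alpha = 1 and zeta = 6.5.
# Function and variable names are illustrative.

def delta_modulate(f, zeta=6.5):
    f_rec = [float(f[0])]                  # f.(0) = f(0), sent error-free
    for n in range(1, len(f)):
        f_hat = f_rec[-1]                  # predictor: f^(n) = f.(n-1)
        e = f[n] - f_hat                   # prediction error e(n)
        e_dot = zeta if e > 0 else -zeta   # 1-bit quantizer: e.(n)
        f_rec.append(f_hat + e_dot)        # reconstruction: f.(n) = e.(n) + f^(n)
    return f_rec

f = [14, 15, 14, 15, 13, 15, 15, 14, 20, 26, 27, 28, 27,
     27, 29, 37, 47, 62, 75, 77, 78, 79, 80, 81, 81, 82, 82]
f_rec = delta_modulate(f)
for n in (0, 1, 2, 3, 14, 15, 16, 17, 18, 19):
    print(f"n={n:2d}  f={f[n]:3d}  f.={f_rec[n]:5.1f}  err={f[n] - f_rec[n]:5.1f}")
# n = 19 prints f. = 59.5 with error 17.5, matching the table.
```

The steadily growing error over n = 16–19 is the slope-overload distortion discussed in the next section.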

Design of Lossy Compression Schemes
• In rapidly changing image regions, the value ζ = 6.5 is too small to represent the changes in pixel value, and this leads to a distortion known as slope overload.
• In very slowly changing image regions, the value ζ = 6.5 is too large to represent the smooth changes in pixel value, and this leads to a distortion known as granular noise.
• These phenomena lead to blurred object edges and grainy or noisy
surfaces.
• The interaction between the predictor and quantizer (i.e., the effect
of one on the other) is quite complex. Ideally, they must be
designed together, taking this interaction into account.
• However, in most cases, the predictor and quantizer are designed
independently of each other.

Optimal Predictor: DPCM

• The differential pulse code modulator (DPCM) is a specific prediction scheme (the 1-D case is illustrated for simplicity):
• The predictor is chosen to minimize the mean-squared prediction error: $E\{e^2(n)\} = E\{[f(n) - \hat{f}(n)]^2\}$.
• The quantization error is assumed negligible: $\dot{e}(n) \approx e(n)$.

• The form of the predictor is a linear combination of the m past input pixel values: $\hat{f}(n) = \sum_{i=1}^{m} \alpha_i f(n-i)$.
• The actual choice of the parameters $\{\alpha_i\}$, obtained by minimizing the MSE, depends on the image statistics (for a stationary image model this reduces to a set of linear normal equations in the image autocorrelation).
• Some common choices that are used in practice are:

$\hat{f}(m,n) = 0.97\,f(m, n-1)$

$\hat{f}(m,n) = 0.5\,f(m, n-1) + 0.5\,f(m-1, n)$

$\hat{f}(m,n) = 0.75\,f(m, n-1) + 0.75\,f(m-1, n) - 0.5\,f(m-1, n-1)$

$\hat{f}(m,n) = \begin{cases} 0.97\,f(m, n-1) & \text{if } \Delta h \le \Delta v \\ 0.97\,f(m-1, n) & \text{otherwise} \end{cases}$

where $\Delta h = |f(m-1, n) - f(m-1, n-1)|$ and $\Delta v = |f(m, n-1) - f(m-1, n-1)|$ denote the horizontal and vertical gradients at point (m, n).
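As a rough illustration (not from the notes), the sketch below computes prediction-error images e(m, n) for these four predictors. It assumes the quantization error is negligible, so the original pixels f stand in for the reconstructed values; the function names and the random test image are our own.

```python
# Prediction-error images for the four DPCM predictors above.
import numpy as np

def prediction_error(f, predictor):
    """e(m, n) = f(m, n) - f^(m, n), skipping the first row and column."""
    e = np.zeros_like(f, dtype=float)
    for m in range(1, f.shape[0]):
        for n in range(1, f.shape[1]):
            e[m, n] = f[m, n] - predictor(f, m, n)
    return e

p1 = lambda f, m, n: 0.97 * f[m, n-1]
p2 = lambda f, m, n: 0.5 * f[m, n-1] + 0.5 * f[m-1, n]
p3 = lambda f, m, n: (0.75 * f[m, n-1] + 0.75 * f[m-1, n]
                      - 0.5 * f[m-1, n-1])

def p4(f, m, n):
    # Adaptive: predict along the direction with the smaller gradient.
    dh = abs(f[m-1, n] - f[m-1, n-1])   # horizontal gradient
    dv = abs(f[m, n-1] - f[m-1, n-1])   # vertical gradient
    return 0.97 * f[m, n-1] if dh <= dv else 0.97 * f[m-1, n]

# Compare error standard deviations on a stand-in test image
# (use a real image such as Lenna, scaled to [0, 1], in practice).
f = np.random.rand(64, 64)
for name, p in [("p1", p1), ("p2", p2), ("p3", p3), ("p4", p4)]:
    print(name, prediction_error(f, p)[1:, 1:].std())
```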

Example:

[Figure: The original image (Lenna) and the prediction-error images e(m, n) for the four predictors, with standard deviations σ = 0.062, σ = 0.046, σ = 0.046, and σ = 0.048, respectively.]
Optimal Quantization: The Lloyd-Max Quantizer
• A quantizer is a "staircase function" t = q(s) that maps continuous input values into a discrete and finite set of output values.

• We consider quantizers that are odd functions; i.e., $q(-s) = -q(s)$.

• An L-level quantizer is completely described by the $L/2 - 1$ decision levels $s_1, s_2, \ldots, s_{L/2-1}$ and the $L/2$ reconstruction levels $t_1, t_2, \ldots, t_{L/2}$.

• If the input values s are not uniformly distributed, the “uniform
quantizer” with equally spaced quantization levels is not a good
choice.

• By odd symmetry, we have $s_{-i} = -s_i$ and $t_{-i} = -t_i$.

• By convention, the input value s is mapped to the output value $t_i$ if s lies in the half-open interval $(s_{i-1}, s_i]$.

• The quantizer design problem is to select the best $s_i$ and $t_i$ for a particular optimization criterion and input probability distribution p(s).

• Note that the input in our case consists of the prediction-error pixel values, since we are quantizing the prediction error.

• For image quantization, the input would be the image pixel values.

• Based on a mean-squared quantization error criterion, i.e., minimizing $E[(s - t_i)^2]$, and assuming that the probability density function p(s) is an even function, the conditions for minimal error are:

$\int_{s_{i-1}}^{s_i} (s - t_i)\, p(s)\, ds = 0, \quad i = 1, 2, \ldots, L/2$
(the reconstruction levels $t_i$ are the centroids of the area under p(s) over the specified decision regions)

$s_i = \begin{cases} 0 & i = 0 \\ \dfrac{t_i + t_{i+1}}{2} & i = 1, 2, \ldots, L/2 - 1 \\ \infty & i = L/2 \end{cases}$
(the decision levels $s_i$ are located midway between the reconstruction levels $t_i$ and $t_{i+1}$)

$s_{-i} = -s_i \quad \text{and} \quad t_{-i} = -t_i$
(decision levels $s_i$ and reconstruction levels $t_i$ are symmetric about 0)
• The set of values $s_i$, $t_i$ that satisfies the above conditions gives the optimal quantizer, called the Lloyd-Max quantizer.
• These equations for $s_i$ and $t_i$ are coupled and cannot be solved analytically; they are typically solved by numerical iteration, as in the sketch below.
• Tables of numerical solutions for different probability distributions p(s) and numbers of quantization levels L are available.
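In practice the coupled conditions are solved by alternating between them (Lloyd's iteration). Below is a minimal sketch, assuming we approximate p(s) by a large set of samples; the function name lloyd_max and all parameter choices are our own.

```python
# Lloyd's iteration for an L-level minimum-MSE quantizer, run on
# samples drawn from p(s) rather than on the density itself.
import numpy as np

def lloyd_max(samples, L, iters=100):
    # Initialize reconstruction levels at data quantiles.
    t = np.quantile(samples, (np.arange(L) + 0.5) / L)
    for _ in range(iters):
        s = (t[:-1] + t[1:]) / 2             # decision levels: midpoints
        idx = np.searchsorted(s, samples)    # region index of each sample
        # Reconstruction levels: centroids (means) of each region.
        t = np.array([samples[idx == i].mean() if np.any(idx == i) else t[i]
                      for i in range(L)])
    return s, t

# Example: 4-level quantizer for a unit-variance Laplacian density.
rng = np.random.default_rng(0)
samples = rng.laplace(scale=1 / np.sqrt(2), size=200_000)  # variance = 1
s, t = lloyd_max(samples, L=4)
print("decision levels:", s)  # approx. [-1.10, 0.00, 1.10]
print("recon levels:  ", t)   # approx. [-1.81, -0.40, 0.40, 1.81]
```

Scaling these unit-variance levels by σ = 0.062 gives approximately the 0.068, 0.024, and 0.11 values used in the example below.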
• See Table 8.10 in the text for L = 2, 4, 8 level quantizers. The values in the table correspond to a unit-variance (σ = 1) Laplacian density function:

$p(s) = \frac{1}{\sqrt{2}\,\sigma} \exp\!\left( -\frac{\sqrt{2}\,|s|}{\sigma} \right)$

• If the variance of the prediction error (the quantizer input) is not equal to one, the values in the table should be multiplied by the appropriate error standard deviation σ.
• Example: 4-level Lloyd-Max quantization table for a Laplacian density with σ = 0.062 (using Table 8.10 in the text):

  s                  t
  (−∞, −0.068]      −0.11
  (−0.068, 0]       −0.024
  (0, 0.068]         0.024
  (0.068, ∞)         0.11
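A small sketch (ours) showing how this table acts as a staircase function; np.searchsorted over the interior decision levels implements the half-open intervals $(s_{i-1}, s_i]$:

```python
# Apply the 4-level quantizer from the table above.
import numpy as np

decision = np.array([-0.068, 0.0, 0.068])       # interior decision levels s_i
recon = np.array([-0.11, -0.024, 0.024, 0.11])  # reconstruction levels t_i

def quantize(e):
    # side='left' maps each value in (s_{i-1}, s_i] to the i-th level.
    return recon[np.searchsorted(decision, e, side="left")]

print(quantize(np.array([-0.2, -0.03, 0.05, 0.4])))
# -> [-0.11  -0.024  0.024  0.11 ]
```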

• The outputs of the Lloyd-Max quantizer (the quantized errors) are encoded with a fixed-length code.
Example:

Normalized RMS error for lossy DPCM using the Lloyd-Max quantizer. (The compression ratios correspond to coding an 8-bit image at 1, 2, and 3 bits per pixel, respectively.)

  Predictor \ Quantizer    2-level    4-level    8-level
  1                        0.076      0.038      0.021
  2                        0.067      0.034      0.019
  3                        0.054      0.029      0.018
  4                        0.100      0.034      0.019
  Compression              8:1        4:1        2.7:1
[Figure: Reconstructed images and error images for lossy DPCM using Predictor 3 with 1-bit, 2-bit, and 3-bit Lloyd-Max quantizers.]
