Predictive coding

The approach, commonly referred to as predictive coding, is based on eliminating the redundancies of closely spaced pixels—in space and/or time—by extracting and coding only the new information in each pixel. The new information of a pixel is defined as the difference between the actual and predicted value of the pixel.

1. Lossless predictive coding

FIG: A lossless predictive coding model:


(a) encoder;
(b) decoder.

1. The system consists of an encoder and a decoder, each containing an identical predictor. As successive samples of the discrete-time input signal, f(n), are introduced to the encoder, the predictor generates the anticipated value of each sample based on a specified number of past samples.

2. The output of the predictor is then rounded to the nearest integer, denoted $\hat{f}(n)$, and used to form the difference or prediction error

$$e(n) = f(n) - \hat{f}(n) \qquad (1)$$

which is encoded using a variable-length code (by the symbol encoder) to


generate the next element of the compressed data stream.
3. The decoder in Fig. (b) reconstructs e(n) from the received variable-length code
words and performs the inverse operation

$$f(n) = e(n) + \hat{f}(n) \qquad (2)$$

to decompress or recreate the original input sequence.
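The encoder/decoder pair of Eqs. (1) and (2) can be sketched in a few lines. This is a minimal illustration, not the book's implementation: the function names are hypothetical, and the simplest possible predictor is assumed, namely the previous sample. Because the decoder holds the identical predictor, inverting Eq. (2) recovers the input exactly.

```python
def encode(f):
    """Emit prediction errors e(n) = f(n) - f_hat(n), Eq. (1)."""
    errors = [f[0]]                  # first sample has no past: send it as-is
    for n in range(1, len(f)):
        f_hat = f[n - 1]             # assumed predictor: previous sample
        errors.append(f[n] - f_hat)  # Eq. (1)
    return errors

def decode(errors):
    """Reconstruct f(n) = e(n) + f_hat(n), Eq. (2)."""
    f = [errors[0]]
    for n in range(1, len(errors)):
        f_hat = f[n - 1]             # identical predictor at the decoder
        f.append(errors[n] + f_hat)  # Eq. (2)
    return f

signal = [100, 102, 105, 105, 104, 100]
assert decode(encode(signal)) == signal   # lossless round trip
```

Note that the errors (2, 3, 0, -1, ...) cluster near zero for correlated input, which is what makes the variable-length symbol encoder effective.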

4. Various local, global, and adaptive methods can be used to generate $\hat{f}(n)$. In many cases, the prediction is formed as a linear combination of m previous samples. That is,

$$\hat{f}(n) = \mathrm{round}\!\left[\sum_{i=1}^{m} \alpha_i\, f(n-i)\right] \qquad (3)$$

where m is the order of the linear predictor, round denotes the rounding or nearest-integer operation, and the $\alpha_i$ for i = 1, 2, ..., m are prediction coefficients.
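The order-m linear predictor of Eq. (3) can be sketched directly. The function name and coefficient values below are illustrative, not from the text:

```python
def predict(f, n, alpha):
    """Eq. (3): f_hat(n) = round( sum_{i=1}^{m} alpha_i * f(n-i) )."""
    m = len(alpha)                   # m = order of the linear predictor
    s = sum(alpha[i] * f[n - 1 - i] for i in range(m))
    return round(s)                  # nearest-integer operation

f = [10, 12, 14, 16]
# An order-2 predictor that extrapolates linearly: f_hat = 2*f(n-1) - f(n-2)
print(predict(f, 3, [2.0, -1.0]))    # -> 16
```

For this linearly increasing input, the order-2 extrapolating predictor is exact; in practice the coefficients would be chosen to match the statistics of the data.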

5. If the input sequence in Fig. (a) is considered to be samples of an image, the f(n)
in Eqs. (1) through (3) are pixels—and the m samples used to predict the value of
each pixel come from the current scan line (called 1-D linear predictive coding),
from the current and previous scan lines (called 2-D linear predictive coding), or
from the current image and previous images in a sequence of images (called 3-D
linear predictive coding).

6. Thus, for 1-D linear predictive image coding, Eq. (3) can be written as

$$\hat{f}(x, y) = \mathrm{round}\!\left[\sum_{i=1}^{m} \alpha_i\, f(x, y-i)\right] \qquad (4)$$

where each sample is now expressed explicitly as a function of the input image’s
spatial coordinates, x and y.

7. Note that Eq. (4) indicates that the 1-D linear prediction is a function of the
previous pixels on the current line alone. In 2-D predictive coding, the prediction
is a function of the previous pixels in a left-to-right, top-to-bottom scan of an
image. In the 3-D case, it is based on these pixels and the previous pixels of
preceding frames.

8. Equation (4) cannot be evaluated for the first m pixels of each line, so those
pixels must be coded by using other means (such as a Huffman code) and
considered as an overhead of the predictive coding process.
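Points 5 through 8 can be combined into a sketch of 1-D predictive coding of a single scan line, per Eq. (4). This is a hypothetical illustration: the first m pixels, which cannot be predicted, are simply passed through verbatim here to stand in for the separately coded overhead described in point 8.

```python
def encode_row(row, alpha):
    """1-D predictive coding of one scan line, Eq. (4)."""
    m = len(alpha)
    out = list(row[:m])   # overhead: first m pixels coded by other means
    for x in range(m, len(row)):
        f_hat = round(sum(alpha[i] * row[x - 1 - i] for i in range(m)))
        out.append(row[x] - f_hat)   # prediction error for pixel (x)
    return out

def decode_row(data, alpha):
    """Invert encode_row using the identical predictor."""
    m = len(alpha)
    row = list(data[:m])
    for x in range(m, len(data)):
        f_hat = round(sum(alpha[i] * row[x - 1 - i] for i in range(m)))
        row.append(data[x] + f_hat)
    return row

row = [50, 52, 60, 61, 61, 58]
coded = encode_row(row, [1.0])       # order-1 "previous pixel" predictor
assert decode_row(coded, [1.0]) == row
```

Extending this to 2-D or 3-D prediction only changes which previously coded pixels feed the sum, exactly as point 7 describes.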

2. Lossy predictive coding

1. In this method, we add a quantizer to the lossless predictive coding model and
examine the trade-off between reconstruction accuracy and compression
performance within the context of spatial predictors.

FIG 1: A lossy predictive coding model:


(a) encoder;
(b) decoder.

2. As the figure shows, the quantizer, which replaces the nearest-integer function of the error-free encoder, is inserted between the symbol encoder and the point at which the prediction error is formed.

3. It maps the prediction error into a limited range of outputs, denoted $\dot{e}(n)$, which establish the amount of compression and distortion that occurs.
4. In order to accommodate the insertion of the quantization step, the error-free
encoder of Fig. 1(a) must be altered so that the predictions generated by the
encoder and decoder are equivalent.

5. As Fig. 1(a) shows, this is accomplished by placing the lossy encoder's predictor within a feedback loop, where its input, denoted $\dot{f}(n)$, is generated as a function of past predictions and the corresponding quantized errors. That is,

$$\dot{f}(n) = \dot{e}(n) + \hat{f}(n) \qquad (1)$$

6. This closed-loop configuration prevents error buildup at the decoder's output. Note in Fig. 1(b) that the output of the decoder is also given by Eq. (1).
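The closed-loop arrangement of points 4 through 6 can be sketched as follows. The function names, the previous-sample predictor, and the uniform quantizer with step q are all assumptions for illustration; the essential point is that the predictor is fed the reconstructed value $\dot{f}(n)$ of Eq. (1), not the original input, so the encoder tracks exactly what the decoder will produce and quantization error cannot accumulate.

```python
def quantize(e, q=4):
    """Hypothetical uniform quantizer with step size q."""
    return q * round(e / q)

def lossy_encode(f, q=4):
    errors = []
    f_dot_prev = f[0]                   # assume first sample sent exactly
    for n in range(1, len(f)):
        f_hat = f_dot_prev              # predict from the RECONSTRUCTED sample
        e_dot = quantize(f[n] - f_hat, q)
        errors.append(e_dot)
        f_dot_prev = e_dot + f_hat      # Eq. (1): value the decoder will hold
    return f[0], errors

def lossy_decode(first, errors):
    f_dot = [first]
    for e_dot in errors:
        f_dot.append(e_dot + f_dot[-1]) # same Eq. (1) at the decoder
    return f_dot

first, errs = lossy_encode([100, 103, 109, 110, 108])
recon = lossy_decode(first, errs)
# thanks to the feedback loop, each sample stays within q/2 of the input
assert all(abs(a - b) <= 2 for a, b in zip([100, 103, 109, 110, 108], recon))
```

Had the predictor been fed the original samples instead, each step's quantization error would compound at the decoder.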

7. Delta modulation (DM) is a simple but well-known form of lossy predictive coding in which the predictor and quantizer are defined as

$$\hat{f}(n) = \alpha \dot{f}(n-1) \qquad (2)$$

and

$$\dot{e}(n) = \begin{cases} +\zeta & \text{for } e(n) > 0 \\ -\zeta & \text{otherwise} \end{cases} \qquad (3)$$

where $\alpha$ is a prediction coefficient (normally less than 1) and $\zeta$ is a positive constant.

FIG: 2 An example of delta modulation.


8. The output of the quantizer, $\dot{e}(n)$, can be represented by a single bit [Fig. 2(a)], so the symbol encoder of Fig. 1(a) can use a 1-bit fixed-length code. The resulting DM code rate is 1 bit/pixel.

9. Both the input and the completely decoded output [$f(n)$ and $\dot{f}(n)$] are shown. Note that in the rapidly changing area, where $\zeta$ was too small to represent the input's largest changes, a distortion known as slope overload occurs. Moreover, when $\zeta$ was too large to represent the input's smallest changes, as in the relatively smooth region, granular noise appears. In images, these two phenomena lead to blurred object edges and grainy or noisy surfaces (that is, distorted smooth areas).
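Both distortions are easy to reproduce with a sketch of the delta modulator of Eqs. (2) and (3). The function name and the values of α and ζ are illustrative:

```python
def delta_modulate(f, a=1.0, zeta=2.0):
    """Delta modulation: Eq. (2) predictor, Eq. (3) two-level quantizer."""
    f_dot = [f[0]]                     # assume first sample sent exactly
    bits = []
    for n in range(1, len(f)):
        f_hat = a * f_dot[-1]          # Eq. (2): f_hat(n) = a * f_dot(n-1)
        e_dot = zeta if f[n] - f_hat > 0 else -zeta   # Eq. (3)
        bits.append(1 if e_dot > 0 else 0)            # 1 bit per sample
        f_dot.append(e_dot + f_hat)    # closed-loop reconstruction
    return bits, f_dot

# A steep ramp followed by a flat tail: zeta = 2 cannot climb fast enough
# on the ramp (slope overload), then oscillates by +/-zeta on the flat
# region (granular noise).
signal = [0, 6, 12] + [18] * 9
bits, recon = delta_modulate(signal)
print(recon)   # lags the ramp, then alternates 18, 16, 18, ... on the tail
```

Choosing a larger ζ would reduce the slope overload on the ramp but worsen the granularity on the flat region, which is exactly the trade-off point 9 describes.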

10. The distortions noted in the preceding example are common to all forms of lossy predictive coding. The severity of these distortions depends on a complex set of interactions between the quantization and prediction methods employed.

11. Despite these interactions, the predictor normally is designed with the
assumption of no quantization error, and the quantizer is designed to minimize its
own error. That is, the predictor and quantizer are designed independently of each
other.
