
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 42, NO. 11, NOVEMBER 1994

Efficient Codebooks for Vector Quantization Image Compression with an Adaptive Tree Search Algorithm

Vijay S. Sitaram, Chien-Min Huang, and Paul D. Israelsen

Paper approved by A. N. Netravali, the Editor for Image Processing of the IEEE Communications Society. Manuscript received May 22, 1992. This paper was presented in part at the IEEE Data Compression Conference, Snowbird, UT, March 24-27, 1992. V. S. Sitaram is with Expertware, Inc., Santa Clara, CA 95051 USA. C.-M. Huang and P. D. Israelsen are with the Department of Electrical Engineering, Utah State University, Logan, UT 84322 USA. IEEE Log Number 9404732.

Abstract—This paper discusses some algorithms to be used for the generation of an efficient and robust codebook for vector quantization (VQ). Some of the algorithms reduce the required codebook size by 4 or even 8 b to achieve the same level of performance as some of the popular techniques. This helps in greatly reducing the complexity of codebook generation and encoding. We also present a new adaptive tree search algorithm which improves the performance of any product VQ structure. Our results show an improvement of nearly 3 dB over the fixed rate search algorithm at a bit rate of 0.75 b/pixel.

I. INTRODUCTION
IMAGE data compression using vector quantization (VQ) has received a lot of attention in the last decade because of its simplicity and adaptability. The advantage of using vectors over scalars was first shown by Shannon [1] in his rate
distortion theory. VQ requires the input image to be processed
as vectors or blocks of image pixels. The encoder takes in
a vector and finds the best or closest match, based on some
distortion criterion, from its stored codebook. The address of
the best match is then transmitted to the decoder. The decoder
accesses an entry from an identical codebook, thus obtaining
the reconstructed vector. Data compression is achieved in this
process because the transmission of the address requires fewer
bits than transmitting the vector itself.
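As a concrete illustration, the nearest-neighbor search at the heart of this encoder can be sketched as follows (a minimal sketch, not the authors' implementation; it assumes numpy arrays and the squared-error distortion):

import numpy as np

def vq_encode(block, codebook):
    # Distortion of the input block against every stored codevector.
    distortions = ((codebook - block) ** 2).sum(axis=1)
    # Only this index is transmitted, which is where the compression comes from.
    return int(distortions.argmin())

def vq_decode(index, codebook):
    # The decoder holds an identical codebook and simply looks the entry up.
    return codebook[index]

For a 4 x 4 block of 8 b pixels, a 256-entry codebook index costs 8 b instead of the 128 b needed for the raw block.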
The performance of encoding and decoding by VQ is
dependent on the available codebook and the distribution of the
source data relative to it. Hence, the design of an efficient and
robust codebook is of prime importance in VQ. Linde, Buzo,
and Gray first suggested a practical suboptimal clustering
analysis algorithm [2], now known as the LBG algorithm,
to generate a codebook based on some training set. The
drawback of this scheme is that the algorithm only guarantees
a locally optimum codebook relative to the source data used
(the training set). Some of the techniques which have appeared in the literature to overcome this problem are [3]-[5]. The simulated annealing (SA) method of generating a codebook tries to obtain a global optimum by a stochastic relaxation technique. Another algorithm, called deterministic annealing, uses fuzzy clustering and leads to a global optimum. Our goal in developing the algorithms discussed in this paper is to achieve the design of a universal codebook that is relatively insensitive to the scenery changes in different images.

The size of the codebook and the vector dimension also play a major role in determining the overall performance. From Shannon's rate distortion theory we know that the larger the vector dimension, the better the potential performance. However, with increased vector dimension the required codebook size also increases, and the result is an exponential increase in encoding complexity. Hence, for practical reasons one is forced to work with low-dimensional VQ of lower quality, despite the fact that better VQ performance is theoretically possible. The increase in codebook size also introduces the problem of empty cells in the generated codebook [6].
Because of these practical implementation issues, many
forms of constrained VQ have been developed and applied
which provide very useful and favorable tradeoffs between
performance and complexity. Some of the methods suggested
are tree-structured VQ [7], product code techniques [8], [9], transform VQ [10], etc. In mean-residual vector quantization (MRVQ) [11], the mean of the vector is subtracted from each pixel and the residual vector is used for codebook generation. The drawback of this method is that residual values of high amplitudes, which typically represent edge/texture information, are not adequately represented in the generated codebook. To overcome the problem of preserving edge and texture information, gain-shape vector quantization (GSVQ) [12] has been suggested, which normalizes each vector with respect to its gain. A combination of the above two methods, i.e., mean removal and gain normalization of each vector, can
be expected to give better performance than either of them
used individually.
The three types of VQ discussed above can be grouped
under a single class called memoryless VQ, where the present
quantizer output does not depend on its past inputs. Another
class of vector quantizer is a memory or feedback vector
quantizer, in which the present quantizer output depends on
the past inputs. A typical example of a coder with memory
is linear predictive coding (LPC), which is used widely in
speech and image data compression. A type of memory VQ
is finite state vector quantization (FSVQ) [13], which uses a
finite-state machine model to select a vector quantizer from a
set of vector quantizers. A slight variation of LPC is predictive
vector quantization (PVQ) [14], which predicts the next vector


as a single unit. The residual vector, obtained by taking the


difference between the actual and predicted value, is used for
VQ.
The complexity of generating a codebook and encoding
for VQ can be considerably simplified by using a successive approximation tree-structured codebook [15]. Further, a pruning algorithm for the tree-structured codebook has been suggested [16] which produces a variable rate encoder. We present an alternative constant rate algorithm which improves the performance of the tree-structured codebook.
The rest of the paper is organized as follows. In Section II, we present codebook design algorithms for mean-shape VQ and predictive VQ. Section III shows how the adaptive tree algorithm is implemented for a tree-structured codebook. Simulation results are given in Section IV, and finally, in Section V, we arrive at some conclusions.

II. CODEBOOK DESIGN

In this section we describe some algorithms for the design of the codebook. The goal is to minimize the quantization error while encoding. The error should not change considerably when used with different images. The basic algorithm is the LBG algorithm [2], but it has been modified to suit our needs. We first briefly describe the LBG algorithm and then show where the modifications were made to it.

The LBG algorithm designs the codebook of the required size in two stages. The algorithm starts with a codebook of size one, which is the centroid of all the training vectors. This codevector is then perturbed by a small perturbation vector ε to obtain a codebook of size two (double the previous size). Now the entire training set is partitioned based on the two codevectors according to some distortion measure. The centroids of the two partitions are then computed, which represent the two new codevectors. In the second stage, the nearest neighbor and centroid conditions are used to update the codebook. Again, this process of splitting, partitioning, and centroid calculation is repeated, doubling the size of the codebook with each iteration, until the desired codebook size is achieved. The algorithm guarantees that the final codebook obtained is the local optimum for the training set used. For a more detailed discussion of the algorithm along with the mathematical derivations, the reader is referred to [2].
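The two stages can be sketched in a few lines (a hypothetical rendering, not the authors' code: it assumes numpy arrays, the squared-error distortion, and illustrative choices for the perturbation eps and the Lloyd iteration count):

import numpy as np

def lbg(training, target_size, eps=0.01, lloyd_iters=20):
    # Stage 1 seed: a codebook of size one, the centroid of all training vectors.
    codebook = training.mean(axis=0, keepdims=True)
    while len(codebook) < target_size:
        # Splitting: perturb every codevector to double the codebook size.
        codebook = np.concatenate((codebook * (1 + eps), codebook * (1 - eps)))
        # Stage 2: alternate the nearest neighbor and centroid conditions.
        for _ in range(lloyd_iters):
            dists = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            nearest = dists.argmin(axis=1)
            for i in range(len(codebook)):
                members = training[nearest == i]
                if len(members):  # empty cells are the problem noted in [6]
                    codebook[i] = members.mean(axis=0)
    return codebook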
The codebook size and the vector dimension play a vital role in determining the processing complexity for generating the codebook. In fact, it can be shown that for a vector quantizer of dimension k operating at a bit rate of r bits/sample, the required computations and storage space are proportional to k2^{kr}. This exponential increase in computational cost usually forces one to use low dimensions and small codebook sizes, compromising on the quality of the decoded image.

One of the ways suggested to overcome the exponential growth of complexity is a tree structure for the codebook. Even though the tree-structured codebook does not perform as well as full-search VQ, the advantage in terms of the reduction in complexity makes it a very practical option. The codebook generation algorithm is slightly modified to generate the tree structure for the codebook. First, the root level codebook is generated as in full-search VQ. Then each of the vectors in the training set is partitioned to one of the root level codevectors and the LBG algorithm is applied to each partition separately. This generates a two-level codebook and, proceeding similarly, one can grow the tree in depth. For a more detailed discussion of large size codebook generation and the associated problems and solutions, the reader is referred to [6].
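Using the lbg routine sketched above, this recursive construction might look as follows (again a sketch; the dictionary node layout is our assumption, and empty partitions would need extra care in practice):

def grow_tree(training, branches, levels):
    # Design this node's codebook as in full-search VQ.
    codevectors = lbg(training, branches)
    if levels == 1:
        return {"codevectors": codevectors, "children": None}
    # Partition the training set among the codevectors, then recurse.
    dists = ((training[:, None, :] - codevectors[None, :, :]) ** 2).sum(axis=2)
    nearest = dists.argmin(axis=1)
    children = [grow_tree(training[nearest == i], branches, levels - 1)
                for i in range(branches)]
    return {"codevectors": codevectors, "children": children}

A four-level tree with 16 branches per node, as used in Section IV, reaches an effective 16-b codebook while each input vector is compared against only 4 x 16 = 64 codevectors instead of 65 536.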
A. Mean-Shape VQ Codebook Design

In the past literature, MRVQ and GSVQ have been suggested so as to increase the efficiency of the codebook. While MRVQ is suited for flat image vectors, GSVQ is preferable for images with a large dynamic range. The algorithm discussed below combines MRVQ and GSVQ. The mean of the vector is removed from each pixel, and then each pixel is normalized with respect to the vector gain. The term gain used here refers to the l2 norm of the vector. As would be expected, such a combination of techniques yields very good results.

The order of mean removal and gain normalization can affect the performance drastically. For a vector of dimension k, normalization will produce vectors which lie on a k-dimensional unit hypersphere. If the mean of the vector is now subtracted from it, the vector will no longer lie on the hypersphere. On the other hand, if we remove the mean first, we are left with a zero mean residual vector. If we normalize the residual vector with respect to its gain (the gain is actually the standard deviation of the vector), the normalized residual vector will lie on the hypersphere. We can then proceed and design a corresponding codebook so that the codevectors also lie on the same hypersphere. The designed codebook theoretically should have a constant signal-to-noise ratio (SNR) for all input levels [17], i.e., the codebook should be capable of handling an infinitely large dynamic range. Although this is possible only in the ideal case, what happens practically is also very useful. Because of normalization the vectors have a smaller range and can be quantized more efficiently, and as such we can expect a higher SNR. Our computer simulations show that this is indeed the case, and that the price paid for this increase in performance is a small increase in computational complexity and an increase in side information.

Fig. 1. Mean-shape VQ encoder.

Fig. 2. Predictive VQ encoder.

Fig. 1 shows the block diagram of a mean removal and gain normalization vector quantization (MGVQ) encoder. If the input training vector sequence is x_n, we obtain the residual vector y_n by

y_n = x_n - m_n    (1)

where m_n is the vector mean. The l2 norm of the vector y_n is by definition our gain σ_n, and we normalize y_n with respect to σ_n to obtain z_n,

z_n = y_n / σ_n.    (2)

We then use this zero mean, unit gain vector sequence as our training set for VQ codebook generation. Since we are using the normalized vector, the centroid C_r for the rth partition of the training set is given by

(3)
where N is the number of training vectors in the rth partition.
For a discussion on how the centroid condition is affected
by gain normalization, the reader is referred to [17]. With
this modified centroid condition, we now proceed to design
an optimal codebook by applying the LBG algorithm on the
training sequence {zn, o n } ,which is derived from the input
vector sequence { z n } .Now the convergence to a local optimum is guaranteed. The performance results of this codebook
are discussed in the simulation results section.
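The preprocessing of (1) and (2) amounts to only a few lines (a sketch; the zero-gain guard for perfectly flat vectors is our addition):

import numpy as np

def mgvq_preprocess(x):
    m = x.mean()                 # vector mean, scalar quantized as side information
    y = x - m                    # residual vector, eq. (1)
    gain = np.linalg.norm(y)     # l2 norm of the residual, the gain sigma_n
    z = y / gain if gain > 0 else y   # zero mean, unit gain shape vector, eq. (2)
    return z, m, gain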
Throughout our discussion so far, we have bypassed the
implementation issue. The calculation of gain on a computer
would normally use floating point representation and as such
it would be difficult to implement in a digital VLSI chip.
However, if we scale the residual vector with a constant α, and the codebook is also scaled with the same α, we can use only integer
arithmetic. In our simulations we did both floating point and
integer implementations, and the difference in performance
was negligible.
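For instance, a fixed-point variant might look like the following sketch, where the scale factor of 256 is an arbitrary illustrative choice and the codebook is assumed to have been scaled by the same constant offline:

import numpy as np

def to_integer_domain(z, alpha=256):
    # Scaling the normalized residual by alpha lets the encoder match it
    # against an alpha-scaled codebook using integer arithmetic only.
    return np.rint(alpha * z).astype(np.int32)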
B. Predictive VQ Codebook Design

Predictive VQ is a memory type of VQ which uses data


correlation extending over the dimension of a vector. Although
the increase in dimension of the vector with VQ increases
performance, the same is not strictly true with PVQ. This
is because as the vector dimension increases, the accuracy
of the prediction deteriorates. However, this loss in performance is more than compensated for by the improvement in the performance of the vector quantizer operating on the prediction error vector.
The basic predictive vector quantizer encoder is shown in
Fig. 2. A closed-loop prediction x̂_n of the input vector is made from past observations of the reconstructed vector. An error or residual vector e_n is formed by taking the difference between
the input vector and the predicted value. The residual vector
is then vector quantized and the index of the best match is
transmitted to the decoder.
We now discuss some issues involved in the design of
a predictive vector quantizer. PVQ requires the design of

a predictor and a VQ codebook. Given an input training


sequence {x_n}, an open-loop design of the predictor is done so
as to obtain the predictor coefficients. We used an image as
the training set to obtain the predictor coefficients by solving
the causal predictor model of [10]. Once the predictor has
been designed, the VQ codebook design can also be done
using an open-loop approach. An input training sequence is
then used to generate a residual training set, i.e., a training
set based on the open-loop prediction error vectors, which in
turn can be used to design a VQ using the generalized Lloyd
algorithm. This approach of designing a predictor first, without
taking the quantizer into account, simplifies its design. Then
the quantizer is separately designed using the predictor only
for the formation of the VQ training sequence. Although this
is an open-loop approach to the design of the predictor and
VQ codebook, the loop can be closed in a PVQ system
by operating on the reconstructed signal x̃ to produce the prediction x̂ of the next input, as shown in the figure.
PVQ, like VQ, can be subjected to mean removal and/or gain
normalization of the residual vector. The error which increases
with increasing vector dimension can be compensated for by
removing the mean of the error vector and/or normalizing with
respect to its gain. The performance of each of these product
PVQs is discussed in the results section.
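A sketch of the closed-loop encoding step described above (hypothetical names: predictor stands for any function built from the open-loop coefficients, and codebook for the one trained on open-loop residuals):

import numpy as np

def pvq_encode(blocks, predictor, codebook):
    # Closed-loop PVQ: predict from the previously reconstructed vector,
    # vector quantize the residual, and track the decoder's reconstruction.
    indices = []
    recon = np.zeros_like(blocks[0])
    for x in blocks:
        pred = predictor(recon)                       # prediction x-hat
        e = x - pred                                  # residual vector e_n
        i = int(((codebook - e) ** 2).sum(axis=1).argmin())
        indices.append(i)                             # index sent to the decoder
        recon = pred + codebook[i]                    # reconstruction closes the loop
    return indices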
III. CONSTANT RATE ADAPTIVE TREE-SEARCH ALGORITHM
In this section we describe an adaptive tree search algorithm
for a tree-structured vector quantizer (TSVQ). Our algorithm
is particularly suited to exploit the successive approximation
nature of a tree-structured codebook. Given a constant rate, the
encoder varies the depth of the tree up to which it will search for a
match in the codebook. If the allowed rate is high, the encoder
travels down the tree to find the best match terminal node.
But if the rate is reduced, the algorithm degrades gracefully
by simply transmitting shorter words and suffering minimal
increase in distortion. For the product VQ, explained later in
the paper, our algorithm offers another advantage. If the input
vector is flat, then only the mean (or gain) is transmitted; on the other hand, if the vector has a lot of activity in it, the vector codebook is searched for the best match.
We now describe the algorithm, first in words and then
in pseudocode form. The encoder requires a buffer so as to
allow it to adapt to the varying statistics of the input image.
The algorithm described is a closed-loop algorithm and is
characterized by two parameters, the error threshold and the
current buffer level. The error threshold (or simply the threshold) is a function of the current buffer level and the choice
of this function influences the performance of the algorithm.
For simplicity, we have chosen this function to be a linear
function of the buffer level, between an upper and a lower
limit. Initially, the threshold is calculated based on some initial


buffer level. When the input vector arrives, the tree-structured codebook is searched for the best match, starting from the root level and travelling down the tree one level at a time. The distortion between the input vector and the best match is evaluated at each level and compared with the threshold. If the distortion is greater than the threshold, the search is continued travelling down to the lower levels of the codebook. Since we are dealing with a successive approximation tree-structured codebook, we can say that in general, as we travel down the tree, we get a better representation of the input vector and a reduction in the overall distortion. If the distortion is less than or equal to the threshold, further searching of the codebook is terminated and the index of the best match is transmitted along with a prefix to specify the level of the tree where the match was found. The buffer is then updated and the new corresponding threshold is evaluated. By allowing the threshold to increase as the buffer level at the encoder increases, the input image suffers a smooth degradation in quality. We now describe the algorithm in pseudocode form for VQ and later explain the modification to it for product VQ.
1) Evaluate the initial threshold, based on the initial buffer level.
2) For each input vector do the following:
   a) currentlevel = 0;
   b) while (distortion > threshold):
      Search the current level of the tree-structured codebook for the best match of the input vector. Evaluate the distortion with respect to the best match.
      if (distortion ≤ threshold)
         Go to step c).
      else if (currentlevel < maxlevel)
         currentlevel = currentlevel + 1;
      else
         Go to step c).
   c) Evaluate the number of bits (nbits) required for the transmission of the best match. This depends on the level of the tree at which the distortion became less than the threshold.
   d) Update the current buffer level (buflevel) based on the number of bits required for transmission and the number of allowed output bits per vector:
         buflevel = buflevel + nbits - outbits
      where outbits represents the number of bits that the encoder transmits for each vector. Note that since outbits does not depend on the input vector, our algorithm is a constant rate algorithm.
   e) Evaluate the new threshold.
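In executable form, the search might be sketched as follows (our rendering, not the authors' code: it assumes the grow_tree node layout sketched in Section II, squared-error distortion, and a linear threshold function such as lambda b: min(t_max, max(t_min, slope * b))):

import math

def adaptive_tree_encode(vectors, tree, branches, maxlevel, outbits,
                         threshold_fn, buflevel=0):
    bits_per_level = int(math.log2(branches))         # 4 b per level for 16 branches
    prefix_bits = math.ceil(math.log2(maxlevel + 1))  # level-indicator prefix
    threshold = threshold_fn(buflevel)                # step 1)
    out = []
    for x in vectors:
        node, path = tree, []
        while True:                                   # steps a) and b)
            d = ((node["codevectors"] - x) ** 2).sum(axis=1)
            best = int(d.argmin())
            path.append(best)
            if d[best] <= threshold or node["children"] is None:
                break                                 # go to step c)
            node = node["children"][best]
        nbits = prefix_bits + len(path) * bits_per_level   # step c)
        buflevel += nbits - outbits                   # step d)
        threshold = threshold_fn(buflevel)            # step e)
        out.append((len(path), path))                 # prefix and branch indices
    return out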
The algorithm can be modified for product VQ to allow the transmission of only the mean (for MRVQ) or only the gain (for GSVQ). The criterion to decide this is the distortion between the mean (or gain) and the input vector. If this distortion is less than the current threshold, the encoder does not search the vector codebook at all and transmits only the index of the mean (or gain) as the representation of the input vector. This is an adept way of encoding the vector and exploits the product nature of VQ. It is also advantageous in two other ways: one is that it saves search time at the encoder, and the other is that only the mean (or gain) scalar codebook index needs to be transmitted, which saves considerable channel bandwidth. It should be noted that as the output bit rate decreases, a greater percentage of vectors are transmitted as only mean or only gain.

This algorithm is different from the pruning algorithm for classification and regression trees suggested by Breiman et al. [18] and generalized for tree-structured source coding by Chou et al. [16]. While the pruning algorithm prunes a subtree of the tree-structured codebook, thereby restricting its search for a possible match, our algorithm always uses a balanced tree codebook. Another difference is that, while the pruned tree algorithm gives a variable output rate at the encoder, and as such cannot guarantee a certain average rate per vector, the adaptive tree search algorithm gives a constant rate at the output of the encoder.

IV. SIMULATION RESULTS

Extensive computer simulations were carried out to evaluate the performance of each of the algorithms mentioned in the previous sections. We generated six tree-structured codebooks using a training set of about 1000 378 x 480, 8 b images. The vector dimension used for all of the simulations was a subblock of 4 x 4 image pixels. A tree-structured codebook was used to reduce the encoder and decoder complexity with large size codebooks. The codebook is a four-level codebook with 16 branches in each node and was generated using [6]. All the testing was done with these codebooks, with three different images. It should be noted that the images used for testing were not present in the training set. Since the codebooks are tree-structured, we used the same codebook for evaluating the performance when using 4, 8, 12, and 16 b of the codebook.

Fig. 3 shows the plot of the peak signal-to-noise ratio (PSNR) versus the codebook size in bits, for MGVQ, MRVQ, and GSVQ. The plots for MRVQ and GSVQ are shown for comparison purposes only. The PSNR points on the plot are the average for the three images on which testing was done; PSNR is defined by

PSNR = 10 log10 (255^2 / MSE) dB    (4)

where MSE stands for the mean squared error between the decoded image and the original image. From the results, it can be seen that there is considerable improvement in the performance of MGVQ, as compared to GSVQ and MRVQ. This improvement in performance is obtained at the additional cost of a slight increase in complexity and an increase in side information rate. The former problem is not very significant, and the latter problem is overcome by the adaptive tree search algorithm.
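Equation (4) translates directly into code (a sketch for 8 b images):

import numpy as np

def psnr(original, decoded):
    # Mean squared error between the original and decoded images, then eq. (4).
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)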
The performance of PVQ is shown in Fig. 4. It is seen that PVQ with gain normalization only (PVQG) performs better than PVQ with mean removal only (PVQM). The reason for this can be explained by observing the distribution of the residual vector.

Fig. 3. Plot of PSNR versus the codebook size using outside-training-set images as the quantizer input. The vector dimension in each case is 4 x 4.

Fig. 4. Plot of PSNR versus the codebook size using outside-training-set images as the quantizer input. The vector dimension in each case is 4 x 4.

Fig. 5. Plot of PSNR versus the number of bits per vector for the fixed rate and adaptive tree search algorithms. The vector dimension in each case is 4 x 4.

Fig. 6. Plot of PSNR versus the number of bits per vector for the fixed rate and adaptive tree search algorithms. The vector dimension in each case is 4 x 4.

The residual vector has a wide range


and is close to random noise. Hence, the normalization of
it with the vector gain gives a more uniform distribution, thus
allowing the quantizer to perform better than in the case of
PVQM. Also, PVQ with mean removal and gain normalization
(PVQMG) performs the best among all the algorithms. PVQ
requires the evaluation of the predictor coefficients for each
image, but once the required coefficients are obtained, the
increase in computation cost is only fractional.
For plotting the performance of the adaptive tree search
algorithm, we have added the 4 b of scalar quantization
for the mean and/or gain. Thus, the graphs shown in Figs.
5 and 6 represent the total number of bits transmitted by
the encoder for each input vector. In the figures the symbol
ADTMRVQ stands for the performance of the MRVQ with the
adaptive search algorithm presented in this paper, and similarly
other product VQs are prefixed with ADT to show their
corresponding relation. The performance of the adaptive tree
search algorithm was very good and it produced considerable


performance improvement over fixed rate tree-structured VQ.


The adaptive tree search algorithm is simple to implement
on digital hardware and the gain in performance is worth the
overhead. At low bit rates, the performance improvement of
the adaptive tree search algorithm for all of the product VQ
cases was significant. But, as the output bit rate approached
that of the fixed rate VQ, the curve saturated and the algorithm
performed as good as fixed rate VQ. Our results show that
by using MGVQ with adaptive tree search we obtain a
performance improvement of about 3 dB over the fixed rate
MRVQ, when the output rate is 14 b per vector. One limitation
of our scheme, is that the encoder should also transmit to the
decoder the level at which the match to the input vector was
found. This requires an additional log2 5 bits to be transmitted.
If we use a lossless encoder such as a Huffman coder between
the encoder output and the buffer, we can reduce the average
bit rate by this amount or more. Our experience shows that
we will not suffer in terms of performance by doing this.
Finally, we carried out some tests on the USC database image Lena so as to benchmark our results. Table I gives the
peak SNR of different algorithms for varying codebook sizes.

IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 42, NO. 1 1 , NOVEMBER 1994

3032

TABLE I
PERFORMANCE OF ALGORITHMS ON IMAGE LENA. THE VECTOR DIMENSION IN EACH CASE IS 4 x 4

                     Codebook Size in bits
Algorithm        4        8        12       16
MRVQ (dB)      30.14    33.10    35.03    36.58
GSVQ (dB)      30.11    33.10    35.01    36.47
MGVQ (dB)      31.63    35.00    36.82    38.22
PVQM (dB)      30.79    33.42    35.36    36.83
PVQG (dB)      30.64    33.58    35.68    37.13
PVQMG (dB)     32.19    35.07    36.99    38.39

TABLE II
PERFORMANCE OF ADAPTIVE TREE SEARCH ON IMAGE LENA. THE VECTOR DIMENSION IN EACH CASE IS 4 x 4

               Number of bits per vector (or rate in b/pixel)
Algorithm     8 (0.5)   12 (0.75)   16 (1.0)   20 (1.25)
MRVQ (dB)      35.05      36.10      36.45      36.58
GSVQ (dB)      34.97      36.04      36.37      36.47
MGVQ (dB)      36.22      36.97      37.30      38.06
PVQM (dB)      34.98      36.63      36.80        -
PVQG (dB)      34.66      36.29      36.89      37.08
PVQMG (dB)       -        36.98      37.71      38.18

Subjectively, the reconstructed image quality was very good


for all the algorithms with a codebook size of more than 12 b.
When the adaptive tree search algorithm was used to encode
the vectors, the resulting improvement is listed in Table II. For instance, comparing the results of Table I for MRVQ with a codebook size of 8 b (plus the 4 b of mean required to transmit each vector) and Table II for MGVQ with 12 b devoted to each vector, we see that the adaptive tree search increases the peak SNR by 3.8 dB. This improvement for MGVQ is at a bit rate of 0.75 b/pixel and is obtained at a slight increase in complexity.


V. CONCLUSION
In this paper we have discussed a number of algorithms
which achieve the design of an efficient and robust codebook.
It has also been shown that the same level of performance is
achieved by reducing the codebook size by 4 or even up to
8 b. These reductions in the codebook size are obtained with
only a slight increase in computational complexity. Reducing the
codebook size exponentially reduces the codebook generation
complexity and also the encoding complexity. The reduced
complexity can be used in several applications such as high-definition television. On the other hand, if one can cope
with large size codebooks, the potential theoretical increase
in performance is significant.
The adaptive tree-search algorithm is a new approach and
provides considerable improvement over the fixed rate VQ.
The improvement in performance of the algorithm is more
pronounced for low bit rates, wherein the product nature of the
VQ is exploited. The algorithm is fast and is simple enough
to implement in hardware at video rates. Hence, it can be
effectively used in real-life applications.
REFERENCES
[1] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379-423, 623-656, 1948.
[2] Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Commun., vol. COM-28, pp. 84-95, Jan. 1980.
[3] M. R. Anderberg, Cluster Analysis for Applications. New York: Academic, 1973.
[4] J. MacQueen, "Some methods for classification and analysis of multivariate observations," in Proc. Fifth Berkeley Symp. Math., Stat., Prob., vol. 1, pp. 281-296, 1967.
[5] A. K. Jain and R. C. Dubes, Algorithms for Clustering Data. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[6] C.-M. Huang, "Large size vector quantization generation analysis and design," Ph.D. dissertation, Utah State Univ., 1991.
[7] A. Buzo, A. H. Gray, R. M. Gray, and J. D. Markel, "Speech coding based upon vector quantization," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-28, pp. 562-574, Oct. 1980.
[8] M. J. Sabin and R. M. Gray, "Product code vector quantizers for speech waveform coding," in Conf. Rec. GLOBECOM 1982, Dec. 1982, pp. 1087-1091.
[9] M. J. Sabin and R. M. Gray, "Product code vector quantizers for waveform and voice coding," IEEE Trans. Acoust., Speech, Signal Processing, vol. 32, pp. 474-488, Apr. 1984.
[10] A. K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[11] R. L. Baker, "Vector quantization of digital images," Ph.D. dissertation, Stanford Univ., June 1984.
[12] H. J. Lee and D. T. L. Lee, "A gain-shape vector quantizer for image coding," in Proc. IEEE ICASSP, Mar. 1986, pp. 141-144.
[13] M. O. Dunham and R. M. Gray, "An algorithm for the design of labeled-transition finite state vector quantizers," IEEE Trans. Commun., vol. 33, pp. 83-89, Jan. 1985.
[14] V. Cuperman and A. Gersho, "Vector predictive coding of speech at 16 kbit/s," IEEE Trans. Commun., vol. 33, no. 7, pp. 685-696, July 1985.
[15] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Boston, MA: Kluwer Academic, 1992.
[16] P. A. Chou, T. Lookabaugh, and R. M. Gray, "Optimal pruning with applications to tree structured source coding and modeling," IEEE Trans. Inform. Theory, vol. 35, pp. 299-315, Mar. 1989.
[17] J. H. Chen and A. Gersho, "Gain-adaptive vector quantization with applications to speech coding," IEEE Trans. Commun., vol. COM-35, pp. 918-930, Sept. 1987.
[18] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and Regression Trees, The Wadsworth Statistics/Probability Series. Belmont, CA: Wadsworth, 1984.

Vijay S. Sitaram was born in March 1968 in


Mysore, India. He received the B.S. degree in
electronics and communication engineering from the
University of Mysore, India, in 1989, and the M.S.
degree in electrical engineering from Utah State
University at Logan, UT, in 1992.
During his graduate study, he was a research
assistant in the Department of Electrical Engineering
working on digital image compression algorithms.
He is currently working as an engineer with Expertware Inc., Santa Clara, CA. His research interests
include signal processing, communications, and medical imaging.

Chien-Min Huang was born in Taiwan in 1958. He


received the B.S. degree in control engineering from
National Chiao-Tung University, Hsinchu, Taiwan
in 1980, and the M.S. and Ph.D. degrees in electrical
engineering from Utah State University, Logan, UT,
in 1988 and 1991, respectively.
From 1982 to 1985 he was a research assistant in
Chung-Shan Institute of Science and Technology,
Taiwan. From 1990 to 1991 he was a research
engineer for Globesat Holding Corp., Logan, UT.
Since 1991, he has been with the Space Dynamics Laboratory/Utah State University as a senior engineer and Utah State University as an adjunct assistant professor. His research interests include data compression, signal processing, and pattern recognition.

Paul D. Israelsen was born in Logan, UT, in


November, 1953. He received the B.S. degree in
electrical engineering from Utah State University in
1982, and the M.S. degree in electrical engineering
from the University of Utah in 1985. He is presently
completing the Ph.D. degree in electrical engineering at Utah State University.
He worked for several years in industry as a
VLSI design and systems engineer. In 1986 he returned to Utah State University to implement their
VLSI design and research program. He is presently
teaching VLSI design courses in the Electrical Engineering department and is
involved in research in VLSI architectures for DSP and the algorithmic and
hardware design of image compression systems.
