
24th International Conference Image and Vision Computing New Zealand (IVCNZ 2009)

An Efficient and Selective Image Compression Scheme using Huffman and Adaptive Interpolation
Sunil Bhooshan
Department of ECE, Jaypee University of Information Technology, Solan, India. Email: sunil.bhooshan@juit.ac.in

Shipra Sharma
Department of CSE and IT, Jaypee University of Information Technology, Solan, India. Email: shipra.sharma@juit.ac.in

Abstract: This paper proposes a hybrid approach to compression that incorporates lossy as well as lossless compression. Different parts of the image are compressed in one way or the other, depending on the amount of information held in each part. The scheme consists of two stages. In the first stage the image is filtered by a high-pass filter to find the areas that contain detail. In the second stage, an effective scheme based on Huffman coding and adaptive interpolation is developed to encode the original image. With this algorithm a good compression ratio is obtained, while PSNR and SSIM are better than those of other methods available in the literature. In other words, the newly proposed algorithm provides an efficient means of image compression.

I. Introduction

Image compression literally means reducing the size of a graphics file without compromising its quality. Depending on whether the reconstructed image has to be exactly the same as the original or some loss may be incurred, two techniques for compression exist: lossless compression is the term used for the former and lossy compression for the latter. Lossy compression techniques achieve very high compression ratios, but the decompressed image is not exactly the same as the original. These methods take advantage of the fact that, to a certain extent, the human eye cannot differentiate between the images even though noise exists in the decompressed image. Lossless methods, on the other hand, give much lower compression ratios but recover the original image exactly. Most advances in the compression field are in lossy compression [1]. Lossy compression schemes proposed recently use wavelet transforms [2], [3], but wavelets prove to be computationally expensive and the problem of edge pixels also persists. For the proposed algorithm we use lossy compression as proposed in [4]; it is a computationally inexpensive method and gives visibly good results as far as lossy compression is concerned. Lossless compression methods like [5], [6], [7], [8] have much lower performance with respect to lossy compression [9]. The Huffman coding scheme [10] is an

entropy-encoding, lossless method of compression. It produces the fewest bits per symbol on average and has been extensively researched over the last five decades [11]. This coding method has been used in one version of lossless JPEG, JPEG-LS [12], and is computationally less expensive than the arithmetic version of JPEG [1]. Various methods exist in the literature which combine the two approaches. The one in [13] targets different bit rates, while [14] focuses on multiresolution transformation. Others, like [15], [16], concentrate on optimizing a particular technique. In this paper we present a simple algorithm incorporating the Huffman coding scheme and adaptive interpolation. We then compare it with JPEG2000, which is chosen as it provides better compression than the traditional JPEG [17].

The paper is organized as follows. Section II deals with the proposed method: the first part of that section describes the compression technique and the second part deals with decompression. Section III shows some computational results which prove that our method indeed gives good-quality images while giving a very high compression ratio. Lastly, Section IV takes up the conclusion and future work.

II. The Method

The method considered in this paper is outlined for grayscale images but can be extended to colored ones. The image is treated as a smooth function of x and y, even though it is not strictly so. A b-bit-per-pixel grayscale image of size m × n pixels is considered for compression. Since we are considering b bits, the gray-level values of the pixels in this image will range from 0 to 2^b − 1, where 0 represents black, 2^b − 1 represents white, and intermediate values represent the transition from black to white.



A. Compression

The stepwise procedure to compress a given image is as follows:

Step 1: Denote the image as a matrix of gray-level intensity values and represent it by I.

Step 2: Pass I through a high-pass filter and call the result IHP. The passband frequency of the high-pass filter must be chosen in such a way that the resulting filtered image retains enough detail; in other words, the order of the filter is decided on the basis of the amount of information to be retained. Refer to [18] for an in-depth discussion.

Step 3: IHP is divided into square blocks of some size, say 9 × 9; we take ⌈m/9⌉ blocks vertically and ⌈n/9⌉ horizontally, where ⌈·⌉ denotes the ceiling function.

Step 4: For each such block:
1) The gray value at each position (x, y) is obtained; we have 81 such values, one for each pixel position.
2) If not more than half of these values, i.e., 42 (the threshold parameter), are zero, the block is marked for Huffman encoding; otherwise it is marked for adaptive interpolation. (A sketch of this classification is given after Figure 1.)

Step 5: Computations from now on are carried out on the original image and not on the filtered one. The original image is also divided into 9 × 9 blocks, as the filtered image was, and these blocks are numbered so that we can keep track of which block is marked for which type of compression.

Step 6: Each block, starting from block number 1, is checked for the method it was marked for in Step 4.

Step 7: All the blocks marked for Huffman encoding are placed together row-wise in a new image matrix, ImgHu.

Step 8: In ImgHu:
1) For each position (x, y), the gray value Gn is obtained.
2) The number of occurrences of each Gn is calculated.
3) ImgHu is encoded using the values calculated above; a sketch of the code construction follows this list.
4) The encoded matrix, which can be further compressed using LZW or arithmetic coding, is represented as ComHu.

Step 9: All the blocks marked for adaptive interpolation are placed together row-wise in an image matrix, ImgInt.
1) ImgInt is divided into 3 × 3 blocks.
2) The centre pixel of each block is chosen, as in [4].
3) The chosen pixels form the compressed image, say ComInt.
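To make Step 8 concrete, a minimal sketch of Huffman code construction is given below. This is a generic Python illustration of the algorithm in [10] (the paper's experiments used MATLAB); the function name and the example values are our own, not the paper's implementation.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table for an iterable of symbols,
    e.g. the gray values collected from the blocks marked lossless."""
    freq = Counter(symbols)
    if len(freq) == 1:                     # degenerate one-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tie-breaker, {symbol: codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)    # two least frequent subtrees
        f2, t, c2 = heapq.heappop(heap)
        # Prefix their codewords with 0 and 1 and merge the subtrees.
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, t, merged))
    return heap[0][2]

codes = huffman_code([5, 5, 5, 9, 9, 200])
# The most frequent gray value (5) receives the shortest codeword.
```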
Fig. 1: Proposed Compression Scheme.
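The classification of Steps 2-4 might be sketched as follows in Python/NumPy. The Laplacian kernel (a stand-in for the high-pass filter designed as in [18]), the zero tolerance eps, and all names here are our assumptions; this is an illustrative sketch, not the paper's code.

```python
import numpy as np
from scipy.ndimage import convolve

def classify_blocks(img, block=9, threshold=42, eps=1e-6):
    """Return a boolean matrix over 9x9 tiles of `img`:
    True -> Huffman (lossless), False -> adaptive interpolation (lossy)."""
    # Stand-in high-pass filter: a discrete Laplacian kernel.
    kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
    detail = convolve(img.astype(float), kernel, mode='nearest')
    # ceil(m/9) x ceil(n/9) tiles; pad the filtered image with zeros.
    h = -(-img.shape[0] // block)
    w = -(-img.shape[1] // block)
    padded = np.zeros((h * block, w * block))
    padded[:img.shape[0], :img.shape[1]] = detail
    flags = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            tile = padded[i*block:(i+1)*block, j*block:(j+1)*block]
            zeros = int(np.sum(np.abs(tile) < eps))  # near-zero responses
            # At most `threshold` zeros -> enough detail -> lossless.
            flags[i, j] = zeros <= threshold
    return flags
```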

Hence the two images, ComHu and ComInt, are the resultant compressed images corresponding to the original image. The overall flow of the proposed compression method is depicted in Figure 1.

B. Variants of Compression (Based on the Threshold Parameter)

The proposed algorithm gives the user flexibility in the degree of lossless and lossy compression to be applied. To explain further: if the compression ratio is to be changed, the threshold parameter (the number of zeros counted to decide whether a block is encoded losslessly or in a lossy manner) should be changed. For example, we can increase it to more than half if the compression ratio is to be decreased. In other words, the threshold parameter can be varied to increase or decrease the amount of lossless (and lossy) compression, as the brief example below shows. This feature is further demonstrated in the results section.
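As a usage sketch, reusing the hypothetical classify_blocks above, a threshold sweep shows how the lossless share of blocks, and hence the compression ratio, can be tuned:

```python
# Assuming `img` is a 2-D gray-level NumPy array loaded beforehand.
# Lower thresholds push more blocks to lossy interpolation (higher
# compression, lower PSNR); higher thresholds do the opposite.
for t in (10, 20, 30, 40, 50, 60, 70):
    flags = classify_blocks(img, threshold=t)
    print(f"threshold={t}: {flags.mean():.1%} of blocks coded losslessly")
```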


Fig. 2: Decompressing a Block by Adaptive Interpolation (initial pixels vs. interpolated pixels).

Fig. 3: One Block in the Reconstructed Image.

C. Decompression

To reconstruct the original image, the two compressed images, ComHu and ComInt, are considered. The procedure is outlined below:

Step 1: We start with block number 1 and check whether it belongs to the ComHu or the ComInt matrix.

Step 2: If the nth block belongs to ComHu:
1) It is stored in a new image matrix, ImgHuf1. This is done until all blocks compressed by the Huffman method are stored in ImgHuf1.
2) ImgHuf1 is decoded back to the original pixel values using the Huffman decoding algorithm.
3) The decoded blocks are placed in the reconstructed image, say DecomImg, according to their numbers.

Step 3: If the nth block belongs to ComInt:
1) We consider the 9 pixels in a block of size 3 × 3.
2) Two pixels are interpolated, as in [4], between every two adjacent pixels. This is depicted in Figure 2.
3) As can be observed from Figure 2, the above step returns a block of 7 × 7 instead of 9 × 9. For the time being, the values of the adjacent rows and columns are copied to make it 9 × 9 in size. (A sketch of this expansion is given after Figure 4.)
4) This block is placed in the reconstructed image, DecomImg, according to its number.

Step 4: To bring DecomImg closer to the original image, we consider all those blocks which were decompressed using adaptive interpolation. This is done because image data contains a large amount of local redundancy [19]. To explain, consider an interpolated block starting at position (p, q) in DecomImg (depicted in Figure 3):
1) We take the gray values of the pixels from (p − 1, q) to (p − 1, q + 9) and from (p + 1, q) to (p + 1, q + 9), obtain their average, and put it in positions (p, q) to (p, q + 9). Similarly, the average of the pixels from (p + 8, q) to (p + 8, q + 9) and from (p + 10, q) to (p + 10, q + 9) is placed in positions (p + 9, q) to (p + 9, q + 9).
2) Similarly, we take the gray values of the pixels from (p, q − 1) to (p + 9, q − 1) and from (p, q + 1) to (p + 9, q + 1), obtain their average, and put it in positions (p, q) to (p + 9, q). The average of the pixels from (p, q + 8) to (p + 9, q + 8) and from (p, q + 10) to (p + 9, q + 10) is placed in positions (p, q + 9) to (p + 9, q + 9).
3) The above two steps are repeated for all interpolated blocks in DecomImg, and the resultant image, say FinDecomImg, is obtained.

Step 5: FinDecomImg is the final reconstructed image.

The overall flow of decompression is pictorially represented in Figure 4.

Fig. 4: Proposed Decompression Method.
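A sketch of the block expansion in Step 3 follows; plain linear interpolation stands in for the adaptive interpolation of [4], whose exact rule is not reproduced here, and the function name is ours. The neighbour-averaging of Step 4 is then applied across adjacent blocks in DecomImg.

```python
import numpy as np

def expand_block(b3):
    """Expand a 3x3 block of kept pixels to 7x7 by inserting two
    interpolated pixels between each adjacent pair (Step 3.2), then
    copy the border rows/columns to reach 9x9 (Step 3.3)."""
    src = np.array([0, 3, 6])   # kept pixels land at positions 0, 3, 6
    dst = np.arange(7)
    # Interpolate along each row, then along each column of the result.
    rows = np.array([np.interp(dst, src, b3[i, :]) for i in range(3)])
    b7 = np.array([np.interp(dst, src, rows[:, j]) for j in range(7)]).T
    return np.pad(b7, 1, mode='edge')   # 7x7 -> 9x9 by edge replication
```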

III. Computational Results

The experiments were performed using the scheme described in the previous section. All experiments were done on a COMPAQ PC with Windows XP, using MATLAB 7.1.1. Different facets of the algorithm are taken into consideration. Tests were conducted on a large set of images, based on the different amounts of edge information available to make a region significant and on different values of the threshold parameter. The following subsections take up the various aspects of applying the algorithm. The images used in this paper are:

2495 × 3011 man and horse image with 256 gray levels
1530 × 1530 scenic image with 256 gray levels
1153 × 1153 text image with 256 gray levels
500 × 500 frog image with 256 gray levels

A. Results

Figure 5a shows an original image of size 2.7 MB. When our compression algorithm is applied to it, two compressed images of sizes 4 bytes and 77.2 KB are obtained. When these compressed images are passed through the reconstruction algorithm, the result is the image shown in Figure 5b. As we can observe from the results, although both lossy and lossless compression are used, a high compression ratio, 1:84, is obtained. The loss of information is very small in comparison to purely lossy compression techniques, and the compression ratio is high with respect to purely lossless techniques. Table I shows the compression ratio for images of different sizes. As can be noticed in the table, the text image has a lower compression ratio, as it has more minute content than the other images.


Fig. 5: Scenery. (a) Original Image. (b) Decompressed Image.

B. Comparison with JPEG2000 (.JP2)

To compress images as .jp2, a freely available JPEG2000 compressor, A3D compressor 1.0, was used. The comparison is based on PSNR and SSIM for the same compression ratio obtained for a particular image.

PSNR stands for Peak Signal-to-Noise Ratio. It is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the quality of its representation; PSNR is a measure of peak error [20]. A higher PSNR indicates that the reconstructed image is of higher quality. To calculate PSNR, the MSE (Mean Square Error) is first calculated as

MSE = \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \lVert O(i,j) - D(i,j) \rVert^2 \qquad (1)

where O is the original image and D is the decompressed one. Using the MSE, PSNR is calculated as

PSNR = 10 \log_{10} \frac{255 \times 255}{MSE} \qquad (2)

SSIM measures the similarity between images. It is a reference metric which measures the quality of an image with respect to the original image. It is calculated on sections of the image. If x and y are two sections, it is calculated as in Equation (3):

SSIM(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\,\mathrm{cov}_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \qquad (3)


where $\mu_x$ is the average of x, $\mu_y$ the average of y, $\sigma_x^2$ the variance of x, $\sigma_y^2$ the variance of y, $\mathrm{cov}_{xy}$ the covariance of x and y, $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$ two variables that stabilize the division when the denominator is weak, L the dynamic range of the pixel values, and $k_1 = 0.01$ and $k_2 = 0.03$ by default. Table I shows that for the same compression ratio, i.e., the same amount of compression, our method gives a better quality decompressed image in almost all cases. One exceptional case is when the image contains text; then the performance of our method degrades slightly. Even then the performance can be improved by increasing the zero-count threshold for the filtered image in our algorithm, so that more blocks are Huffman coded and the quality of the image improves. Our decompressed images are of higher quality than those of JPEG2000. This is made clearer by taking a part of the decompressed image acquired by our algorithm and zooming in on it, and doing the same with the image acquired by JPEG2000.
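A direct NumPy transcription of Equations (1)-(3) might look as follows. This is our illustration, assuming 8-bit images (so L = 255) and computing Equation (3) over one pair of sections; it is not the code behind the reported numbers.

```python
import numpy as np

def psnr(original, decompressed):
    """Equations (1)-(2) for 8-bit images (peak value 255)."""
    o = original.astype(float)
    d = decompressed.astype(float)
    mse = np.mean((o - d) ** 2)             # Equation (1)
    return 10 * np.log10(255.0 ** 2 / mse)  # Equation (2)

def ssim_section(x, y, L=255, k1=0.01, k2=0.03):
    """Equation (3) over one pair of image sections x and y."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                # variances of x and y
    cov = ((x - mx) * (y - my)).mean()       # covariance of x and y
    return ((2*mx*my + c1) * (2*cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```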



Fig. 6: Zoomed Images. (a) Decompressed and zoomed image obtained from our algorithm. (b) Decompressed and zoomed image obtained from the JPEG2000 algorithm.

TABLE I: Compression Ratios and Comparison.

Image Name | OS       | CS       | CR   | PSNR (A) | PSNR (J) | SSIM (A) | SSIM (J)
M&H        | 21.5 MB  | 334.3 KB | 1/65 | 45.113   | 37.994   | 0.989    | 0.798
Scenery    | 2.7 MB   | 77.3 KB  | 1/84 | 36.747   | 27.346   | 0.974    | 0.794
Text       | 1.28 MB  | 84.7 KB  | 1/46 | 26.734   | 21.553   | 0.613    | 0.523
Frog       | 136.2 KB | 7.2 KB   | 1/54 | 27.137   | 20.233   | 0.869    | 0.769

(OS = Original Size, CS = Compressed Size, CR = Compression Ratio, A = Our Algorithm, J = JPEG2000)

TABLE II: Varying Threshold Parameter.

Threshold Parameter | CR    | PSNR   | SSIM
10 (< half)         | 1/146 | 23.1   | 0.580
20 (< half)         | 1/143 | 23.105 | 0.582
30 (< half)         | 1/138 | 25.157 | 0.613
40 (= half)         | 1/84  | 36.7   | 0.974
50 (> half)         | 1/30  | 43.037 | 0.993
60 (> half)         | 1/27  | 57.776 | 0.999
70 (> half)         | 1/25  | 58.1   | 0.999

Figure 6a depicts the image obtained by the proposed algorithm and Figure 6b shows that obtained by JP2. As is clearly visible, details are lost in the latter. This shows that the proposed algorithm maintains a much better quality of the reconstructed image than JPEG2000 for the same compression ratio.

C. Variants of Compression

As discussed earlier, we depict the effect of changing the threshold parameter. For the image in Figure 5, Table II shows the effect on the compression rate of varying the threshold parameter. The effect of the threshold value on PSNR and SSIM can also be observed. So, depending on the requirements, we can select a tradeoff between the quality of the decompressed image and the compression ratio.

D. Edge Information

Depending on the smoothness of the image obtained from the filter, the threshold value can be varied. If we have an image with sharp edges, that tells us that details have been removed, and hence the threshold parameter can be decreased, and vice versa.

IV. Conclusion and Future Work

We have proposed a new compression technique which accommodates both lossy and lossless compression. Common compression techniques focus on either a lossless or a lossy mechanism; the proposed method is a combination of both. In addition, we can decide the degree of lossy and lossless compression. Block coding methods often apply the same method to all blocks.



Here, how a block is coded depends on the details it carries. Adaptive interpolation and Huffman coding were the two methods used to implement this. The proposed method can be used for any type of image. For example, if we have a medical image, where no data loss can be tolerated, then Huffman coding will be executed on almost all blocks. In another case, where some loss of data is acceptable, interpolation will be used more often. Therefore, based on the type of image and where it is to be used, we can decide what quality of compression we require. It can be observed that the method is not computationally complex. Results show that it gives better quality images than JPEG2000 for the same compression ratio. We are working towards making a neural network learn the threshold on which to decide whether a block is to be compressed in a lossy or lossless manner.

References
[1] X. Li, Y. Shen, and J. Ma, "An efficient medical image compression," in Proc. IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, September 1-4, 2005.
[2] J. W. Woods, Ed., Subband Image Coding. Boston, MA: Kluwer Academic Publishers, 1991.
[3] J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Trans. Signal Processing (Special Issue on Wavelets and Signal Processing), vol. 41, no. 12, pp. 3445-3462, Dec. 1993.
[4] S. Bhooshan and S. Sharma, "Image compression and decompression using adaptive interpolation," in Proc. WSEAS International Conference on Signal Processing, Robotics and Automation, University of Cambridge, Cambridge: WSEAS, Feb. 21-23, 2009.
[5] M. Rabbani and P. Jones, "Digital image compression techniques," SPIE Opt. Eng. Press, Bellingham, Washington, Tech. Rep., 1991.
[6] G. Kuduvalli and R. Rangayyan, "Performance analysis of reversible image compression techniques for high-resolution digital teleradiology," IEEE Trans. Med. Imaging, vol. 11, pp. 430-445, Sept. 1992.
[7] "Progressive bi-level image compression," CCITT Draft Recommendation T.82, ISO/IEC Committee Draft 11544, Sept. 1991.
[8] M. Rabbani and P. W. Melnychuck, "Conditioning contexts for the arithmetic coding of bit planes," IEEE Trans. Inform. Theory, vol. 40, pp. 108-117, 1994.
[9] A. Said and W. A. Pearlman, "An image multiresolution representation for lossless and lossy compression," to appear in IEEE Transactions on Image Processing.
[10] D. Huffman, "A method for the construction of minimum-redundancy codes," Proc. IRE, vol. 40, pp. 1098-1101, Sept. 1952.
[11] R. Ponalagusamy, E. Kannan, and M. Arock, "A Huffman decoding algorithm in mobile robot platform," Information Technology Journal, vol. 6, no. 5, pp. 776-779, 2007.
[12] W. B. Pennebaker and J. L. Mitchell, JPEG: Still Image Data Compression Standard. New York: Van Nostrand Reinhold, 1993.
[13] D. Marpe, G. Blättermann, J. Ricke, and P. Maaß, "A two-layered wavelet-based algorithm for efficient lossless and lossy image compression," IEEE Transactions on Circuits and Systems for Video Technology, vol. 10, no. 7, 2000.
[14] A. Said and W. A. Pearlman, "An image multiresolution representation for lossless and lossy compression," IEEE Transactions on Image Processing, vol. 5, no. 9, September 1996.
[15] S.-G. Miaou and S.-N. Chao, "Wavelet-based lossy-to-lossless ECG compression in a unified vector quantization framework," IEEE Transactions on Biomedical Engineering, vol. 52, no. 3, March 2005.
[16] W. Philips, "The lossless DCT for combined lossy/lossless image coding," in Proc. ICIP, vol. 3, October 1998, pp. 871-875.
[17] S. Haseeb and O. O. Khalifa, "Comparative performance analysis of image compression by JPEG 2000: A case study on medical images," Information Technology Journal, vol. 5, no. 1, pp. 35-39, 2006.
[18] S. Bhooshan and V. Kumar, "Design of two-dimensional linear phase Chebyshev FIR filters," in Proc. 9th IEEE/IET International Conference on Signal Processing, October 2008.
[19] S.-K. Kil, J.-S. Lee, D.-F. Shen, J.-G. Ryu, E.-H. Lee, H.-K. Min, and S.-H. Hong, "Lossless medical image compression using redundancy analysis," IJCSNS International Journal of Computer Science and Network Security, vol. 6, no. 1A, pp. 50-56, January 2006.
[20] N. R. Wanigasekara, S. Zuangzhi, and Y. Zeng, "Quality evaluation for JPEG 2000 based medical image compression," IEEE Trans. Image Proc., vol. 8, 2003, pp. 1687-1697.

