International Journal of Emerging Technology and Advanced Engineering
Website: www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 4, Issue 1, January 2014)
Test vectors are compressed using Rice entropy coding [1]. The entropy coder first converts the input vectors xi into preprocessed samples δi. The preprocessing is done by a predictor followed by a prediction error mapper: based on the predicted value yi, the mapper converts each prediction error value Δi into an n-bit non-negative integer δi, which is suitable for processing by the entropy coder.

Figure 1: Block diagram of the Rice algorithm architecture.

For example, test vectors of the benchmark circuit S298 (MINTEST [6]) are taken, and the predictor is applied to 8-bit data values in the range 0 to 255, as shown below. Here the predicted value yi is the previous sample, xi-1.

Test Vector | xi  | yi  | Δi = xi − yi | θi  | δi
11101000    | 232 | -   | -    | -   | -
01110100    | 116 | 232 | -116 | 23  | 139
11001000    | 200 | 116 | 84   | 116 | 168
11000000    | 192 | 200 | -8   | 55  | 15
11101010    | 234 | 192 | 42   | 63  | 84
11100101    | 229 | 234 | -5   | 21  | 9
00001010    | 10  | 229 | -219 | 26  | 245
10111000    | 184 | 10  | 174  | 10  | 184
10100111    | 167 | 184 | -17  | 71  | 33
01110101    | 117 | 167 | -50  | 88  | 99
00000010    | 2   | 117 | -115 | 117 | 229
10111111    | 191 | 2   | 189  | 2   | 191
11101100    | 236 | 191 | 45   | 64  | 90
10000001    | 129 | 236 | -107 | 19  | 126
11011010    | 218 | 129 | 89   | 126 | 178
11000110    | 198 | 218 | -20  | 37  | 39
10000011    | 131 | 198 | -67  | 57  | 124

Table 1: Preprocessor
If xmin and xmax respectively represent the minimum and maximum values of any input sample xi, then the predicted value yi obviously lies within this range [that is, between xmin and xmax]. Consequently, the prediction error value Δi will be one of the 2^n values in the range [xmin − yi, xmax − yi]. For a well-chosen predictor, small values of |Δi| are more likely than large values. The prediction error mapping function is therefore:

δi = 2Δi           if 0 ≤ Δi ≤ θi
   = 2|Δi| − 1     if −θi ≤ Δi < 0
   = θi + |Δi|     otherwise,

where θi = min(yi − xmin, xmax − yi).
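The predictor and mapping function above can be sketched as follows. This is a minimal Python sketch, assuming a previous-sample predictor (yi = xi-1) over 8-bit data; it reproduces the δi column of Table 1 (function names are ours, for illustration).

```python
# Sketch of the preprocessor: previous-sample predictor followed by the
# prediction error mapper defined above (assumes 8-bit samples, 0..255).
X_MIN, X_MAX = 0, 255

def map_error(delta, y):
    """Map a prediction error delta (given predicted value y) to a
    non-negative integer, small |delta| mapping to small integers."""
    theta = min(y - X_MIN, X_MAX - y)
    if 0 <= delta <= theta:
        return 2 * delta
    if -theta <= delta < 0:
        return 2 * abs(delta) - 1
    return theta + abs(delta)

def preprocess(samples):
    """First sample is the reference; each later sample is predicted by
    its predecessor and the prediction error is mapped."""
    return [map_error(x - prev, prev) for prev, x in zip(samples, samples[1:])]

vectors = [232, 116, 200, 192, 234, 229, 10, 184,
           167, 117, 2, 191, 236, 129, 218, 198, 131]
print(preprocess(vectors))
# [139, 168, 15, 84, 9, 245, 184, 33, 99, 229, 191, 90, 126, 178, 39, 124]
```

The printed values match the δi column of Table 1.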
The entropy coding module is a collection of variable-length codes operating in parallel on blocks of J preprocessed samples. The coding option achieving the highest compression is chosen for transmission, along with an ID code that identifies the option to the decoder. Thus, a new compression option can be chosen for each block.

The first option is the zero block option. This is chosen when one or more blocks of preprocessed samples are entirely zero. Here, the number of adjacent all-zero preprocessed blocks is encoded with a Fundamental Sequence (FS) codeword.
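An FS codeword for a value m is simply m zeros followed by a terminating 1, so the run length of all-zero blocks can be emitted directly. A minimal sketch (the helper name is ours):

```python
def fs_code(m):
    """Fundamental Sequence (comma) code: m zeros terminated by a 1."""
    return "0" * m + "1"

# e.g. encoding a run of 3 adjacent all-zero preprocessed blocks
print(fs_code(3))  # 0001
```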
The second option is called the second extension option. It is designed to produce compressed data in the range of 0.5 to 1.5 bits per sample. In this option, the encoding scheme first pairs consecutive preprocessed samples δi, δi+1 of the J-sample block, and then transforms each sample pair into a new value γi that is coded with an FS codeword, where:

γi = (δi + δi+1)(δi + δi+1 + 1)/2 + δi+1
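The pairing transform above can be sketched directly; each γ value would then be FS-coded. A short illustrative sketch:

```python
def second_extension(pair):
    """Map a pair of preprocessed samples (a, b) to a single value gamma
    using gamma = (a + b)(a + b + 1)/2 + b; gamma is then FS-coded."""
    a, b = pair
    return (a + b) * (a + b + 1) // 2 + b

# pairs of small samples map to small gammas, so the FS codewords stay short
print(second_extension((0, 0)), second_extension((1, 0)), second_extension((0, 1)))
# 0 1 2
```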
The third option is the Fundamental Sequence code, also called the comma code. Here, the codeword consists of a string of 0 digits whose length equals the decimal value of the symbol to be coded, and the digit 1 is appended to signal the codeword's termination. This simple protocol permits the FS codewords to be decoded without lookup tables.

The fourth option is the split-sample option. In the entropy coder, most of the options are split-sample options. The k-th split-sample option takes a block of J preprocessed data samples, splits off the k least significant bits (LSB) of each sample, and encodes the left-out higher-order bits with a simple FS code before prefixing the split bits to the encoded FS data stream. The FS code words, and two split-sample encodings of the Table 1 samples, are shown in the examples below.

Preprocessed Sample Value δi | FS Code word
0   | 1
1   | 01
2   | 001
3   | 0001
4   | 00001
... | ...
2^n | 00...01 (2^n zeros)

Table 2: FS code word example

δi  | Binary   | K=5: 5 LSB + FS code | K=6: 6 LSB + FS code
139 | 10001011 | 01011 + 00001        | 001011 + 001
168 | 10101000 | 01000 + 000001       | 101000 + 001
15  | 00001111 | 01111 + 1            | 001111 + 1
84  | 01010100 | 10100 + 001          | 010100 + 01
9   | 00001001 | 01001 + 1            | 001001 + 1

Table 3: Split sample option

The final option is the no compression option. When none of the options above provides any data compression on a block, this option is selected, and the preprocessed block of data is transmitted without any modification other than a prefixed identifier.

The entropy coder chooses the option that needs the fewest bits to encode the current block of symbols. An identifier bit sequence indicates the selected option. When the quantization is 8 bits or less, a 3-bit ID is output, while larger quantizations use a 4-bit ID.

The test data is split into blocks of fixed length, and variable-length adaptive coding with the above options is applied to it. The test vectors are divided into blocks containing J samples (8, 16, 32, or 64 samples per block), with a maximum of 32 bits per sample.
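The k-th split-sample option for a single 8-bit sample can be sketched as below, reproducing the entries of Table 3 (the output tuple order, LSB bits then FS code, follows the table's column labels; helper names are ours):

```python
def fs_code(m):
    """Fundamental Sequence code: m zeros terminated by a 1."""
    return "0" * m + "1"

def split_sample(value, k):
    """Split off the k LSBs of an 8-bit sample; FS-code the value of the
    remaining high-order bits. Returns (split bits, FS codeword)."""
    low = format(value & ((1 << k) - 1), f"0{k}b")  # the k split bits
    return low, fs_code(value >> k)                 # FS code of high bits

# delta = 139 = 10001011: with k=6 the high bits are 10 (=2) -> FS "001"
print(split_sample(139, 6))  # ('001011', '001')
print(split_sample(139, 5))  # ('01011', '00001')
```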
The output format for the coded data of the first block is: ID, reference value, J−1 FS data samples or default value, and K split data bits for J−1 samples. The remaining blocks are coded in the format: ID, and K split-sample data for J samples or default value. For example, the S298 MINTEST vectors are divided into 8-sample blocks, with each sample containing 8 bits.

Input Data | δi  | δi (binary) | Output
11101000   | -   | 11101000    | {(111), 11101000,
01110100   | 139 | 10001011    | 10001011, 10101000,
11001000   | 168 | 10101000    | 00001111, 01010100,
11000000   | 15  | 00001111    | 00001001, 11110101,
11101010   | 84  | 01010100    | 10111000}
11100101   | 9   | 00001001    |
00001010   | 245 | 11110101    |
10111000   | 184 | 10111000    |

Table 4: Output format of Rice Algorithm
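The first-block output of Table 4 can be sketched as follows. This assumes, per the table, that the block is emitted under the no compression option with ID (111): a 3-bit ID, the 8-bit reference sample, then the remaining preprocessed samples verbatim (the function name is ours):

```python
def encode_block_uncompressed(samples, id_bits="111"):
    """No compression option: 3-bit ID, 8-bit reference sample, then the
    remaining preprocessed samples without modification."""
    ref, rest = samples[0], samples[1:]
    return [id_bits, format(ref, "08b")] + [format(s, "08b") for s in rest]

block = [232, 139, 168, 15, 84, 9, 245, 184]  # reference + mapped deltas
print(encode_block_uncompressed(block))
# ['111', '11101000', '10001011', '10101000', '00001111',
#  '01010100', '00001001', '11110101', '10111000']
```

The printed fields match the Output column of Table 4.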
V. EXPERIMENTAL RESULTS

The test vectors of various ISCAS89 benchmark circuits are compressed, and the experimental results are presented in the following table. We have used the MINTEST [6] test data and achieved the highest compression percentage for the different benchmark circuits; the comparison is also given in the table below.

Compression Efficiency (%)
Circuits | Golomb [11] | Selective Huffman [12] | FDR [15] | RICE Algorithm
s9234    | 45          | 54                     | 61       | 74.1622
s13207   | 80          | 30                     | 88       | 92.01
s38417   | 28          | 45                     | 65       | 91.95

Table 5: Comparison of different compression schemes using MINTEST test data

VI. CONCLUSION

Rice algorithm coding is an effective way to compress test data. It offers a dual benefit: it reduces both the amount of test data that must be stored on the tester and the time taken to transfer the test data from the tester to the CUT. In this paper, we have discussed how we applied the algorithm to different benchmark circuits and compared our results with existing test compression techniques. By applying our technique, we have achieved a significantly higher compression ratio.

REFERENCES

[1] Lossless Data Compression. Report Concerning Space Data System Standards, CCSDS 121.0-B-2, Blue Book, Issue 2. Washington, D.C.: CCSDS, May 2012.

[6] F. F. Hsu, K. M. Butler, and J. H. Patel, "A case study on the implementation of Illinois scan architecture," in Proceedings of IEEE International Test Conference, 2001, pp. 538-547.

[7] I. Hamzaoglu and J. H. Patel, "Reducing test application time for full scan embedded cores," in Proceedings of International Symposium on Fault-Tolerant Computing, 1999, pp. 260-267.

[8] B. Koenemann et al., "A SmartBIST Variant with Guaranteed Encoding," in Proc. 10th Asian Test Symposium (ATS '01), IEEE CS Press, 2001, pp. 325-330.

[9] M. Ishida, D. S. Ha, and T. Yamaguchi, "COMPACT: A Hybrid Method for Compressing Test Data," in Proc. VLSI Test Symposium, 1998, pp. 62-69.

[10] M. Tehranipoor, M. Nourani, and K. Chakrabarty, "Nine-coded compression technique for testing embedded cores in SoCs," IEEE Trans. Very Large Scale Integr. Syst., vol. 13, no. 6, pp. 719-731, Jun. 2005.

[11] A. Chandra and K. Chakrabarty, "System-on-a-chip test data compression and decompression architectures based on Golomb codes," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 20, no. 3, pp. 355-368, Mar. 2001.
[12] X. Kavousianos, E. Kalligeros, and D. Nikolos, "Optimal selective Huffman coding for test-data compression," IEEE Trans. Computers, vol. 56, no. 8, pp. 1146-1152, Aug. 2007.