EC8093 Unit 5
IMAGE COMPRESSION
Dr.S.Deepa
Mrs.B.Sathyabhama
Mrs.V.Subashree
IMAGE COMPRESSION
• Lossless
• Information preserving
• Low compression ratios
• Lossy
• Information loss
• High compression ratios
Compression ratio: C = n1 / n2, where n1 and n2 are the number of information-carrying units (e.g. bits) in the original and compressed data sets.
RELATIVE DATA REDUNDANCY
Relative redundancy: R = 1 − 1/C, where C is the compression ratio.
Example: C = 10 gives R = 0.9, i.e. 90% of the data is redundant.
TYPES OF DATA REDUNDANCY
Coding redundancy (N x M image):
• rk: k-th gray level
• l(rk): number of bits used to represent rk
• P(rk): probability of rk
• Average code length: Lavg = Σk l(rk) P(rk)
Correlation: f(x) ∘ g(x) = ∫ from −∞ to ∞ of f(a) g(x + a) da
Auto-correlation: the special case f(x) = g(x)
INTERPIXEL REDUNDANCY -
EXAMPLE
• To reduce interpixel redundancy, some transformation
is typically applied on the data.
[Figure: original grayscale image → threshold → binary image; each row of the binary image becomes long runs such as 11……0000……11…000…, well suited to run-length coding.]
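The run-length idea above can be sketched in a few lines. This is an illustrative sketch, not part of any standard; the function name and the sample row are hypothetical:

```python
def rle_encode(bits):
    """Run-length encode a binary sequence as (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [tuple(r) for r in runs]

# A hypothetical thresholded row like the 11……0000……11…000… pattern above:
row = [1, 1] + [0] * 6 + [1, 1] + [0] * 3
print(rle_encode(row))  # → [(1, 2), (0, 6), (1, 2), (0, 3)]
```

Four (value, length) pairs replace 13 pixels, which is where the interpixel-redundancy savings come from.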
PSYCHOVISUAL REDUNDANCY
• The human eye is more sensitive to the lower frequencies than to
the higher frequencies in the visual spectrum.
• Idea: discard data that is perceptually insignificant!
Example: reducing from 8 bits/pixel to 4 bits/pixel gives C = 8/4 = 2:1
GENERAL IMAGE COMPRESSION
MODEL
• Backward Pass
Assign code symbols going backwards
HUFFMAN CODING (CONT’D)
• Lavg assuming binary coding: Lavg = Σk l(rk) P(rk) bits/symbol
Construct Huffman code for the word ‘committee’. Construct the code word for
committee
Solution
Total number of letters in the word = 9. Determine the probability of each letter in the word: for example, 'c' occurs once, so P(c) = 1/9; 'e' occurs twice, so P(e) = 2/9; and so on.
Symbol Code
C 001
E 01
I 0000
M 10
O 0001
T 11
CONSTRUCTING THE CODE
C    O    M   M   I    T   T   E   E
001  0001 10  10  0000 11  11  01  01
Code word: 001 0001 10 10 0000 11 11 01 01 (23 bits total)
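The construction above can be sketched with a heap-based Huffman builder. A minimal sketch (the function name is hypothetical); note Huffman trees are not unique, so individual codewords may differ from the table above, but the total encoded length is always optimal:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table {symbol: bit string} from symbol frequencies."""
    freq = Counter(text)
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-probable nodes
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("committee")
encoded = "".join(codes[s] for s in "committee")
print(len(encoded))  # → 23 (bits), matching the table: Lavg = 23/9 ≈ 2.56 bits/symbol
```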
HUFFMAN DECODING-PROBLEM
• Decode the message 00010110011011000 for the given
symbols and probability
Symbol Codeword
a 1
L 01
M 000
Y 001
00010110011011000 → 000 | 1 | 01 | 1 | 001 | 1 | 01 | 1 | 000 → M A L A Y A L A M
Example: 5 x 5 block of gray-level values:
10 10 11 12 13
10 12 12 14 15
10 12 12 15 15
10 10 10 11 11
10 11 11 11 12
ARITHMETIC (OR RANGE) CODING
(ADDRESSES CODING REDUNDANCY)
Encode the sequence α1 α2 α3 α3 α4, with P(α1) = P(α2) = P(α4) = 0.2 and P(α3) = 0.4.
Successively subdividing [0, 1) by each symbol's probability interval yields the final sub-interval [0.06752, 0.0688).
Code: 0.068 (any number within the final sub-interval)
Decode 0.572
α3 α3 α3 α3 α3
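The interval narrowing in the encode example can be sketched directly. This assumes the standard textbook model for α1…α4 (probabilities 0.2, 0.2, 0.4, 0.2); the function name is hypothetical:

```python
def arithmetic_interval(seq, model):
    """Narrow [0, 1) symbol by symbol; model maps symbol -> (low, high)."""
    low, high = 0.0, 1.0
    for s in seq:
        span = high - low
        s_low, s_high = model[s]
        # Map the symbol's sub-interval into the current interval.
        low, high = low + span * s_low, low + span * s_high
    return low, high

model = {"a1": (0.0, 0.2), "a2": (0.2, 0.4), "a3": (0.4, 0.8), "a4": (0.8, 1.0)}
low, high = arithmetic_interval(["a1", "a2", "a3", "a3", "a4"], model)
print(low, high)  # ≈ 0.06752, 0.0688 — 0.068 lies inside, so it can serve as the code
```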
Inverse transform uses the same normalization:
α(u) = √(1/N) if u = 0, √(2/N) if u > 0 (and likewise α(v) for v)
DCT (CONT’D)
Using
8 x 8 sub-images
yields 64 coefficients
per sub-image.
[Figure: images reconstructed after truncating 50% of the DCT coefficients, with the resulting RMS error.]
JPEG
[Block diagram: 8x8 DCT → quantizer → entropy encoder.]
Became an international image compression standard in 1992.
JPEG - STEPS
1. Divide image into 8x8 subimages.
2. Shift the gray levels to the range [−128, 127].
3. Apply the 2-D DCT to each subimage to obtain the coefficients C(u,v).
4. Quantize the coefficients: Cq(u,v) = round( C(u,v) / Q(u,v) ).
5. Order the coefficients in zigzag sequence.
6. Encode coefficients:
   • Sequential
   • Progressive
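Steps 4 and 5 can be sketched without any library. The quantization step follows Cq(u,v) = round(C(u,v)/Q(u,v)); the zigzag order is the standard JPEG scan. Function names and the 2x2 toy coefficient values are hypothetical:

```python
def quantize(C, Q):
    """Element-wise Cq(u,v) = round(C(u,v) / Q(u,v))."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(C, Q)]

def zigzag_indices(n=8):
    """Zigzag scan order of an n x n block (low frequencies first).
    Within each anti-diagonal u+v = s, even s runs up-right, odd s down-left."""
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

# Toy 2x2 "coefficients" and quantization table (illustrative values only):
C = [[235.0, -21.0], [-12.0, 8.0]]
Q = [[16, 11], [12, 14]]
print(quantize(C, Q))            # → [[15, -2], [-1, 1]]
print(zigzag_indices()[:6])      # → [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

Most quantized high-frequency coefficients become zero, and the zigzag order groups those zeros into long runs for the entropy encoder.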
JPEG 2000
• Based on DWT – Discrete Wavelet Transform
• No need to subdivide images into 8 x 8 blocks
• Hence no BLOCKING ARTIFACTS
• Produces better compression than JPEG at the same quality
• Computationally more demanding than DCT-based JPEG
WAVELET DECOMPOSITION
MPEG
• Used for video compression
• Moving (Motion) Picture Experts Group
• Based on DCT
MPEG STANDARDS
• MPEG-1: For the storage and retrieval of moving pictures and audio on
storage media.
• MPEG-2: For digital television; a timely response to the satellite broadcasting and cable television industries in their transition from analog to digital formats.
• Skeletons: produce a one pixel wide graph that has the same
basic shape of the region, like a stick figure of a human. It can
be used to analyze the geometric structure of a region which
has bumps and “arms”.
REPRESENTATION
SKELETONS
• A thinning algorithm:
– (1) applying step 1 to flag border points for
deletion
– (2) deleting the flagged points
– (3) applying step 2 to flag the remaining border
points for deletion
– (4) deleting the flagged points
– This procedure is applied iteratively until no
further points are deleted.
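The two-subiteration procedure above is essentially the Zhang–Suen thinning algorithm. A minimal sketch under that assumption, representing the binary image as a set of foreground (row, col) pixels; the function name is hypothetical:

```python
def thin(img):
    """Two-subiteration (Zhang-Suen style) thinning of a binary image
    given as a set of (row, col) foreground pixels."""
    def neighbours(r, c):
        # P2..P9, clockwise starting from the north neighbour.
        return [(r-1, c), (r-1, c+1), (r, c+1), (r+1, c+1),
                (r+1, c), (r+1, c-1), (r, c-1), (r-1, c-1)]
    img = set(img)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            flagged = set()
            for p in img:
                n = [1 if q in img else 0 for q in neighbours(*p)]
                b = sum(n)                        # foreground neighbours
                a = sum(1 for i in range(8)       # 0 -> 1 transitions P2..P9..P2
                        if n[i] == 0 and n[(i + 1) % 8] == 1)
                if step == 0:
                    cond = n[0]*n[2]*n[4] == 0 and n[2]*n[4]*n[6] == 0
                else:
                    cond = n[0]*n[2]*n[6] == 0 and n[0]*n[4]*n[6] == 0
                if 2 <= b <= 6 and a == 1 and cond:
                    flagged.add(p)                # flag border point for deletion
            if flagged:
                img -= flagged                    # delete all flagged points
                changed = True
    return img

bar = {(r, c) for r in range(5) for c in range(20)}  # a thick 5 x 20 bar
skeleton = thin(bar)  # reduces toward a one-pixel-wide stroke
```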
REPRESENTATION
SKELETONS-EXAMPLE
• One application of
skeletonization is for
character recognition.
• A letter or character is
determined by the
center-line of its strokes,
and is unrelated to the
width of the stroke lines.
BOUNDARY DESCRIPTORS
First difference
The boundary is represented as a complex sequence s(k) = x(k) + j y(k), k = 0, 1, …, K−1.

Discrete Fourier transform of a sequence f(x):
F(u) = Σ from x = 0 to M−1 of f(x) e^(−j2πux/M),   u = 0, 1, 2, …, M−1

Fourier descriptors of the boundary:
a(u) = Σ from k = 0 to K−1 of s(k) e^(−j2πuk/K),   u = 0, 1, 2, …, K−1

Inverse Fourier transform:
f(x) = (1/M) Σ from u = 0 to M−1 of F(u) e^(j2πux/M),   x = 0, 1, 2, …, M−1

s(k) = (1/K) Σ from u = 0 to K−1 of a(u) e^(j2πuk/K)
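The descriptor and reconstruction sums can be sketched directly with `cmath`. Function names and the unit-square boundary are illustrative:

```python
import cmath

def fourier_descriptors(boundary):
    """a(u) = sum_k s(k) e^(-j 2 pi u k / K), with s(k) = x(k) + j*y(k)."""
    s = [complex(x, y) for x, y in boundary]
    K = len(s)
    return [sum(s[k] * cmath.exp(-2j * cmath.pi * u * k / K)
                for k in range(K)) for u in range(K)]

def reconstruct(a):
    """s(k) = (1/K) sum_u a(u) e^(+j 2 pi u k / K)."""
    K = len(a)
    return [sum(a[u] * cmath.exp(2j * cmath.pi * u * k / K)
                for u in range(K)) / K for k in range(K)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]      # 4-point boundary of a unit square
a = fourier_descriptors(square)
s_back = reconstruct(a)                         # recovers the boundary exactly
```

Keeping only the first few a(u) before reconstructing gives a smoothed version of the boundary, which is the usual use of these descriptors.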
BOUNDARY DESCRIPTORS
STATISTICAL MOMENTS
Topological property 1:
the number of holes (H)
Topological property 2:
the number of connected
components (C)
REGIONAL DESCRIPTORS
TOPOLOGICAL DESCRIPTORS
Topological property 3:
Euler number E: the number of connected components minus the number of holes, E = C − H.
For the letter A: E = C − H = 1 − 1 = 0
For the letter B: E = C − H = 1 − 2 = −1
REGIONAL DESCRIPTORS
TOPOLOGICAL DESCRIPTORS
Topological property 4: the largest connected component.
REGIONAL DESCRIPTORS
TEXTURE
REGIONAL DESCRIPTORS
TEXTURE
– Relative smoothness: R = 1 − 1 / (1 + σ²(z))
– Uniformity: U = Σ from i = 0 to L−1 of p²(zi)
– Average entropy: e = − Σ from i = 0 to L−1 of p(zi) log₂ p(zi)
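The three histogram-based measures can be sketched for a normalized gray-level histogram p(zi). The function name is hypothetical; the uniform 8-level histogram is an illustrative input:

```python
import math

def texture_stats(p):
    """R (relative smoothness), U (uniformity), e (average entropy)
    for a normalized histogram p over gray levels 0..L-1."""
    L = len(p)
    mean = sum(z * pz for z, pz in enumerate(p))
    var = sum((z - mean) ** 2 * pz for z, pz in enumerate(p))   # sigma^2(z)
    R = 1 - 1 / (1 + var)
    U = sum(pz ** 2 for pz in p)
    e = -sum(pz * math.log2(pz) for pz in p if pz > 0)
    return R, U, e

R, U, e = texture_stats([1 / 8] * 8)   # uniform 8-level histogram
print(U, e)  # → 0.125 3.0 (maximum entropy, minimum uniformity for L = 8)
```

A constant image gives R = 0 (smooth); large variance pushes R toward 1 (rough).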
REGIONAL DESCRIPTORS
STATISTICAL APPROACHES
• Structural concepts:
– Suppose that we have a
rule of the form S→aS,
which indicates that the
symbol S may be
rewritten as aS.
– If a represents a circle [Fig. 11.23(a)] and "circle to the right" is the meaning assigned to concatenation, then a string of the form aaa… represents a row of adjacent circles [Fig. 11.23(b)].
REGIONAL DESCRIPTORS
SPECTRAL APPROACHES
S(r) = Σ from θ = 0 to π of S_θ(r)        S(θ) = Σ from r = 1 to R₀ of S_r(θ)
REGIONAL DESCRIPTORS
MOMENTS OF TWO DIMENSIONAL FUNCTIONS
x̄ = m10 / m00,  ȳ = m01 / m00
μ11 = m11 − x̄ m01 = m11 − ȳ m10
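The moment definitions can be checked numerically: mpq = Σ x^p y^q over the foreground pixels of a binary region, and the two expressions for μ11 agree. The point set is a small hypothetical region:

```python
def raw_moment(pixels, p, q):
    """m_pq = sum over foreground pixels of x^p * y^q (binary region)."""
    return sum((x ** p) * (y ** q) for x, y in pixels)

pixels = [(0, 0), (1, 0), (2, 1), (3, 3)]      # hypothetical 4-pixel region
m00 = raw_moment(pixels, 0, 0)
xbar = raw_moment(pixels, 1, 0) / m00          # centroid x
ybar = raw_moment(pixels, 0, 1) / m00          # centroid y
# Central moment mu11 computed both equivalent ways:
mu11_a = raw_moment(pixels, 1, 1) - xbar * raw_moment(pixels, 0, 1)
mu11_b = raw_moment(pixels, 1, 1) - ybar * raw_moment(pixels, 1, 0)
print(mu11_a, mu11_b)  # → 5.0 5.0
```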
REGIONAL DESCRIPTORS
MOMENTS OF TWO DIMENSIONAL FUNCTIONS
φ6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4 η11 (η30 + η12)(η21 + η03)
Pattern and Pattern classes
Two Methods
Minimum Distance Classifier
Matching based on Correlation
Minimum Distance Classifier
• Suppose that we define the prototype of each pattern class to
be the mean vector of the patterns of that class:
1
mj =
N j xw j
xj j=1,2,…,W (1)
D j ( x) = x − m j j=1,2,…,W (2)
Minimum Distance Classifier
d_ij(x) = x^T (m_i − m_j) − (1/2)(m_i − m_j)^T (m_i + m_j) = 0
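Equations (1) and (2) translate directly into code: average each class's training patterns, then assign a new pattern to the class with the nearest mean. A minimal sketch; the class names and training points are hypothetical:

```python
def mean_vector(samples):
    """m_j = (1/N_j) * sum of the pattern vectors in class omega_j (eq. 1)."""
    n = len(samples)
    return tuple(sum(x[i] for x in samples) / n for i in range(len(samples[0])))

def classify(x, means):
    """Assign x to the class whose mean is nearest in Euclidean distance (eq. 2)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda j: dist2(x, means[j]))

# Hypothetical two-class training data:
means = {
    "C1": mean_vector([(0, 0), (1, 0), (0, 1)]),
    "C2": mean_vector([(4, 4), (5, 4), (4, 5)]),
}
print(classify((0.6, 0.4), means))  # → C1
```

The decision boundary between the two classes is the perpendicular bisector of the segment joining m_i and m_j, which is exactly the hyperplane d_ij(x) = 0 above.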
Minimum Distance Classifier
• Decision boundary of minimum distance classifier
[Figure 3.a: two-dimensional decision boundary separating Class C1 and Class C2.]
Minimum Distance Classifier
• Advantages:
1. Intuitive and easy to visualize.
2. Can handle rotation.
3. Robust to intensity variations.
4. With suitably chosen features, the mirror problem can be handled.
5. Color can be chosen as a feature, making color-based discrimination possible.
Minimum Distance Classifier
• Disadvantages:
1. Collecting and averaging samples takes time, yet high accuracy requires many samples (more samples, more accuracy).
2. Sensitive to displacement.
3. With only two features, accuracy is lower than that of other methods.
4. Sensitive to scaling.
Matching by Correlation
c(x, y) = Σ_s Σ_t f(s, t) w(x + s, y + t)
for x = 0, 1, 2, …, M−1 and y = 0, 1, 2, …, N−1
Matching by Correlation
• [Figure: arrangement for obtaining the correlation of f and w at point (x₀, y₀); the J x K template w(x₀ + s, y₀ + t) slides over the M x N image f(x, y).]
Matching by Correlation
• The correlation function has the disadvantage of being sensitive to changes in the amplitude of f and w.
• For example, doubling all values of f doubles the value of c(x, y).
• An approach frequently used to overcome this difficulty is to perform matching via the correlation coefficient:
γ(x, y) = Σ_s Σ_t [f(s, t) − f̄(s, t)] [w(x + s, y + t) − w̄] / { Σ_s Σ_t [f(s, t) − f̄(s, t)]² · Σ_s Σ_t [w(x + s, y + t) − w̄]² }^(1/2)
• The correlation coefficient is scaled in the range −1 to 1, independent of scale changes in the amplitude of f and w.
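The normalization can be sketched for a single template-sized patch: subtract each signal's mean, then divide by the product of their standard deviations. The function name and the 2x2 toy values are hypothetical:

```python
import math

def corr_coeff(f_patch, w):
    """Normalized correlation coefficient between a template w and an
    equally sized image patch; the result lies in [-1, 1]."""
    fp = [v for row in f_patch for v in row]   # flatten the patch
    wp = [v for row in w for v in row]         # flatten the template
    fm = sum(fp) / len(fp)
    wm = sum(wp) / len(wp)
    num = sum((a - fm) * (b - wm) for a, b in zip(fp, wp))
    den = math.sqrt(sum((a - fm) ** 2 for a in fp) *
                    sum((b - wm) ** 2 for b in wp))
    return num / den

w = [[1, 2], [3, 4]]
print(corr_coeff(w, w))                  # → 1.0, a perfect match
print(corr_coeff(w, [[2, 4], [6, 8]]))   # → 1.0, insensitive to amplitude scaling
```

Doubling the patch values leaves γ unchanged, which is exactly the amplitude insensitivity the slide describes; plain correlation c(x, y) would double instead.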
62
Matching by Correlation
• Advantages:
1. Fast.
2. Convenient.
3. Handles displacement.
• Disadvantages:
1. Sensitive to scaling.
2. Sensitive to rotation.
3. Confused by similar shapes.
4. Sensitive to intensity changes.
5. Mirror problem.
6. Cannot use color for recognition.
THANK YOU