
Image Manipulation

Techniques and Visual Effects

1. Histogram and GMM


I-Chen Lin
College of Computer Science,
National Yang Ming Chiao Tung University
Outline
 What’s the intensity/color histogram?

 What’s the Gaussian Mixture Model (GMM)?

 Their applications and limitations.

Ref. (many of the slides are from):


• Kenny A. Hunt, The Art of Image Processing.
• R. C. Gonzalez and R. E. Woods, Digital Image Processing.
• Tae-Kyun Kim, Machine Learning for Computer Vision, lecture notes.
• Andrew Rosenberg, Machine Learning, lecture notes. 2
Digital Image Sampling and Quantization

Figure: a continuous-tone scene; the sampled scene (space partitioned); the sampled and quantized scene (light levels also partitioned).
3
Digital Image Sampling and Quantization

Figure: a continuous function (upper left) is sampled (above) and then quantized (upper right) to form a digital image (left).
4
Image Types Example

E.g. RGB color (8 bits x 3 channels), grayscale (8 bits x 1 channel), and binary (1 bit per pixel).

5
Intensity Histogram
 For instance, a 4 x 4 image (3 bits per pixel).

Pixel values:
3 1 0 6
2 1 0 6
6 1 6 4
5 6 5 1

Histogram counts over intensity bins 0 to 7: 2, 4, 1, 1, 1, 2, 5, 0.
6
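As a minimal sketch (assuming NumPy is available), the histogram of this 4 x 4 example can be computed by counting how many pixels fall into each of the 8 intensity bins:

```python
import numpy as np

# The 4 x 4, 3-bit example image (intensity values 0..7).
img = np.array([[3, 1, 0, 6],
                [2, 1, 0, 6],
                [6, 1, 6, 4],
                [5, 6, 5, 1]])

# One bin per intensity level: counts[v] = number of pixels with value v.
counts = np.bincount(img.ravel(), minlength=8)
print(counts)  # -> [2 4 1 1 1 2 5 0]
```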
Intensity Histogram Examples

Fig. from [Gonzalez and Woods] 7


Intensity Histogram Examples

Fig. from [Gonzalez and Woods] 8


Histogram Equalization
 Improving the local contrast of an image without
altering the global contrast to a significant degree.
 Creating an output image with a (nearly) uniform
histogram.

Figure: the input intensity histogram and the (ideally) uniform output histogram.

9
Histogram Equalization (cont.)
 Estimating the equalization mapping through the Cumulative Distribution Function (CDF).

Histogram CDF
Fig. from Roger S. Gaborski, Intro. to Computer Vision
10
Histogram Equalization (cont.)
 The goal now becomes:

Figure: the histogram and CDF of the input image versus the (ideally) uniform histogram and linear CDF of the equalized output.

11
Numeric Example of Equalization

12
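The numeric example itself is not reproduced in this text, but the CDF-based mapping can be sketched as follows (a minimal NumPy version, assuming an 8-bit grayscale input; some variants also subtract the minimum of the CDF before rescaling):

```python
import numpy as np

def equalize(gray, levels=256):
    """Histogram equalization: map each intensity through the
    normalized CDF so the output histogram is roughly uniform."""
    hist = np.bincount(gray.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / gray.size                    # normalized CDF in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # intensity lookup table
    return lut[gray]
```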
Histogram Equalization Examples

13
Simple Segmentation by Histogram

Figure: intensity histograms used for threshold-based segmentation.
15
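A minimal sketch of such histogram-based segmentation (assuming a grayscale NumPy array and a threshold chosen at the valley between the two histogram peaks):

```python
import numpy as np

def segment_by_threshold(gray, t):
    """Binary segmentation: pixels with intensity >= t are labeled
    foreground (1), the rest background (0)."""
    return (gray >= t).astype(np.uint8)
```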
Color Histogram

Figure: color histograms at 3x15x3, 3x4x3, and 8x3x3 bin resolutions.

Consider the resolution of various color histogram binnings in RGB space.


The resolution of each axis may be set independently of the others.

Slides from Kenny A. Hunt, The Art of Image Processing.


16
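A minimal sketch of such a color histogram (assuming an (H, W, 3) uint8 RGB array; the bin counts per axis are free parameters, e.g. (8, 3, 3)):

```python
import numpy as np

def color_histogram(rgb, bins=(8, 3, 3)):
    """3D RGB histogram; the number of bins may differ per axis."""
    pixels = rgb.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    return hist  # shape == bins, counts of pixels per color cell
```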
Color Histogram (cont.)
 The segmentation now has to rely on a bounding cuboid or on thresholding planes in RGB space.

17
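A minimal sketch of thresholding by an axis-aligned bounding cuboid in RGB space (the low/high bounds below are purely illustrative):

```python
import numpy as np

def in_cuboid(rgb, low, high):
    """True where low[c] <= pixel[c] <= high[c] on every channel,
    i.e. the pixel color lies inside the bounding cuboid."""
    low, high = np.asarray(low), np.asarray(high)
    return np.all((rgb >= low) & (rgb <= high), axis=-1)

# e.g. mask = in_cuboid(img, low=(150, 0, 0), high=(255, 100, 100))
```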
Color Histogram (cont.)

How about more complex situations?

18
Vector Clustering

 Data vectors (green) are grouped into homogeneous clusters (blue and red).
 The cluster centers are marked with an x.

19
Parts of the slides are from Tae-Kyun Kim, Machine Learning for Computer Vision, lecture notes.
Color Clustering (Image Quantization)
 Image pixels are represented by 3D vectors of R, G, B values.
 The vectors are grouped into K = 10, 3, 2 clusters and represented by the mean values of their respective clusters.
Figure: pixel colors plotted along the R, G, B axes.
20
Parts of the slides are from Tae-Kyun Kim, Machine Learning for Computer Vision, lecture notes.
K-means Clustering

Fig. from Christopher M. Bishop, Mixture Models and the EM Algorithm, lecture notes. 21
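A minimal color-quantization sketch along these lines, using scikit-learn's KMeans (the parameter values are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_colors(rgb, k=10):
    """Cluster pixel RGB vectors into k groups and replace each pixel
    by the mean color of its cluster."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    quantized = km.cluster_centers_[km.labels_]      # one mean color per pixel
    return quantized.reshape(rgb.shape).astype(rgb.dtype)
```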
Gaussian Mixture Models

 Rather than identifying clusters by “nearest” centroids:

 Fit a set of K Gaussians to the data.
 Maximum likelihood over a mixture model.

Slides are from Andrew Rosenberg, Machine Learning, lecture notes. 22


GMM example

$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$

$\mu$: mean, $\sigma$: standard deviation. 23


Multivariate Gaussian distribution

For random variables x, y (scalar case):

$\mathrm{Var}(X) = \frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})^2, \qquad \mathrm{Cov}(X,Y) = \frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})(y_i-\bar{y})$

For a d-dimensional vector $x$:

$N(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right)$

where $|\Sigma|$ is the determinant of the covariance matrix $\Sigma$.

Fig. from Christopher M. Bishop, Mixture Models and the EM Algorithm, lecture notes. 24
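A minimal sketch of evaluating this density with NumPy (scipy.stats.multivariate_normal provides the same functionality):

```python
import numpy as np

def gaussian_pdf(x, mu, cov):
    """Multivariate normal density N(x | mu, cov) for a d-dimensional x."""
    d = len(mu)
    diff = np.asarray(x) - np.asarray(mu)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm
```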
Mixture Models

 Formally, a mixture model is the weighted sum of a number of pdfs, where the weights are determined by a distribution $\pi$:

$p(x) = \sum_{k=1}^{K} \pi_k\, p_k(x), \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1$
25
Gaussian Mixture Models

 GMM: the weighted sum of a number of Gaussians, where the weights are determined by a distribution $\pi$:

$p(x) = \sum_{k=1}^{K} \pi_k\, N(x \mid \mu_k, \Sigma_k)$
26
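A minimal sketch of this mixture density (using SciPy's multivariate normal; weights, means and covs are assumed to describe the K components):

```python
from scipy.stats import multivariate_normal

def gmm_pdf(x, weights, means, covs):
    """p(x) = sum_k pi_k * N(x | mu_k, Sigma_k); the weights sum to 1."""
    return sum(w * multivariate_normal(mean=mu, cov=cov).pdf(x)
               for w, mu, cov in zip(weights, means, covs))
```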
Expectation Maximization
 The training of GMMs can be accomplished using
Expectation Maximization
 Step 1: Expectation (E-step)
 Evaluate the “responsibilities” of each cluster with the current
parameters

 Step 2: Maximization (M-step)


 Re-estimate parameters using the existing “responsibilities”

 Similar to k-means training.

27
EM for GMMs (algorithm)
 Initialize the parameters
 Evaluate the log likelihood

 Expectation-step: Evaluate the responsibilities

 Maximization-step: Re-estimate Parameters


 Evaluate the log likelihood
 Check for convergence

28
EM for GMMs (algorithm)
 E-step: Evaluate the Responsibilities

n: index for samples


k: index for basis functions (Gaussian)

$\tau(z_{nk}) = \dfrac{\pi_k\, N(x_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j\, N(x_n \mid \mu_j, \Sigma_j)}$, where

$N(x \mid \mu, \Sigma) = \dfrac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right)$

29
EM for GMMs (algorithm)
 M-Step: Re-estimate parameters using the current responsibilities:

$\mu_k^{new} = \dfrac{\sum_{n=1}^{N}\tau(z_{nk})\, x_n}{\sum_{n=1}^{N}\tau(z_{nk})}, \qquad \Sigma_k^{new} = \dfrac{\sum_{n=1}^{N}\tau(z_{nk})\,(x_n-\mu_k^{new})(x_n-\mu_k^{new})^T}{\sum_{n=1}^{N}\tau(z_{nk})}, \qquad \pi_k^{new} = \dfrac{\sum_{n=1}^{N}\tau(z_{nk})}{N}$

K: number of Gaussians; N: number of samples


30
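A minimal sketch of one EM iteration following these two steps (NumPy/SciPy; in practice this is repeated until the log-likelihood stops improving):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, weights, means, covs):
    """One EM iteration for a GMM. X: (N, d) data; K = len(weights) components."""
    N, K = X.shape[0], len(weights)

    # E-step: responsibilities tau[n, k] ~ pi_k * N(x_n | mu_k, Sigma_k), normalized over k.
    tau = np.column_stack([weights[k] * multivariate_normal(means[k], covs[k]).pdf(X)
                           for k in range(K)])
    tau /= tau.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from the responsibilities.
    Nk = tau.sum(axis=0)                              # effective points per component
    means = (tau.T @ X) / Nk[:, None]
    covs = [(tau[:, k, None] * (X - means[k])).T @ (X - means[k]) / Nk[k]
            for k in range(K)]
    weights = Nk / N
    return weights, means, covs
```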
Visual example of EM

Slides are from Andrew Rosenberg, Machine Learning, lecture notes. 31


Maximum Likelihood over a GMM
 As usual: Identify a likelihood function

$\ln \prod_{n=1}^{N} p(x_n \mid \pi, \mu, \Sigma) = \sum_{n=1}^{N} \ln\!\left\{ \sum_{k=1}^{K} \pi_k\, N(x_n \mid \mu_k, \Sigma_k) \right\}$

 And set partials to zero…

32
Maximum Likelihood of a GMM

(Recall: $\frac{d\,e^x}{dx} = e^x$ and $\frac{d \ln x}{dx} = \frac{1}{x}$.)

 Optimization of the means.

$N(x_n \mid \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_k|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x_n - \mu_k)^T \Sigma_k^{-1} (x_n - \mu_k)\right)$

Setting the partial derivative of the log-likelihood with respect to $\mu_k$ to zero gives

$\Sigma_k^{-1} \sum_{n=1}^{N} \tau(z_{nk})\, x_n = \Sigma_k^{-1} \sum_{n=1}^{N} \tau(z_{nk})\, \mu_k \;\Rightarrow\; \mu_k = \frac{\sum_{n=1}^{N} \tau(z_{nk})\, x_n}{\sum_{n=1}^{N} \tau(z_{nk})}$

33
Maximum Likelihood of a GMM

 Optimization of the covariance.

$N(x_n \mid \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_k|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x_n - \mu_k)^T \Sigma_k^{-1} (x_n - \mu_k)\right)$

Setting the partial derivative with respect to $\Sigma_k$ to zero gives

$\Sigma_k = \frac{\sum_{n=1}^{N} \tau(z_{nk})\,(x_n - \mu_k)(x_n - \mu_k)^T}{\sum_{n=1}^{N} \tau(z_{nk})}$

34
Maximum Likelihood of a GMM

 Optimization of the mixing term, with the constraint $\sum_{k=1}^{K}\pi_k = 1$ enforced by a Lagrange multiplier $\lambda$.

$N(x_n \mid \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_k|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x_n - \mu_k)^T \Sigma_k^{-1} (x_n - \mu_k)\right)$

$F = \ln \prod_{n=1}^{N} p(x_n \mid \pi, \mu, \Sigma) + \lambda\!\left(\sum_{k=1}^{K}\pi_k - 1\right)$

$\frac{dF}{d\pi_k} = \sum_{n=1}^{N} \frac{N(x_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K}\pi_j N(x_n \mid \mu_j, \Sigma_j)} + \lambda = 0 \;\Rightarrow\; \sum_{n=1}^{N} \frac{\pi_k N(x_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K}\pi_j N(x_n \mid \mu_j, \Sigma_j)} + \lambda\pi_k = 0$

$\Rightarrow\; \sum_{n=1}^{N}\tau(z_{nk}) + \lambda\pi_k = 0 \;\Rightarrow\; \sum_{k=1}^{K}\sum_{n=1}^{N}\tau(z_{nk}) + \lambda\sum_{k=1}^{K}\pi_k = 0$

Since $\sum_{k=1}^{K}\sum_{n=1}^{N}\tau(z_{nk}) = N$ and $\sum_{k=1}^{K}\pi_k = 1$, we get $\lambda = -N$ and therefore $\pi_k = \frac{\sum_{n=1}^{N}\tau(z_{nk})}{N}$.

35
MLE of a GMM

$\mu_k = \frac{\sum_{n=1}^{N}\tau(z_{nk})\, x_n}{\sum_{n=1}^{N}\tau(z_{nk})}, \qquad \Sigma_k = \frac{\sum_{n=1}^{N}\tau(z_{nk})\,(x_n-\mu_k)(x_n-\mu_k)^T}{\sum_{n=1}^{N}\tau(z_{nk})}, \qquad \pi_k = \frac{\sum_{n=1}^{N}\tau(z_{nk})}{N}$

36
How to Apply the GMMs?
 Collect training data of each category (label).

 Choose an appropriate feature set.

 Estimate (Train) the GMM parameters of each


category by EM.

 Evaluate the probability for each category.

37
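A hypothetical sketch of this recipe with scikit-learn's GaussianMixture (the function names, the feature dictionary and n_components are illustrative assumptions, not part of the slides):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(features_by_label, n_components=5):
    """Fit one GMM per category; features_by_label maps label -> (N, d) array."""
    return {label: GaussianMixture(n_components=n_components).fit(feats)
            for label, feats in features_by_label.items()}

def classify(gmms, x):
    """Assign x to the category whose GMM gives the highest log-likelihood."""
    scores = {label: gm.score_samples(np.atleast_2d(x))[0]
              for label, gm in gmms.items()}
    return max(scores, key=scores.get)
```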
Practical Issues for Images
 Computational efficiency.

 Different types of covariance matrices.

 How about these images?!

Fig. from W. Matusik et al., "Image-Based Visual Hulls"; Fig. from the GrabCut database. 38
Types of covariance matrices
 Spherical: Each component has
its own single variance.

 Diag: Each component has its


own diagonal covariance matrix.

 Tied: All components share the


same general covariance matrix.

 Full: Each component has its


own general covariance matrix

Fig. from: scikit-learn.org/stable/auto_examples/mixture/plot_gmm_covariances.html

39
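These four structures correspond directly to scikit-learn's covariance_type options, e.g.:

```python
from sklearn.mixture import GaussianMixture

# X: (N, d) feature matrix, assumed to be available.
for cov_type in ("spherical", "diag", "tied", "full"):
    gm = GaussianMixture(n_components=3, covariance_type=cov_type)
    # gm.fit(X); gm.bic(X) can then be used to compare the fitted models.
```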
Appendix: Matrix and Vector
Derivatives

40
Appendix: Matrix and Vector
Derivatives

Slides from Tae-Kyun Kim, Machine Learning for Computer Vision, lecture notes. 41
Appendix: Matrix and Vector
Derivatives

Ref:
• J. D. M. Rennie, A Simple Exercise on Matrix Derivatives.
• K. B. Petersen, The Matrix Cookbook. 42
