IMVFX 1 HistGMM F23 S
Digital Image Sampling and Quantization
Image Types Example
Intensity Histogram
For instance, a 4 x 4 image (3 bits per pixel).
3 1 0 6
2 1 0 6
6 1 6 4
5 6 5 1
[Histogram: intensity bins 0–7 on the x-axis; pixel counts 2, 4, 1, 1, 1, 2, 5, 0]
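A minimal sketch (in NumPy, which the slides themselves do not assume) of how this histogram is obtained by counting how many pixels fall into each intensity bin:

```python
import numpy as np

# The 4 x 4, 3-bit image from the slide (intensities 0..7).
img = np.array([[3, 1, 0, 6],
                [2, 1, 0, 6],
                [6, 1, 6, 4],
                [5, 6, 5, 1]])

# One bin per intensity level; bincount tallies how often each value occurs.
hist = np.bincount(img.ravel(), minlength=8)
print(hist)   # -> [2 4 1 1 1 2 5 0]
```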
Intensity Histogram Examples
[Figure: example intensity histograms, intensity on the x-axis]
Histogram Equalization (cont.)
The equalization mapping is estimated through the Cumulative Distribution Function (CDF).
[Figure: a histogram and its CDF]
Fig. from Roger S. Gaborski, Intro. to Computer Vision
Histogram Equalization (cont.)
The goal now becomes making the equalized histogram (ideally) flat, so that its CDF is a straight line.
[Figure: ideal equalized histogram and its CDF]
Numeric Example of Equalization
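A minimal sketch of CDF-based equalization applied to the 4 × 4, 3-bit image from above (NumPy assumed; the rounding convention in the mapping is one common choice and may differ from the slide's numbers):

```python
import numpy as np

# Same 4 x 4, 3-bit image as before (L = 8 intensity levels).
img = np.array([[3, 1, 0, 6],
                [2, 1, 0, 6],
                [6, 1, 6, 4],
                [5, 6, 5, 1]])
L = 8

hist = np.bincount(img.ravel(), minlength=L)   # per-bin pixel counts
cdf = np.cumsum(hist) / img.size               # cumulative distribution, in [0, 1]
mapping = np.round((L - 1) * cdf).astype(int)  # s_k = round((L-1) * CDF(k))
equalized = mapping[img]                       # apply the lookup table to every pixel
print(mapping)
print(equalized)
```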
Histogram Equalization Examples
Simple Segmentation by Histogram
[Figure: intensity histograms]
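A minimal sketch of the idea, assuming a grayscale image and a threshold read off manually from a valley between the modes of its histogram (the image and threshold value here are illustrative):

```python
import numpy as np

def segment_by_threshold(gray, threshold):
    # Pixels at or above the threshold become foreground (1), the rest background (0).
    return (gray >= threshold).astype(np.uint8)

# Toy bimodal image: dark background around 20-40, bright object around 190-220.
gray = np.array([[ 20,  30, 200, 210],
                 [ 25,  35, 220, 205],
                 [ 30, 190, 210,  40],
                 [200, 215,  30,  25]])
print(segment_by_threshold(gray, threshold=100))
```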
Color Histogram
Color Histogram (cont.)
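A minimal sketch of one common way to build a color histogram, assuming 8-bit RGB input and a small, fixed number of bins per channel (both are illustrative choices, not taken from the slides):

```python
import numpy as np

def color_histogram(rgb, bins_per_channel=4):
    # Joint R,G,B histogram with coarse bins (4 x 4 x 4 = 64 bins for 8-bit images).
    idx = (rgb.astype(np.int64) * bins_per_channel) // 256   # per-channel bin index 0..bins-1
    flat = (idx[..., 0] * bins_per_channel + idx[..., 1]) * bins_per_channel + idx[..., 2]
    return np.bincount(flat.ravel(), minlength=bins_per_channel ** 3)

# Example with a random stand-in image; a real image array has shape (H, W, 3).
rgb = np.random.randint(0, 256, size=(4, 4, 3))
print(color_histogram(rgb))
```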
Vector Clustering
Parts of the slides are from Tae-Kyun Kim, Machine Learning for Computer Vision, lecture notes.
Color Clustering (Image Quantization)
Image pixels are represented by 3-D vectors of R, G, B values. The vectors are grouped into K = 10, 3, or 2 clusters, and each pixel is then represented by the mean value of its cluster (see the sketch below).
[Figure: image pixels plotted as points in R, G, B color space]
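A minimal sketch of this quantization, assuming plain NumPy and a basic K-means loop with a fixed number of iterations (the helper names and the random test image are illustrative); the assign/update loop is the K-means algorithm discussed on the following slide:

```python
import numpy as np

def kmeans_quantize(pixels, K, iters=20, seed=0):
    # Cluster R,G,B pixel vectors with K-means and replace each pixel by its cluster mean.
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), K, replace=False)].astype(float)
    for _ in range(iters):
        # Assignment step: nearest center for every pixel vector.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned pixels.
        for k in range(K):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return centers[labels], labels

# Example: quantize an image's colors to K = 3 representative colors.
img = np.random.randint(0, 256, size=(32, 32, 3))      # stand-in for a real image
pixels = img.reshape(-1, 3).astype(float)
quantized, labels = kmeans_quantize(pixels, K=3)
quantized_img = quantized.reshape(img.shape).astype(np.uint8)
```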
Parts of the slides are from Tae-Kyun Kim, Machine Learning for Computer Vision, lecture notes.
K-means Clustering
Fig. from Christopher M. Bishop, Mixture Models and the EM Algorithm, lecture notes.
Gaussian Mixture Models
\mathcal{N}(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}\,(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right)
where $x$ and $\mu$ are $d$-dimensional vectors, $\Sigma$ is the $d \times d$ covariance matrix, and $|\Sigma|$ is its determinant.
Fig. from Christopher M. Bishop, Mixture Models and the EM Algorithm, lecture notes.
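A minimal NumPy sketch that evaluates this density, together with the mixture $p(x) = \sum_k \pi_k \mathcal{N}(x \mid \mu_k, \Sigma_k)$ (the toy parameters are illustrative):

```python
import numpy as np

def gaussian_pdf(x, mu, Sigma):
    # Multivariate normal density N(x | mu, Sigma) for a d-dimensional vector x.
    d = len(mu)
    diff = x - mu
    norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) / norm

def gmm_pdf(x, pis, mus, Sigmas):
    # Mixture density: p(x) = sum_k pi_k * N(x | mu_k, Sigma_k).
    return sum(pi * gaussian_pdf(x, mu, S) for pi, mu, S in zip(pis, mus, Sigmas))

# Toy 2-D example with two equally weighted components.
pis = [0.5, 0.5]
mus = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
Sigmas = [np.eye(2), np.eye(2)]
print(gmm_pdf(np.array([1.0, 1.0]), pis, mus, Sigmas))
```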
Mixture Models
Gaussian Mixture Models
Expectation Maximization
The training of GMMs can be accomplished using Expectation Maximization (EM).
Step 1: Expectation (E-step)
Evaluate the “responsibilities” of each cluster with the current parameters.
EM for GMMs (algorithm)
Initialize the parameters
Evaluate the log likelihood
EM for GMMs (algorithm)
E-step: Evaluate the Responsibilities
\tau(z_{nk}) = \frac{\pi_k\,\mathcal{N}(x_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j\,\mathcal{N}(x_n \mid \mu_j, \Sigma_j)},
\qquad
\mathcal{N}(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}\,(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right)
EM for GMMs (algorithm)
M-Step: Re-estimate Parameters
Re-estimate \pi_k, \mu_k, \Sigma_k using the current responsibilities, then evaluate the log likelihood
\ln \prod_{n=1}^{N} p(x_n \mid \pi, \mu, \Sigma) = \sum_{n=1}^{N} \ln \sum_{k=1}^{K} \pi_k\,\mathcal{N}(x_n \mid \mu_k, \Sigma_k)
to check for convergence.
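A minimal NumPy sketch of one full EM iteration for a GMM, combining the E-step and M-step above (the closed-form M-step updates match the maximum-likelihood results derived on the following slides; function and variable names are illustrative):

```python
import numpy as np

def em_step(X, pis, mus, Sigmas):
    # One EM iteration: E-step (responsibilities), then M-step (parameter re-estimation).
    N, d = X.shape
    K = len(pis)

    # E-step: tau[n, k] proportional to pi_k * N(x_n | mu_k, Sigma_k).
    tau = np.empty((N, K))
    for k in range(K):
        diff = X - mus[k]
        inv = np.linalg.inv(Sigmas[k])
        norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigmas[k]))
        tau[:, k] = pis[k] * np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1)) / norm
    log_likelihood = np.sum(np.log(tau.sum(axis=1)))      # sum_n ln sum_k pi_k N(...)
    tau /= tau.sum(axis=1, keepdims=True)                  # normalize responsibilities

    # M-step: closed-form updates using the responsibilities.
    Nk = tau.sum(axis=0)                                   # effective points per cluster
    mus = [(tau[:, k] @ X) / Nk[k] for k in range(K)]
    Sigmas = [((X - mus[k]).T * tau[:, k]) @ (X - mus[k]) / Nk[k] for k in range(K)]
    pis = Nk / N
    return pis, mus, Sigmas, log_likelihood
```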
Maximum Likelihood of a GMM
Useful derivatives: \frac{d\,e^x}{dx} = e^x, \qquad \frac{d \ln x}{dx} = \frac{1}{x}
Optimization of the means: setting the derivative of the log likelihood with respect to \mu_k to zero, with
\mathcal{N}(x_n \mid \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_k|^{1/2}} \exp\!\left(-\tfrac{1}{2}\,(x_n-\mu_k)^{T}\Sigma_k^{-1}(x_n-\mu_k)\right),
gives
\Sigma_k^{-1} \sum_{n=1}^{N} \tau(z_{nk})\,x_n = \Sigma_k^{-1} \sum_{n=1}^{N} \tau(z_{nk})\,\mu_k
\;\Rightarrow\;
\mu_k = \frac{1}{N_k} \sum_{n=1}^{N} \tau(z_{nk})\,x_n, \qquad N_k = \sum_{n=1}^{N} \tau(z_{nk})
Maximum Likelihood of a GMM
Optimization of the covariance: setting the derivative of the log likelihood with respect to \Sigma_k to zero gives
\Sigma_k = \frac{1}{N_k} \sum_{n=1}^{N} \tau(z_{nk})\,(x_n - \mu_k)(x_n - \mu_k)^{T}
Maximum Likelihood of a GMM
Optimization of the mixing coefficients, with a Lagrange multiplier \lambda enforcing \sum_{k=1}^{K} \pi_k = 1:
F = \sum_{n=1}^{N} \ln \sum_{k=1}^{K} \pi_k\,\mathcal{N}(x_n \mid \mu_k, \Sigma_k) + \lambda\left(\sum_{k=1}^{K} \pi_k - 1\right)
\frac{dF}{d\pi_k} = \sum_{n=1}^{N} \frac{\mathcal{N}(x_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j\,\mathcal{N}(x_n \mid \mu_j, \Sigma_j)} + \lambda = 0
\;\Rightarrow\;
\sum_{n=1}^{N} \frac{\pi_k\,\mathcal{N}(x_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j\,\mathcal{N}(x_n \mid \mu_j, \Sigma_j)} + \lambda\pi_k = \sum_{n=1}^{N} \tau(z_{nk}) + \lambda\pi_k = 0
Summing over k, and using \sum_{k=1}^{K}\sum_{n=1}^{N} \tau(z_{nk}) = N and \sum_{k=1}^{K} \pi_k = 1, gives \lambda = -N, hence
\pi_k = \frac{1}{N} \sum_{n=1}^{N} \tau(z_{nk}) = \frac{N_k}{N}
MLE of a GMM
How to Apply the GMMs?
Collect training data of each category (label).
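A minimal sketch of this recipe, assuming scikit-learn's GaussianMixture is available (the slides do not name a library): fit one GMM per category on that category's training data, then label a new sample with the category whose GMM assigns it the highest log likelihood.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(data_per_label, n_components=3):
    # data_per_label: {label: array of shape (num_samples, num_features)}
    return {label: GaussianMixture(n_components=n_components).fit(X)
            for label, X in data_per_label.items()}

def classify(gmms, x):
    # Pick the category whose GMM assigns x the highest average log likelihood.
    scores = {label: gmm.score(x.reshape(1, -1)) for label, gmm in gmms.items()}
    return max(scores, key=scores.get)
```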
Practical Issues for Images
Computational efficiency.
Appendix: Matrix and Vector Derivatives
Slides from Tae-Kyun Kim, Machine Learning for Computer Vision, lecture notes.
Ref:
• J. D. M. Rennie, A Simple Exercise on Matrix Derivatives.
• K. B. Petersen, The Matrix Cookbook.
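A few standard identities from the references above that the GMM derivations earlier in the slides rely on:

\frac{\partial}{\partial x}\, a^{T} x = a, \qquad
\frac{\partial}{\partial x}\, x^{T} A x = (A + A^{T})\, x, \qquad
\frac{\partial}{\partial A}\, \ln|A| = (A^{-1})^{T}, \qquad
\frac{\partial}{\partial A}\, \operatorname{tr}(A B) = B^{T}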