Fisher Discriminant
Fisher's Discriminant
Fisher's Discriminant for the two-category case

Sample mean for the class $\omega_i$:

$$\mathbf{m}_i = \frac{1}{n_i} \sum_{\mathbf{x} \in D_i} \mathbf{x}, \qquad i = 1, 2$$

Mean of the projected samples $y = \mathbf{w}^T \mathbf{x}$:

$$\tilde{m}_i = \frac{1}{n_i} \sum_{\mathbf{x} \in D_i} \mathbf{w}^T \mathbf{x} = \frac{1}{n_i} \sum_{y \in Y_i} y = \mathbf{w}^T \mathbf{m}_i$$

Scatter matrix for the class $\omega_i$:

$$S_i = \sum_{\mathbf{x} \in D_i} (\mathbf{x} - \mathbf{m}_i)(\mathbf{x} - \mathbf{m}_i)^T$$
Scatter of the projected samples:

$$\tilde{s}_i^2 = \sum_{y \in Y_i} (y - \tilde{m}_i)^2$$

Fisher criterion function to be maximized:

$$J(\mathbf{w}) = \frac{|\tilde{m}_1 - \tilde{m}_2|^2}{\tilde{s}_1^2 + \tilde{s}_2^2}$$
The denominator can be written in terms of the class scatter matrices:

$$\tilde{s}_1^2 + \tilde{s}_2^2 = \mathbf{w}^T S_1 \mathbf{w} + \mathbf{w}^T S_2 \mathbf{w} = \mathbf{w}^T S_W \mathbf{w}$$
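The identity above can be checked numerically. The following sketch uses illustrative synthetic data and an arbitrary direction `w` (none of these values come from the slides), and compares the scatter of the projected 1-D samples against $\mathbf{w}^T S_W \mathbf{w}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative classes in 2-D (means and sample counts are arbitrary)
D1 = rng.normal([0.0, 0.0], 1.0, size=(40, 2))
D2 = rng.normal([3.0, 1.0], 1.0, size=(50, 2))

w = np.array([1.0, -0.5])          # an arbitrary projection direction

def scatter(D):
    """S_i = sum over class of (x - m_i)(x - m_i)^T."""
    Xc = D - D.mean(axis=0)
    return Xc.T @ Xc

S_W = scatter(D1) + scatter(D2)    # S_W = S_1 + S_2

def proj_scatter(D):
    """Scatter of the projected samples, computed directly in 1-D."""
    y = D @ w
    return np.sum((y - y.mean()) ** 2)

lhs = proj_scatter(D1) + proj_scatter(D2)   # s~1^2 + s~2^2
rhs = w @ S_W @ w                           # w^T S_W w
assert np.isclose(lhs, rhs)
```

The agreement holds for any `w`, since projection is linear and the means commute with it.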
Let us define

$$S_i = \sum_{\mathbf{x} \in D_i} (\mathbf{x} - \mathbf{m}_i)(\mathbf{x} - \mathbf{m}_i)^T \qquad \text{and} \qquad S_W = S_1 + S_2$$

Then

$$\tilde{s}_i^2 = \sum_{\mathbf{x} \in D_i} (\mathbf{w}^T \mathbf{x} - \mathbf{w}^T \mathbf{m}_i)^2 = \sum_{\mathbf{x} \in D_i} \mathbf{w}^T (\mathbf{x} - \mathbf{m}_i)(\mathbf{x} - \mathbf{m}_i)^T \mathbf{w} = \mathbf{w}^T S_i \mathbf{w}$$
In terms of the scatter matrices,

$$J(\mathbf{w}) = \frac{\mathbf{w}^T S_B \mathbf{w}}{\mathbf{w}^T S_W \mathbf{w}}$$

is always a scalar quantity.
To maximize

$$J(\mathbf{w}) = \frac{\mathbf{w}^T S_B \mathbf{w}}{\mathbf{w}^T S_W \mathbf{w}},$$

define $\mathbf{a} = S_B \mathbf{w}$ and $\mathbf{b} = S_W \mathbf{w}$, so that

$$J(\mathbf{w}) = \frac{\mathbf{w}^T \mathbf{a}}{\mathbf{w}^T \mathbf{b}}$$

Setting the derivative to zero for a maximum:

$$\frac{\partial J(\mathbf{w})}{\partial \mathbf{w}} = 0 \;\Longrightarrow\; (\mathbf{w}^T \mathbf{b})\,\mathbf{a} - (\mathbf{w}^T \mathbf{a})\,\mathbf{b} = 0$$

Hence

$$\mathbf{a} = \frac{\mathbf{w}^T \mathbf{a}}{\mathbf{w}^T \mathbf{b}}\,\mathbf{b} = J(\mathbf{w})\,\mathbf{b} = \lambda \mathbf{b}$$
Substituting $\mathbf{a}$ and $\mathbf{b}$ back gives a generalized eigenvalue problem:

$$S_B \mathbf{w} = \lambda S_W \mathbf{w}$$

$$S_W^{-1} S_B \mathbf{w} = \lambda \mathbf{w} \qquad \text{(assuming } S_W \text{ is full rank)}$$

$$J(\mathbf{w}) = \frac{\mathbf{w}^T S_B \mathbf{w}}{\mathbf{w}^T S_W \mathbf{w}} = \frac{\mathbf{w}^T \lambda S_W \mathbf{w}}{\mathbf{w}^T S_W \mathbf{w}} = \lambda \left( \frac{\mathbf{w}^T S_W \mathbf{w}}{\mathbf{w}^T S_W \mathbf{w}} \right) = \lambda$$
• For two categories, $S_B = (\mathbf{m}_1 - \mathbf{m}_2)(\mathbf{m}_1 - \mathbf{m}_2)^T$, so $S_B \mathbf{w}$ is in the direction of $\mathbf{m}_1 - \mathbf{m}_2$. Also, the scale of $\mathbf{w}$ does not matter, only its direction does. So we can write

$$S_B \mathbf{w} = (\mathbf{m}_1 - \mathbf{m}_2)(\mathbf{m}_1 - \mathbf{m}_2)^T \mathbf{w} = (\mathbf{m}_1 - \mathbf{m}_2)\{(\mathbf{m}_1 - \mathbf{m}_2)^T \mathbf{w}\}$$

Since $\{(\mathbf{m}_1 - \mathbf{m}_2)^T \mathbf{w}\}$ is a scalar, the eigenvalue equation is solved, up to scale, by

$$\mathbf{w} = S_W^{-1}(\mathbf{m}_1 - \mathbf{m}_2)$$
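A minimal sketch of the resulting two-class rule, on illustrative synthetic Gaussian data (the class means, sizes, and seed are arbitrary choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
D1 = rng.normal([0.0, 0.0], 1.0, size=(60, 2))
D2 = rng.normal([2.5, 1.5], 1.0, size=(60, 2))

m1, m2 = D1.mean(axis=0), D2.mean(axis=0)
S_W = (D1 - m1).T @ (D1 - m1) + (D2 - m2).T @ (D2 - m2)

# Optimal direction: w proportional to S_W^{-1} (m1 - m2)
w = np.linalg.solve(S_W, m1 - m2)

def J(v):
    """Fisher criterion: |m~1 - m~2|^2 / (s~1^2 + s~2^2)."""
    num = (v @ (m1 - m2)) ** 2     # equals v^T S_B v for the two-class S_B
    den = v @ S_W @ v
    return num / den

# w maximizes J: no random direction should score higher
for d in rng.normal(size=(100, 2)):
    assert J(d) <= J(w) * (1 + 1e-9)
```

Only the direction of `w` matters; any positive rescaling gives the same value of $J$.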
Multiple Discriminant Analysis

Within-Class Scatter Matrix (assume $d > c$, where $d$ is the feature dimension and $c$ the number of classes):

$$S_W = \sum_{i=1}^{c} S_i, \qquad S_i = \sum_{\mathbf{x} \in D_i} (\mathbf{x} - \mathbf{m}_i)(\mathbf{x} - \mathbf{m}_i)^T, \qquad \mathbf{m}_i = \frac{1}{n_i} \sum_{\mathbf{x} \in D_i} \mathbf{x}$$
Total mean vector:

$$\mathbf{m} = \frac{1}{n} \sum_{\mathbf{x}} \mathbf{x} = \frac{1}{n} \sum_{i=1}^{c} n_i \mathbf{m}_i, \qquad \text{where } n = \sum_{i=1}^{c} n_i$$
Total Scatter Matrix:

$$S_T = \sum_{\mathbf{x}} (\mathbf{x} - \mathbf{m})(\mathbf{x} - \mathbf{m})^T = \sum_{i=1}^{c} \sum_{\mathbf{x} \in D_i} (\mathbf{x} - \mathbf{m}_i + \mathbf{m}_i - \mathbf{m})(\mathbf{x} - \mathbf{m}_i + \mathbf{m}_i - \mathbf{m})^T$$

Expanding, the cross terms vanish because $\sum_{\mathbf{x} \in D_i} (\mathbf{x} - \mathbf{m}_i) = 0$:

$$S_T = \sum_{i=1}^{c} \sum_{\mathbf{x} \in D_i} (\mathbf{x} - \mathbf{m}_i)(\mathbf{x} - \mathbf{m}_i)^T + \sum_{i=1}^{c} \sum_{\mathbf{x} \in D_i} (\mathbf{m}_i - \mathbf{m})(\mathbf{m}_i - \mathbf{m})^T = S_W + \sum_{i=1}^{c} n_i (\mathbf{m}_i - \mathbf{m})(\mathbf{m}_i - \mathbf{m})^T$$
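The decomposition of the total scatter can be verified numerically. This sketch uses three illustrative Gaussian classes in 3-D (means and sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# Three illustrative classes in 3-D
classes = [rng.normal(mu, 1.0, size=(n, 3))
           for mu, n in [([0, 0, 0], 30), ([3, 0, 1], 40), ([0, 4, 2], 50)]]

X = np.vstack(classes)
m = X.mean(axis=0)                       # total mean vector

S_T = (X - m).T @ (X - m)                # total scatter matrix

S_W = np.zeros((3, 3))                   # within-class scatter
S_B = np.zeros((3, 3))                   # between-class scatter
for D in classes:
    mi = D.mean(axis=0)
    S_W += (D - mi).T @ (D - mi)
    d = (mi - m)[:, None]
    S_B += len(D) * (d @ d.T)            # n_i (m_i - m)(m_i - m)^T

assert np.allclose(S_T, S_W + S_B)       # S_T = S_W + S_B
# With c = 3 classes, rank(S_B) = c - 1 = 2
assert np.linalg.matrix_rank(S_B) == 2
```

The rank check reflects the constraint discussed next: the class means are tied together through the total mean, so only $c-1$ of the rank-one terms are independent.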
Between-class scatter:

$$S_B = \sum_{i=1}^{c} n_i (\mathbf{m}_i - \mathbf{m})(\mathbf{m}_i - \mathbf{m})^T, \qquad \operatorname{rank}(S_B) = c - 1$$

$S_B$ is a sum of $c$ rank-1 matrices. However, only $c - 1$ of these matrices are independent, owing to the constraint

$$\mathbf{m} = \frac{1}{n} \sum_{\mathbf{x}} \mathbf{x} = \frac{1}{n} \sum_{i=1}^{c} n_i \mathbf{m}_i$$

Hence $\operatorname{rank}(S_B) = c - 1$.
Total scatter:

$$S_T = S_W + S_B$$
Basic Idea

Project each feature vector onto $c - 1$ directions:

$$y_i = \mathbf{w}_i^T \mathbf{x}, \qquad i = 1, \ldots, c - 1$$

$$\mathbf{y} = W^T \mathbf{x} \qquad \text{(projection of a feature vector to a lower dimension)}$$

$$\tilde{\mathbf{m}}_i = \frac{1}{n_i} \sum_{\mathbf{y} \in Y_i} \mathbf{y} \qquad \text{(projection of the class mean vector in the lower dimension)}$$

$$\tilde{\mathbf{m}} = \frac{1}{n} \sum_{i=1}^{c} n_i \tilde{\mathbf{m}}_i \qquad \text{(projection of the pooled mean vector to the lower dimension)}$$
Within-class scatter matrix after projection to the lower dimension:

$$\tilde{S}_W = \sum_{i=1}^{c} \sum_{\mathbf{y} \in Y_i} (\mathbf{y} - \tilde{\mathbf{m}}_i)(\mathbf{y} - \tilde{\mathbf{m}}_i)^T = W^T S_W W$$

Between-class scatter matrix after projection to the lower dimension:

$$\tilde{S}_B = \sum_{i=1}^{c} n_i (\tilde{\mathbf{m}}_i - \tilde{\mathbf{m}})(\tilde{\mathbf{m}}_i - \tilde{\mathbf{m}})^T = W^T S_B W$$
Fisher criterion function that needs to be maximized:

$$J(W) = \frac{|\tilde{S}_B|}{|\tilde{S}_W|} = \frac{|W^T S_B W|}{|W^T S_W W|}$$

The columns of the optimal $W$ are the generalized eigenvectors corresponding to the largest eigenvalues in

$$S_B \mathbf{w}_i = \lambda_i S_W \mathbf{w}_i, \qquad |S_B - \lambda_i S_W| = 0$$
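Putting the multi-class case together, the following sketch forms $S_W$ and $S_B$, solves the generalized eigenproblem via $S_W^{-1} S_B$ (one of several equivalent formulations), and projects to $c - 1$ dimensions. The data, class means, and seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
c, d = 3, 4                              # 3 classes in 4-D; project to c - 1 = 2
classes = [rng.normal(mu, 1.0, size=(50, d))
           for mu in ([0, 0, 0, 0], [3, 0, 1, 0], [0, 4, 0, 2])]

X = np.vstack(classes)
m = X.mean(axis=0)
S_W = sum((D - D.mean(0)).T @ (D - D.mean(0)) for D in classes)
S_B = sum(len(D) * np.outer(D.mean(0) - m, D.mean(0) - m) for D in classes)

# Generalized eigenproblem S_B w_i = lambda_i S_W w_i,
# solved here through the (generally non-symmetric) matrix S_W^{-1} S_B
evals, evecs = np.linalg.eig(np.linalg.solve(S_W, S_B))
order = np.argsort(evals.real)[::-1]
W = evecs[:, order[:c - 1]].real          # columns: top c - 1 eigenvectors

Y = X @ W                                 # y = W^T x for every sample
assert Y.shape == (X.shape[0], c - 1)

# At most c - 1 eigenvalues are nonzero, since rank(S_B) = c - 1
assert np.sum(evals.real > 1e-8) == c - 1

# Check S~_W = W^T S_W W against the scatter computed in the projected space
S_tW = sum((D @ W - (D @ W).mean(0)).T @ (D @ W - (D @ W).mean(0))
           for D in classes)
assert np.allclose(S_tW, W.T @ S_W @ W)
```

In practice a symmetric generalized solver (e.g. `scipy.linalg.eigh(S_B, S_W)`) is numerically preferable, but the plain formulation above follows the slide's eigenvalue equation directly.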