Lec 09
Transformations
簡韶逸 Shao-Yi Chien
Department of Electrical Engineering
National Taiwan University
Outline
• Estimation – 2D Projective Transformation
For each correspondence $\mathbf{x}_i \leftrightarrow \mathbf{x}_i'$, with $\mathbf{x}_i' = (x_i', y_i', w_i')^\top$, the condition $\mathbf{x}_i' \times H\mathbf{x}_i = \mathbf{0}$ expands to
$$\begin{bmatrix} y_i'\,\mathbf{h}^{3\top}\mathbf{x}_i - w_i'\,\mathbf{h}^{2\top}\mathbf{x}_i \\ w_i'\,\mathbf{h}^{1\top}\mathbf{x}_i - x_i'\,\mathbf{h}^{3\top}\mathbf{x}_i \\ x_i'\,\mathbf{h}^{2\top}\mathbf{x}_i - y_i'\,\mathbf{h}^{1\top}\mathbf{x}_i \end{bmatrix} = \mathbf{0}$$
which can be rewritten as
$$\begin{bmatrix} \mathbf{0}^\top & -w_i'\mathbf{x}_i^\top & y_i'\mathbf{x}_i^\top \\ w_i'\mathbf{x}_i^\top & \mathbf{0}^\top & -x_i'\mathbf{x}_i^\top \\ -y_i'\mathbf{x}_i^\top & x_i'\mathbf{x}_i^\top & \mathbf{0}^\top \end{bmatrix}\begin{bmatrix} \mathbf{h}^1 \\ \mathbf{h}^2 \\ \mathbf{h}^3 \end{bmatrix} = \mathbf{0} \qquad\Longleftrightarrow\qquad A_i\mathbf{h} = \mathbf{0}$$
Direct Linear Transformation (DLT)
• Equations are linear in h: $A_i\mathbf{h} = \mathbf{0}$
• Only 2 out of the 3 equations are linearly independent (indeed, 2 equations per point)
$$\begin{bmatrix} \mathbf{0}^\top & -w_i'\mathbf{x}_i^\top & y_i'\mathbf{x}_i^\top \\ w_i'\mathbf{x}_i^\top & \mathbf{0}^\top & -x_i'\mathbf{x}_i^\top \\ -y_i'\mathbf{x}_i^\top & x_i'\mathbf{x}_i^\top & \mathbf{0}^\top \end{bmatrix}\begin{bmatrix} \mathbf{h}^1 \\ \mathbf{h}^2 \\ \mathbf{h}^3 \end{bmatrix} = \mathbf{0}$$
Direct Linear Transformation (DLT)
• Dropping the third (dependent) row leaves 2 equations per point:
$$\begin{bmatrix} \mathbf{0}^\top & -w_i'\mathbf{x}_i^\top & y_i'\mathbf{x}_i^\top \\ w_i'\mathbf{x}_i^\top & \mathbf{0}^\top & -x_i'\mathbf{x}_i^\top \end{bmatrix}\begin{bmatrix} \mathbf{h}^1 \\ \mathbf{h}^2 \\ \mathbf{h}^3 \end{bmatrix} = \mathbf{0}$$
• Holds for any homogeneous representation, e.g. $(x_i', y_i', 1)^\top$
Direct Linear Transformation (DLT)
• Solving for H: stack the equations from 4 correspondences
$$\begin{bmatrix} A_1 \\ A_2 \\ A_3 \\ A_4 \end{bmatrix}\mathbf{h} = \mathbf{0} \qquad\Longleftrightarrow\qquad A\mathbf{h} = \mathbf{0}$$
• Size of A is 8×9 or 12×9, but rank is 8
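The DLT solution can be sketched in a few lines of NumPy: stack the two independent rows of each $A_i$, then take the right singular vector of $A$ associated with the smallest singular value as $\mathbf{h}$. A minimal illustration (the function name and the choice $w_i' = 1$ are ours, not from the slides):

```python
import numpy as np

def dlt_homography(pts, pts_p):
    """Direct Linear Transformation: estimate H from n >= 4 point
    correspondences pts[i] <-> pts_p[i] (inhomogeneous 2D points).
    Stacks the two independent rows of each A_i and takes the null
    vector of A via SVD."""
    A = []
    for (x, y), (xp, yp) in zip(pts, pts_p):
        X = [x, y, 1.0]                     # homogeneous x_i, with w_i' = 1
        A.append([0.0, 0.0, 0.0] + [-c for c in X] + [yp * c for c in X])
        A.append(X + [0.0, 0.0, 0.0] + [-xp * c for c in X])
    A = np.asarray(A)                       # shape (2n, 9), rank 8 for n = 4
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]                              # right singular vector of smallest sigma
    return h.reshape(3, 3)
```

With exact correspondences and n = 4 the null space is one-dimensional and the recovered H equals the true homography up to scale; with n > 4 noisy points this is the algebraic least-squares solution.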
Degenerate Configurations
If $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ are collinear, lying on a line $\mathbf{l}$, define $H^* = \mathbf{x}_4'\,\mathbf{l}^\top$. Then
$$H^*\mathbf{x}_i = \mathbf{x}_4'(\mathbf{l}^\top\mathbf{x}_i) = \mathbf{0},\ i = 1, 2, 3 \qquad\text{and}\qquad H^*\mathbf{x}_4 = \mathbf{x}_4'(\mathbf{l}^\top\mathbf{x}_4) = k\,\mathbf{x}_4'$$
so the rank-1 matrix $H^*$ also satisfies the constraints, and $H$ cannot be determined uniquely.

Solutions from Lines
Line correspondences transform as $\mathbf{l}_i = H^\top\mathbf{l}_i'$, giving equations $A\mathbf{h} = \mathbf{0}$ of the same form.
Minimum of 4 lines
• 3D homographies (15 dof): minimum of 5 points or 5 planes
• 2D affinities (6 dof): minimum of 3 points or lines
• A conic provides 5 constraints
Solutions from Mixed Type
• A 2D homography cannot be determined uniquely from the correspondence of 2 points and 2 lines
• It can be determined from 3 points and 1 line, or from 1 point and 3 lines
Cost Functions
• Algebraic distance
• Geometric distance
• Reprojection error
• Comparison
• Geometric interpretation
• Sampson error
Algebraic Distance
DLT minimizes $\|A\mathbf{h}\|$
$\mathbf{e} = A\mathbf{h}$: residual vector
$\mathbf{e}_i$: partial vector for each correspondence $(\mathbf{x}_i \leftrightarrow \mathbf{x}_i')$, the algebraic error vector:
$$d_{alg}(\mathbf{x}_i', H\mathbf{x}_i)^2 = \|\mathbf{e}_i\|^2 = \left\| \begin{bmatrix} \mathbf{0}^\top & -w_i'\mathbf{x}_i^\top & y_i'\mathbf{x}_i^\top \\ w_i'\mathbf{x}_i^\top & \mathbf{0}^\top & -x_i'\mathbf{x}_i^\top \end{bmatrix}\mathbf{h} \right\|^2$$
Algebraic distance between two points:
$$d_{alg}(\mathbf{x}_1, \mathbf{x}_2)^2 = a_1^2 + a_2^2, \qquad \mathbf{a} = (a_1, a_2, a_3)^\top = \mathbf{x}_1 \times \mathbf{x}_2$$
Summing over all correspondences:
$$\sum_i d_{alg}(\mathbf{x}_i', H\mathbf{x}_i)^2 = \sum_i \|\mathbf{e}_i\|^2 = \|A\mathbf{h}\|^2 = \|\mathbf{e}\|^2$$
Not geometrically/statistically meaningful, but given good normalization it works fine and is very fast (use for initialization of non-linear minimization)
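As a small concrete sketch of the definition above, the algebraic distance can be computed directly from the cross product (the helper name `d_alg_sq` is hypothetical, not from the slides):

```python
import numpy as np

def d_alg_sq(x1, x2):
    """Squared algebraic distance between homogeneous 2D points x1, x2:
    a = x1 x x2, d_alg^2 = a1^2 + a2^2 (the third component of a is
    ignored, which is what makes the distance only 'algebraic')."""
    a = np.cross(x1, x2)
    return a[0] ** 2 + a[1] ** 2
```

For a correspondence, `d_alg_sq(x_p, H @ x)` gives exactly the per-point term $\|\mathbf{e}_i\|^2$ that DLT minimizes; it vanishes iff the two homogeneous points are proportional.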
Geometric Distance
$\mathbf{x}$: measured coordinates
$\hat{\mathbf{x}}$: estimated coordinates
$\bar{\mathbf{x}}$: true coordinates
$d(\cdot,\cdot)$: Euclidean distance (in the image)
Error in one image (e.g., a calibration pattern, where the points in the first image are known exactly):
$$\hat{H} = \arg\min_H \sum_i d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2$$
Reprojection error
$$(\hat{H}, \hat{\mathbf{x}}_i, \hat{\mathbf{x}}_i') = \arg\min_{H,\, \hat{\mathbf{x}}_i,\, \hat{\mathbf{x}}_i'} \sum_i d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2 \qquad \text{subject to } \hat{\mathbf{x}}_i' = \hat{H}\hat{\mathbf{x}}_i$$
Symmetric Transfer Error vs. Reprojection Error
Symmetric transfer error:
$$d(\mathbf{x}, H^{-1}\mathbf{x}')^2 + d(\mathbf{x}', H\mathbf{x})^2$$
Reprojection error:
$$d(\mathbf{x}, \hat{\mathbf{x}})^2 + d(\mathbf{x}', \hat{\mathbf{x}}')^2$$
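The symmetric transfer error is easy to evaluate for a given H, since it needs no estimated points $\hat{\mathbf{x}}$; a minimal sketch for one correspondence (the function and helper names are ours, not from the slides):

```python
import numpy as np

def sym_transfer_error(H, x, xp):
    """Symmetric transfer error d(x, H^-1 x')^2 + d(x', H x)^2 for one
    correspondence of inhomogeneous points x <-> xp."""
    def transfer(M, p):
        # apply homography M to inhomogeneous point p, dehomogenize
        v = M @ np.array([p[0], p[1], 1.0])
        return v[:2] / v[2]
    fwd = transfer(H, x) - np.asarray(xp, float)              # error in image 2
    bwd = transfer(np.linalg.inv(H), xp) - np.asarray(x, float)  # error in image 1
    return fwd @ fwd + bwd @ bwd
```

The reprojection error, by contrast, requires minimizing over the $\hat{\mathbf{x}}_i$ as well, which is why it is usually handled inside the non-linear optimization rather than in closed form.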
Comparison of Geometric and Algebraic Distances
Error in one image:
$$\mathbf{x}_i' = (x_i', y_i', w_i')^\top \qquad \hat{\mathbf{x}}_i' = (\hat{x}_i', \hat{y}_i', \hat{w}_i')^\top = H\bar{\mathbf{x}}_i$$
$$A_i\mathbf{h} = \mathbf{e}_i = \begin{bmatrix} y_i'\hat{w}_i' - w_i'\hat{y}_i' \\ w_i'\hat{x}_i' - x_i'\hat{w}_i' \end{bmatrix}$$
$$d_{alg}(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2 = (y_i'\hat{w}_i' - w_i'\hat{y}_i')^2 + (w_i'\hat{x}_i' - x_i'\hat{w}_i')^2$$
$$d(\mathbf{x}_i', \hat{\mathbf{x}}_i') = \left( (y_i'/w_i' - \hat{y}_i'/\hat{w}_i')^2 + (\hat{x}_i'/\hat{w}_i' - x_i'/w_i')^2 \right)^{1/2} = d_{alg}(\mathbf{x}_i', \hat{\mathbf{x}}_i') / (w_i'\hat{w}_i')$$
These two distance metrics are related, but not identical:
$w_i' = 1$ typically, but $\hat{w}_i' = \mathbf{h}^{3\top}\bar{\mathbf{x}}_i \ne 1$, except for affinities
➔ For affinities, DLT can minimize geometric distance
Sampson Error
Between algebraic and geometric error.
The vector $\hat{X}$ that minimizes the geometric error $\|X - \hat{X}\|^2$ is the closest point on the variety $\nu_H$ to the measurement $X$.
First-order approximation, with $\delta_X = \hat{X} - X$:
$$C_H(X + \delta_X) = C_H(X) + \frac{\partial C_H}{\partial X}\,\delta_X$$
Since $C_H(\hat{X}) = 0$:
$$C_H(X) + \frac{\partial C_H}{\partial X}\,\delta_X = 0 \quad\Longrightarrow\quad J\delta_X = -\mathbf{e}, \qquad J = \frac{\partial C_H}{\partial X}$$
Find the vector $\delta_X$ that minimizes $\|\delta_X\|$ subject to $J\delta_X = -\mathbf{e}$.
Sampson Error
Find the vector $\delta_X$ that minimizes $\|\delta_X\|$ subject to $J\delta_X = -\mathbf{e}$: minimize the Lagrangian
$$\delta_X^\top\delta_X - 2\lambda^\top(J\delta_X + \mathbf{e})$$
Setting the derivatives to zero:
$$2\delta_X^\top - 2\lambda^\top J = \mathbf{0}^\top \;\Rightarrow\; \delta_X = J^\top\lambda$$
$$2(J\delta_X + \mathbf{e}) = \mathbf{0} \;\Rightarrow\; JJ^\top\lambda + \mathbf{e} = \mathbf{0} \;\Rightarrow\; \lambda = -(JJ^\top)^{-1}\mathbf{e}$$
$$\delta_X = -J^\top(JJ^\top)^{-1}\mathbf{e}, \qquad \hat{X} = X + \delta_X$$
$$\|\delta_X\|^2 = \delta_X^\top\delta_X = \mathbf{e}^\top(JJ^\top)^{-1}\mathbf{e}$$
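The closed-form correction and error can be computed directly from $\mathbf{e}$ and $J$; a sketch assuming both are already given as NumPy arrays (`sampson_correction` is a hypothetical helper name):

```python
import numpy as np

def sampson_correction(e, J):
    """Given the algebraic error vector e = C_H(X) and the Jacobian
    J = dC_H/dX, return the first-order correction delta_X and the
    Sampson error e^T (J J^T)^-1 e."""
    lam = -np.linalg.solve(J @ J.T, e)     # lambda = -(J J^T)^-1 e
    delta = J.T @ lam                      # delta_X = J^T lambda
    err = float(e @ np.linalg.solve(J @ J.T, e))
    return delta, err
```

By construction `delta` satisfies the linearized constraint $J\delta_X = -\mathbf{e}$, and `err` equals $\|\delta_X\|^2$.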
Sampson Error
The first-order analysis above yields
$$\|\delta_X\|^2 = \delta_X^\top\delta_X = \mathbf{e}^\top(JJ^\top)^{-1}\mathbf{e} \qquad \text{(Sampson error)}$$
Sampson Approximation
$$\|\delta_X\|^2 = \mathbf{e}^\top(JJ^\top)^{-1}\mathbf{e}$$
A few points:
(i) For a 2D homography, $X = (x, y, x', y')^\top$
(ii) $\mathbf{e} = C_H(X)$ is the algebraic error vector
(iii) $J = \partial C_H / \partial X$ is a 2×4 matrix, e.g.
$$J_{11} = \partial\!\left(-w_i'\mathbf{x}_i^\top\mathbf{h}^2 + y_i'\mathbf{x}_i^\top\mathbf{h}^3\right)/\partial x = -w_i'h_{21} + y_i'h_{31}$$
(iv) Similar to the algebraic error $\mathbf{e}^\top\mathbf{e}$; in fact the same as the Mahalanobis distance $\|\mathbf{e}\|^2_{JJ^\top}$
(v) The Sampson error is independent of linear reparametrization (cancels out between $\mathbf{e}$ and $J$)
(vi) Must be summed over all points: $\sum_i \mathbf{e}_i^\top(J_iJ_i^\top)^{-1}\mathbf{e}_i$
(vii) Close to the geometric error, but with much fewer unknowns
Statistical Cost Function and Maximum Likelihood Estimation
• Optimal cost function related to noise model of measurement
• Assume zero-mean isotropic Gaussian noise (assume outliers removed)
$$\Pr(\mathbf{x}) = \frac{1}{2\pi\sigma^2}\, e^{-d(\mathbf{x},\bar{\mathbf{x}})^2/(2\sigma^2)}$$
Error in one image:
$$\Pr(\{\mathbf{x}_i'\} \mid H) = \prod_i \frac{1}{2\pi\sigma^2}\, e^{-d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2/(2\sigma^2)}$$
Maximum likelihood estimate: minimize
$$\sum_i d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2$$
Equivalent to minimizing the geometric error function
Statistical Cost Function and Maximum Likelihood Estimation
With the same zero-mean isotropic Gaussian noise model, error in both images:
$$\Pr(\{\mathbf{x}_i, \mathbf{x}_i'\} \mid H) = \prod_i \frac{1}{2\pi\sigma^2}\, e^{-\left(d(\mathbf{x}_i,\bar{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2\right)/(2\sigma^2)}$$
The maximum likelihood estimate minimizes the reprojection error
$$\sum_i d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2$$
Mahalanobis distance (for a Gaussian with covariance matrix $\Sigma$):
$$\|X - \bar{X}\|_\Sigma^2 = (X - \bar{X})^\top\Sigma^{-1}(X - \bar{X})$$
Error in two images (independent):
$$\|X - \bar{X}\|_\Sigma^2 + \|X' - \bar{X}'\|_{\Sigma'}^2$$
Varying covariances:
$$\sum_i \|X_i - \bar{X}_i\|_{\Sigma_i}^2 + \|X_i' - \bar{X}_i'\|_{\Sigma_i'}^2$$
Invariance to Transforms?
With changed coordinates $\tilde{\mathbf{x}} = T\mathbf{x}$, $\tilde{\mathbf{x}}' = T'\mathbf{x}'$, and $\mathbf{x}' = H\mathbf{x}$, $\tilde{\mathbf{x}}' = \tilde{H}\tilde{\mathbf{x}}$:
$$T'\mathbf{x}' = \tilde{H}T\mathbf{x} \;\Rightarrow\; \mathbf{x}' = T'^{-1}\tilde{H}T\mathbf{x} \;\Rightarrow\; H = T'^{-1}\tilde{H}T$$
Does the DLT algorithm applied to $\tilde{\mathbf{x}}_i \leftrightarrow \tilde{\mathbf{x}}_i'$ yield $\tilde{H} = T'HT^{-1}$?
Non-invariance of DLT
Effect of change of coordinates on the algebraic error:
$$\tilde{\mathbf{e}}_i = \tilde{\mathbf{x}}_i' \times \tilde{H}\tilde{\mathbf{x}}_i = T'\mathbf{x}_i' \times T'HT^{-1}T\mathbf{x}_i = T'^{*}(\mathbf{x}_i' \times H\mathbf{x}_i) = T'^{*}\mathbf{e}_i$$
For similarities
$$T' = \begin{bmatrix} sR & \mathbf{t} \\ \mathbf{0}^\top & 1 \end{bmatrix}, \qquad T'^{*} = s\begin{bmatrix} R & \mathbf{0} \\ -\mathbf{t}^\top R & s \end{bmatrix} \qquad (T'^{*}\text{: cofactor matrix})$$
so
$$\|\tilde{A}_i\tilde{\mathbf{h}}\| = \|(\tilde{e}_{i1}, \tilde{e}_{i2})^\top\| = \|sR\,(e_{i1}, e_{i2})^\top\| = s\,\|A_i\mathbf{h}\|$$
Does the DLT algorithm applied to $\tilde{\mathbf{x}}_i \leftrightarrow \tilde{\mathbf{x}}_i'$ yield $\tilde{H} = T'HT^{-1}$?
DLT on the transformed points solves
$$\text{minimize } \sum_i d_{alg}(\tilde{\mathbf{x}}_i', \tilde{H}\tilde{\mathbf{x}}_i)^2 \quad \text{subject to } \|\tilde{H}\| = 1$$
whereas the transformed solution of the original problem solves
$$\text{minimize } \sum_i d_{alg}(\tilde{\mathbf{x}}_i', \tilde{H}\tilde{\mathbf{x}}_i)^2 \quad \text{subject to } \|H\| = 1$$
The constraints differ, so in general the answer is no: DLT is not invariant to the change of coordinates.
Invariance of Geometric Error
Given $\mathbf{x}_i \leftrightarrow \mathbf{x}_i'$ and $H$, and $\tilde{\mathbf{x}}_i \leftrightarrow \tilde{\mathbf{x}}_i'$ with $\tilde{\mathbf{x}}_i = T\mathbf{x}_i$, $\tilde{\mathbf{x}}_i' = T'\mathbf{x}_i'$, $\tilde{H} = T'HT^{-1}$: for Euclidean transformations $T$ and $T'$,
$$d(\tilde{\mathbf{x}}', \tilde{H}\tilde{\mathbf{x}}) = d(T'\mathbf{x}', T'HT^{-1}T\mathbf{x}) = d(T'\mathbf{x}', T'H\mathbf{x}) = d(\mathbf{x}', H\mathbf{x})$$
so the geometric error is invariant.
Or, for an image of width $w$ and height $h$, normalize with
$$T_{norm} = \begin{bmatrix} w+h & 0 & w/2 \\ 0 & w+h & h/2 \\ 0 & 0 & 1 \end{bmatrix}^{-1}$$
Importance of Normalization
$$\begin{bmatrix} 0 & 0 & 0 & -x_i & -y_i & -1 & y_i'x_i & y_i'y_i & y_i' \\ x_i & y_i & 1 & 0 & 0 & 0 & -x_i'x_i & -x_i'y_i & -x_i' \end{bmatrix} \begin{bmatrix} \mathbf{h}^1 \\ \mathbf{h}^2 \\ \mathbf{h}^3 \end{bmatrix} = \mathbf{0}$$
Orders of magnitude of the columns: $\sim\!10^2$, $\sim\!10^2$, $1$, $\sim\!10^2$, $\sim\!10^2$, $1$, $\sim\!10^4$, $\sim\!10^4$, $\sim\!10^2$ — without normalization the entries are badly unbalanced and the least-squares solution is dominated by the large terms.
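Normalization is commonly implemented as the standard isotropic scheme (translate the centroid to the origin, then scale so the mean distance from the origin is $\sqrt{2}$); a sketch of that variant, with a helper name of our choosing:

```python
import numpy as np

def normalize_points(pts):
    """Isotropic normalization for the normalized DLT: translate the
    centroid to the origin and scale so the mean distance from the
    origin is sqrt(2).  Returns (T, normalized inhomogeneous points),
    where T is the 3x3 normalizing transform."""
    pts = np.asarray(pts, float)
    c = pts.mean(axis=0)                               # centroid
    d = np.linalg.norm(pts - c, axis=1).mean()         # mean distance
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    homog = np.column_stack([pts, np.ones(len(pts))])
    return T, (homog @ T.T)[:, :2]
```

After estimating $\tilde{H}$ from the normalized points, the homography in the original coordinates is recovered as $H = T'^{-1}\tilde{H}T$.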
Error in one image
$$\sum_i d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2 \qquad f: \mathbf{h} \mapsto (H\mathbf{x}_1, H\mathbf{x}_2, \dots, H\mathbf{x}_n)$$
Minimize $\|X - f(\mathbf{h})\|$, with $X$ composed of the $2n$ inhomogeneous coordinates of the points $x_i'$.
Symmetric transfer error
$$\sum_i d(\mathbf{x}_i, H^{-1}\mathbf{x}_i')^2 + d(\mathbf{x}_i', H\mathbf{x}_i)^2 \qquad f: \mathbf{h} \mapsto (H^{-1}\mathbf{x}_1', \dots, H^{-1}\mathbf{x}_n', H\mathbf{x}_1, \dots, H\mathbf{x}_n)$$
Minimize $\|X - f(\mathbf{h})\|$, with $X$ the $4n$-vector of inhomogeneous coordinates of the points $x_i$ and $x_i'$.
Reprojection error
$$\sum_i d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2$$
Minimize $\|X - f(\mathbf{h})\|$, with $X$ a $4n$-vector (here the minimization is over both $H$ and the estimated points $\hat{\mathbf{x}}_i$).
Initialization
• Powell’s method
• Simplex method
Levenberg-Marquardt Algorithm
For a mapping function $f$ taking a parameter vector $\mathbf{P}$ to an estimated measurement vector $\hat{X} = f(\mathbf{P})$:
find $\mathbf{P}$ to minimize $\|X - f(\mathbf{P})\|$.
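A minimal Levenberg-Marquardt sketch for $\min_{\mathbf{P}} \|X - f(\mathbf{P})\|^2$ with a forward-difference Jacobian; the function name, damping schedule, and step size are illustrative assumptions, not the slides' specification:

```python
import numpy as np

def levenberg_marquardt(f, p0, X, iters=50, lam=1e-3):
    """Minimal LM sketch: find parameters p minimizing ||X - f(p)||^2.
    lam is the damping factor: decreased after a successful step
    (toward Gauss-Newton), increased otherwise (toward gradient descent)."""
    p = np.asarray(p0, float)
    def cost(p):
        r = X - f(p)
        return r, r @ r
    r, c = cost(p)
    for _ in range(iters):
        # forward-difference Jacobian of f at p
        J = np.empty((len(r), len(p)))
        eps = 1e-6
        for j in range(len(p)):
            dp = np.zeros_like(p); dp[j] = eps
            J[:, j] = (f(p + dp) - f(p)) / eps
        # augmented normal equations: (J^T J + lam I) delta = J^T r
        delta = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        r2, c2 = cost(p + delta)
        if c2 < c:
            p, r, c, lam = p + delta, r2, c2, lam * 0.1
        else:
            lam *= 10.0
    return p
```

For homography refinement, `f` would map the 9 entries of h to the stacked transferred points, as in the cost functions above.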
Gold Standard Algorithm
Objective
Given n ≥ 4 2D-to-2D point correspondences {xi↔xi'}, determine the Maximum Likelihood Estimation of H
(this also implies computing the optimal x̂i' = Hx̂i)
Algorithm
(i) Initialization: compute an initial estimate using normalized DLT or RANSAC
(ii) Geometric minimization of either the Sampson error:
  ● Minimize the Sampson error
  ● Minimize using Levenberg-Marquardt over the 9 entries of h
or the Gold Standard error:
  ● Compute an initial estimate for the optimal {x̂i}
  ● Minimize the cost $\sum_i d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2$ over {H, x̂1, x̂2, …, x̂n}
(dimension + codimension = dimension of space)

Codimension   Model                               t²
1             Line (l), Fundamental matrix (F)    3.84σ²
2             Homography (H), Camera matrix (P)   5.99σ²
3             Trifocal tensor (T)                 7.81σ²
How Many Samples?
Choose N so that, with probability p, at least one random sample is free from outliers, e.g. p = 0.99:
$$\left(1 - (1 - e)^s\right)^N = 1 - p \qquad\Longrightarrow\qquad N = \log(1 - p)\,/\,\log\!\left(1 - (1 - e)^s\right)$$

      proportion of outliers e
s    5%   10%   20%   25%   30%   40%   50%
2     2     3     5     6     7    11    17
3     3     4     7     9    11    19    35
4     3     5     9    13    17    34    72
5     4     6    12    17    26    57   146
6     4     7    16    24    37    97   293
7     4     8    20    33    54   163   588
8     5     9    26    44    78   272  1177
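The formula for N can be evaluated directly; a one-function sketch (the function name is ours) that reproduces entries of the table above:

```python
import math

def ransac_trials(p, eps, s):
    """Number of RANSAC samples N so that, with probability p, at least
    one sample of size s is outlier-free, given outlier ratio eps:
    N = log(1-p) / log(1-(1-eps)^s), rounded up."""
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - eps) ** s))
```

For example, p = 0.99, s = 4, eps = 0.5 gives N = 72, matching the table.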
Acceptable Consensus Set?
• Typically, terminate when the size of the consensus set reaches the expected number of inliers:
$$T = (1 - e)\,n$$
Adaptively Determining the Number of Samples
e is often unknown a priori, so pick the worst case, e.g. 50%, and adapt if more inliers are found; e.g. 80% inliers would yield e = 0.2.
• N = ∞, sample_count = 0
• While N > sample_count, repeat:
  • Choose a sample and count the number of inliers
  • Set e = 1 − (number of inliers)/(total number of points)
  • Recompute N from e: N = log(1 − p) / log(1 − (1 − e)^s)
  • Increment sample_count by 1
• Terminate
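The adaptive loop above can be sketched as follows; `fit` and `inliers_of` are hypothetical user-supplied callbacks (minimal-sample model fit and inlier classification), not part of the slides:

```python
import math
import random

def adaptive_ransac(data, fit, inliers_of, s, p=0.99):
    """Adaptive RANSAC: start from the worst case (N = infinity) and
    shrink N whenever a sample yields a larger consensus set.
    fit(sample) -> model; inliers_of(model, data) -> inlier subset."""
    N, count = float('inf'), 0
    best = []
    while count < N:
        sample = random.sample(data, s)          # minimal sample of size s
        model = fit(sample)
        inl = inliers_of(model, data)
        if len(inl) > len(best):
            best = inl
            e = 1 - len(inl) / len(data)         # updated outlier ratio
            if e > 0:
                N = math.log(1 - p) / math.log(1 - (1 - e) ** s)
            else:
                N = 0                            # all inliers: stop
        count += 1
    return fit(best), best                       # refit on the consensus set
```

A toy use: robustly estimating the mean of 1D data with gross outliers, where `fit` is the sample mean and `inliers_of` thresholds the residual.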
Robust Maximum Likelihood Estimation
[Figure: putative correspondences (268); outliers (117); inliers (151)]