Multiple View Geometry in Computer Vision

Parameter estimation
class 5
Multiple View Geometry
Comp 290-089
Marc Pollefeys
Content
• Background: Projective geometry (2D, 3D),
Parameter estimation, Algorithm evaluation.
• Single View: Camera model, Calibration, Single
View Geometry.
• Two Views: Epipolar Geometry, 3D
reconstruction, Computing F, Computing
structure, Plane and homographies.
• Three Views: Trifocal Tensor, Computing T.
• More Views: N-Linearities, Multiple view
reconstruction, Bundle adjustment, autocalibration, Dynamic SfM, Cheirality, Duality
Multiple View Geometry course schedule
(subject to change)
Jan. 7, 9     Intro & motivation              Projective 2D Geometry
Jan. 14, 16   (no class)                      Projective 2D Geometry
Jan. 21, 23   Projective 3D Geometry          (no class)
Jan. 28, 30   Parameter Estimation            Parameter Estimation
Feb. 4, 6     Algorithm Evaluation            Camera Models
Feb. 11, 13   Camera Calibration              Single View Geometry
Feb. 18, 20   Epipolar Geometry               3D Reconstruction
Feb. 25, 27   Fund. Matrix Comp.              Structure Comp.
Mar. 4, 6     Planes & Homographies           Trifocal Tensor
Mar. 18, 20   Three View Reconstruction       Multiple View Geometry
Mar. 25, 27   Multiple View Reconstruction    Bundle Adjustment
Apr. 1, 3     Auto-Calibration                Papers
Apr. 8, 10    Dynamic SfM                     Papers
Apr. 15, 17   Cheirality                      Papers
Apr. 22, 24   Duality                         Project Demos
Projective 3D Geometry
• Points, lines, planes and quadrics
• Transformations
• Π∞, ω∞ and Ω∞
Singular Value Decomposition
$A = U \Sigma V^T$

Homogeneous least-squares:
$\min \|A\mathbf{X}\|$ subject to $\|\mathbf{X}\| = 1$
solution: $\mathbf{X} = \mathbf{v}_n$, the last column of $V$ (the right singular vector of the smallest singular value)
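This homogeneous least-squares step is reused throughout the lecture (DLT, fundamental matrix, etc.). A minimal numpy sketch, with an illustrative function name:

```python
import numpy as np

def homogeneous_lsq(A):
    """Solve min ||A X|| subject to ||X|| = 1 via the SVD A = U S V^T."""
    # The minimizer is the right singular vector of the smallest singular
    # value, i.e. the last column of V (= last row of V^T).
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```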
Parameter estimation
• 2D homography: given a set of $(\mathbf{x}_i, \mathbf{x}_i')$, compute $H$ ($\mathbf{x}_i' = H\mathbf{x}_i$)
• 3D-to-2D camera projection: given a set of $(\mathbf{X}_i, \mathbf{x}_i)$, compute $P$ ($\mathbf{x}_i = P\mathbf{X}_i$)
• Fundamental matrix: given a set of $(\mathbf{x}_i, \mathbf{x}_i')$, compute $F$ ($\mathbf{x}_i'^T F \mathbf{x}_i = 0$)
• Trifocal tensor: given a set of $(\mathbf{x}_i, \mathbf{x}_i', \mathbf{x}_i'')$, compute $T$
Number of measurements required
• At least as many independent equations as degrees of freedom are required
• Example:

$\lambda \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = H\mathbf{x} = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$

2 independent equations / point
8 degrees of freedom
4 × 2 ≥ 8
Approximate solutions
• Minimal solution
  4 points yield an exact solution for H
• More points
  No exact solution, because measurements are inexact ("noise")
  Search for the "best" H according to some cost function
  Algebraic or geometric/statistical cost
Gold Standard algorithm
• A cost function that is optimal under certain assumptions
• The computational algorithm that minimizes it is called the "Gold Standard" algorithm
• Other algorithms can then be compared to it
Direct Linear Transformation
(DLT)
xxii  Hx i  0
 h 1T x 
 T i
T
xi  xi, yi, wi  Hx i   h 2 x i 
 3T 
 yh 3T x  wh 2 T x   h x i 
i
i
 i T i
1
3T

x i  Hx i  wih x i  xih x i 


2T
1T
 xih x i  yih x i 


 0T

T

 wi x i
 yix iT

T

 wi x i
T
0
xix iT
T
1



yi x i
h 
 2
T
 xix i  h   0

T
3

0  h 
Ai h  0
•
Direct Linear Transformation
(DLT)
• Equations are linear in h: $A_i\mathbf{h} = \mathbf{0}$
• Only 2 out of 3 rows are linearly independent (indeed, 2 equations per point), since $x_i'\,A_{i1} + y_i'\,A_{i2} + w_i'\,A_{i3} = \mathbf{0}^T$:

$\begin{pmatrix} \mathbf{0}^T & -w_i'\mathbf{x}_i^T & y_i'\mathbf{x}_i^T \\ w_i'\mathbf{x}_i^T & \mathbf{0}^T & -x_i'\mathbf{x}_i^T \end{pmatrix} \begin{pmatrix} \mathbf{h}^1 \\ \mathbf{h}^2 \\ \mathbf{h}^3 \end{pmatrix} = \mathbf{0}$

(only drop the third row if $w_i' \neq 0$)
• Holds for any homogeneous representation of $\mathbf{x}_i'$, e.g. $(x_i', y_i', 1)$
Direct Linear Transformation
(DLT)
• Solving for H

$A\mathbf{h} = \mathbf{0}$ with $A = \begin{pmatrix} A_1 \\ A_2 \\ A_3 \\ A_4 \end{pmatrix}$

size of A is 8×9 (or 12×9 if all three rows are kept), but rank 8
The trivial solution $\mathbf{h} = \mathbf{0}_9$ is not of interest
The 1-D null-space yields the solution of interest
pick for example the solution with $\|\mathbf{h}\| = 1$
Direct Linear Transformation
(DLT)
• Over-determined solution

$A\mathbf{h} = \mathbf{0}$ with $A = \begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_n \end{pmatrix}$

No exact solution because measurements are inexact ("noise")
Find an approximate solution:
- $A\mathbf{h} = \mathbf{0}$ is not attainable exactly, so minimize $\|A\mathbf{h}\|$
- an additional constraint is needed to avoid $\mathbf{h} = \mathbf{0}$, e.g. $\|\mathbf{h}\| = 1$
DLT algorithm
Objective
Given n≥4 2D to 2D point correspondences {xi↔xi’},
determine the 2D homography matrix H such that xi’=Hxi
Algorithm
(i) For each correspondence xi ↔ xi' compute Ai (usually only the first two rows are needed).
(ii) Assemble the n 2x9 matrices Ai into a single 2nx9 matrix A
(iii) Obtain the SVD of A. The solution for h is the last column of V (smallest singular value)
(iv) Determine H from h
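A minimal numpy sketch of the algorithm above, assuming inhomogeneous input points with w_i = w_i' = 1 and omitting the data normalization discussed in the next lecture; the function name is illustrative:

```python
import numpy as np

def dlt_homography(pts, pts_prime):
    """Basic DLT: estimate H (up to scale) from n >= 4 correspondences.

    pts, pts_prime: (n, 2) arrays of matching points x_i <-> x_i'.
    Only the first two rows of each A_i are used (step (i))."""
    rows = []
    for (x, y), (xp, yp) in zip(np.asarray(pts), np.asarray(pts_prime)):
        X = np.array([x, y, 1.0])                               # x_i with w_i = 1
        rows.append(np.concatenate([np.zeros(3), -X, yp * X]))  # [0, -w'x, y'x], w' = 1
        rows.append(np.concatenate([X, np.zeros(3), -xp * X]))  # [w'x, 0, -x'x]
    A = np.asarray(rows)                                        # 2n x 9
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)                                 # h = last column of V
```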
Inhomogeneous solution
Since h can only be computed up to scale, pick $h_j = 1$, e.g. $h_9 = 1$, and solve for the 8-vector $\tilde{\mathbf{h}}$:

$\begin{pmatrix} 0 & 0 & 0 & -x_i w_i' & -y_i w_i' & -w_i w_i' & x_i y_i' & y_i y_i' \\ x_i w_i' & y_i w_i' & w_i w_i' & 0 & 0 & 0 & -x_i x_i' & -y_i x_i' \end{pmatrix} \tilde{\mathbf{h}} = \begin{pmatrix} -w_i y_i' \\ w_i x_i' \end{pmatrix}$

Solve using Gaussian elimination (4 points) or linear least-squares (more than 4 points).
However, if $h_9 = 0$ this approach fails; results are also poor if $h_9$ is close to zero. Therefore, it is not recommended.
Note $h_9 = H_{33} = 0$ if the origin is mapped to infinity:
$\mathbf{l}_\infty^T H\mathbf{x}_0 = \begin{pmatrix} 0 & 0 & 1 \end{pmatrix} H \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = 0$
Degenerate configurations
(Figure: configurations of four point correspondences $\mathbf{x}_1, \ldots, \mathbf{x}_4$ with $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ collinear; candidate mappings H? (case A) and H'? (case B))

Constraints: $\mathbf{x}_i' \times H\mathbf{x}_i = \mathbf{0}$, $i = 1, 2, 3, 4$

Define: $H^* = \mathbf{x}_4'\mathbf{l}^T$, with $\mathbf{l}$ the line through the collinear points $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$

Then,
$H^*\mathbf{x}_i = \mathbf{x}_4'(\mathbf{l}^T\mathbf{x}_i) = \mathbf{0}$, $i = 1, 2, 3$
$H^*\mathbf{x}_4 = \mathbf{x}_4'(\mathbf{l}^T\mathbf{x}_4) = k\,\mathbf{x}_4'$

$H^*$ is a rank-1 matrix and thus not a homography.
If $H^*$ is the unique solution, then no homography maps $\mathbf{x}_i \to \mathbf{x}_i'$ (case B).
If a further solution $H$ exists, then $\alpha H^* + \beta H$ is also a solution (case A)
(a 2-D null-space instead of a 1-D null-space).
Solutions from lines, etc.
2D homographies from 2D lines: $\mathbf{l}_i = H^T\mathbf{l}_i'$, again $A\mathbf{h} = \mathbf{0}$
Minimum of 4 lines

3D homographies (15 dof)
Minimum of 5 points or 5 planes

2D affinities (6 dof)
Minimum of 3 points or lines

A conic provides 5 constraints

Mixed configurations?
Cost functions
• Algebraic distance
• Geometric distance
• Reprojection error
• Comparison
• Geometric interpretation
• Sampson error
Algebraic distance
DLT minimizes $\|A\mathbf{h}\|$

$\mathbf{e} = A\mathbf{h}$: residual vector
$\mathbf{e}_i$: partial error vector for each correspondence $(\mathbf{x}_i \leftrightarrow \mathbf{x}_i')$, the algebraic error vector

$d_{alg}(\mathbf{x}_i', H\mathbf{x}_i)^2 = \|\mathbf{e}_i\|^2 = \left\| \begin{pmatrix} \mathbf{0}^T & -w_i'\mathbf{x}_i^T & y_i'\mathbf{x}_i^T \\ w_i'\mathbf{x}_i^T & \mathbf{0}^T & -x_i'\mathbf{x}_i^T \end{pmatrix} \mathbf{h} \right\|^2$

Algebraic distance: $d_{alg}(\mathbf{x}_1, \mathbf{x}_2)^2 = a_1^2 + a_2^2$ where $\mathbf{a} = (a_1, a_2, a_3)^T = \mathbf{x}_1 \times \mathbf{x}_2$

$\sum_i d_{alg}(\mathbf{x}_i', H\mathbf{x}_i)^2 = \sum_i \|\mathbf{e}_i\|^2 = \|A\mathbf{h}\|^2 = \|\mathbf{e}\|^2$

Not geometrically/statistically meaningful, but given good normalization it works fine and is very fast (use for initialization).
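As a small illustration (numpy assumed, names illustrative), the algebraic distance follows directly from the cross product:

```python
import numpy as np

def d_alg_sq(x1, x2):
    """Squared algebraic distance of homogeneous 2D points:
    a = x1 x x2, d_alg^2 = a1^2 + a2^2 (the third component is ignored)."""
    a = np.cross(x1, x2)
    return float(a[0] ** 2 + a[1] ** 2)

# e.g. d_alg(x_i', H x_i)^2  ==  d_alg_sq(xp_i, H @ x_i)  for homogeneous x_i, xp_i
```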
Geometric distance
$\mathbf{x}$: measured coordinates
$\hat{\mathbf{x}}$: estimated coordinates
$\bar{\mathbf{x}}$: true coordinates
$d(\cdot,\cdot)$: Euclidean distance (in the image)

Error in one image (e.g. calibration pattern, where $\bar{\mathbf{x}}_i$ is known exactly):
$\hat{H} = \arg\min_H \sum_i d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2$

Symmetric transfer error:
$\hat{H} = \arg\min_H \sum_i d(\mathbf{x}_i, H^{-1}\mathbf{x}_i')^2 + d(\mathbf{x}_i', H\mathbf{x}_i)^2$
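A possible numpy sketch of the symmetric transfer error for finite points (names illustrative; no points at infinity assumed):

```python
import numpy as np

def symmetric_transfer_error(H, pts, pts_prime):
    """sum_i d(x_i, H^-1 x_i')^2 + d(x_i', H x_i)^2 for (n, 2) point arrays."""
    def transfer(M, p):
        q = (M @ np.column_stack([p, np.ones(len(p))]).T).T
        return q[:, :2] / q[:, 2:3]                 # back to inhomogeneous coordinates
    pts, pts_prime = np.asarray(pts, float), np.asarray(pts_prime, float)
    forward  = np.sum((pts_prime - transfer(H, pts)) ** 2)
    backward = np.sum((pts - transfer(np.linalg.inv(H), pts_prime)) ** 2)
    return forward + backward
```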
Reprojection error
$(\hat{H}, \hat{\mathbf{x}}_i, \hat{\mathbf{x}}_i') = \arg\min_{H, \hat{\mathbf{x}}_i, \hat{\mathbf{x}}_i'} \sum_i d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2$

subject to $\hat{\mathbf{x}}_i' = \hat{H}\hat{\mathbf{x}}_i$

Reprojection error
(Figure: symmetric transfer error $d(\mathbf{x}, H^{-1}\mathbf{x}')^2 + d(\mathbf{x}', H\mathbf{x})^2$ versus reprojection error $d(\mathbf{x}, \hat{\mathbf{x}})^2 + d(\mathbf{x}', \hat{\mathbf{x}}')^2$)
Comparison of geometric
and algebraic distances
Error in one image:

$\bar{\mathbf{x}}_i = (\bar{x}_i, \bar{y}_i, \bar{w}_i)^T$, $\quad \hat{\mathbf{x}}_i' = (\hat{x}_i', \hat{y}_i', \hat{w}_i')^T = H\bar{\mathbf{x}}_i$

$A_i\mathbf{h} = \mathbf{e}_i = \begin{pmatrix} y_i'\hat{w}_i' - w_i'\hat{y}_i' \\ w_i'\hat{x}_i' - x_i'\hat{w}_i' \end{pmatrix}$

$d_{alg}(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2 = (y_i'\hat{w}_i' - w_i'\hat{y}_i')^2 + (w_i'\hat{x}_i' - x_i'\hat{w}_i')^2$

$d(\mathbf{x}_i', \hat{\mathbf{x}}_i') = \left[ (y_i'/w_i' - \hat{y}_i'/\hat{w}_i')^2 + (x_i'/w_i' - \hat{x}_i'/\hat{w}_i')^2 \right]^{1/2} = d_{alg}(\mathbf{x}_i', \hat{\mathbf{x}}_i') / (w_i'\hat{w}_i')$

$w_i' = 1$ typically, but $\hat{w}_i' = \mathbf{h}^{3T}\bar{\mathbf{x}}_i \neq 1$ in general, except for affinities.

For affinities the DLT can minimize geometric distance.

Possibility for an iterative algorithm.
Geometric interpretation of
reprojection error
Estimating the homography $\approx$ fitting a surface (variety) $\nu_H$ to points $\mathbf{X} = (x, y, x', y')^T$ in $\mathbb{R}^4$

$\mathbf{x}_i' \times H\mathbf{x}_i = \mathbf{0}$ represents 2 quadrics in $\mathbb{R}^4$ (quadratic in $\mathbf{X}$)

$\|\mathbf{X}_i - \hat{\mathbf{X}}_i\|^2 = (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 + (x_i' - \hat{x}_i')^2 + (y_i' - \hat{y}_i')^2 = d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2$

$d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2 = d_\perp(\mathbf{X}_i, \nu_H)^2$

Analogue to conic fitting: algebraic distance $d_{alg}(\mathbf{x}, C) = \mathbf{x}^T C\mathbf{x}$ versus geometric distance $d_\perp(\mathbf{x}, C)$
Sampson error
(between algebraic and geometric error)

The vector $\hat{\mathbf{X}}$ that minimizes the geometric error $\|\mathbf{X} - \hat{\mathbf{X}}\|^2$ is the closest point on the variety $\nu_H$ to the measurement $\mathbf{X}$.

Sampson error: 1st order approximation of $\hat{\mathbf{X}}$

$A\mathbf{h} = \mathcal{C}_H(\mathbf{X}) = \mathbf{0}$

$\mathcal{C}_H(\mathbf{X} + \boldsymbol{\delta}_X) = \mathcal{C}_H(\mathbf{X}) + \frac{\partial\mathcal{C}_H}{\partial\mathbf{X}}\boldsymbol{\delta}_X$

With $\boldsymbol{\delta}_X = \hat{\mathbf{X}} - \mathbf{X}$ and $\mathcal{C}_H(\hat{\mathbf{X}}) = \mathbf{0}$:

$\mathcal{C}_H(\mathbf{X}) + \frac{\partial\mathcal{C}_H}{\partial\mathbf{X}}\boldsymbol{\delta}_X = \mathbf{0}$, i.e. $J\boldsymbol{\delta}_X = -\mathbf{e}$ with $J = \frac{\partial\mathcal{C}_H}{\partial\mathbf{X}}$, $\mathbf{e} = \mathcal{C}_H(\mathbf{X})$

Find the vector $\boldsymbol{\delta}_X$ that minimizes $\|\boldsymbol{\delta}_X\|$ subject to $J\boldsymbol{\delta}_X = -\mathbf{e}$.

Use Lagrange multipliers: minimize $\boldsymbol{\delta}_X^T\boldsymbol{\delta}_X - 2\boldsymbol{\lambda}^T(J\boldsymbol{\delta}_X + \mathbf{e})$

derivatives: $2\boldsymbol{\delta}_X - 2J^T\boldsymbol{\lambda} = \mathbf{0} \;\Rightarrow\; \boldsymbol{\delta}_X = J^T\boldsymbol{\lambda}$
$J\boldsymbol{\delta}_X + \mathbf{e} = \mathbf{0} \;\Rightarrow\; JJ^T\boldsymbol{\lambda} + \mathbf{e} = \mathbf{0} \;\Rightarrow\; \boldsymbol{\lambda} = -(JJ^T)^{-1}\mathbf{e}$
$\Rightarrow\; \boldsymbol{\delta}_X = -J^T(JJ^T)^{-1}\mathbf{e}$

$\hat{\mathbf{X}} = \mathbf{X} + \boldsymbol{\delta}_X$

$\|\boldsymbol{\delta}_X\|^2 = \boldsymbol{\delta}_X^T\boldsymbol{\delta}_X = \mathbf{e}^T(JJ^T)^{-1}\mathbf{e}$
Sampson error
(between algebraic and geometric error)

From the derivation above, the Sampson approximation is

$\|\boldsymbol{\delta}_X\|^2 = \boldsymbol{\delta}_X^T\boldsymbol{\delta}_X = \mathbf{e}^T(JJ^T)^{-1}\mathbf{e}$   (Sampson error)
A few points
(i) For a 2D homography, $\mathbf{X} = (x, y, x', y')^T$
(ii) $\mathbf{e} = \mathcal{C}_H(\mathbf{X})$ is the algebraic error vector
(iii) $J = \partial\mathcal{C}_H/\partial\mathbf{X}$ is a 2×4 matrix, e.g. $J_{11} = \partial(-w_i'\mathbf{x}_i^T\mathbf{h}^2 + y_i'\mathbf{x}_i^T\mathbf{h}^3)/\partial x = -w_i'h_{21} + y_i'h_{31}$
(iv) Similar to the algebraic error $\|\mathbf{e}\|^2 = \mathbf{e}^T\mathbf{e}$; in fact, it is the Mahalanobis distance $\|\mathbf{e}\|^2_{JJ^T} = \mathbf{e}^T(JJ^T)^{-1}\mathbf{e}$
(v) The Sampson error is independent of linear reparametrization (it cancels out between $\mathbf{e}$ and $J$)
(vi) It must be summed over all points
(vii) Close to the geometric error, but with far fewer unknowns
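Putting points (i)-(iii) together, a sketch of the per-correspondence Sampson error for a homography, assuming affine coordinates (w_i = w_i' = 1); names are illustrative:

```python
import numpy as np

def sampson_error(H, x, y, xp, yp):
    """Sampson error e^T (J J^T)^-1 e for one correspondence (x, y) <-> (x', y'),
    using the first two rows of A_i with w_i = w_i' = 1."""
    X = np.array([x, y, 1.0])
    h1, h2, h3 = H                                   # rows of H
    e = np.array([-(h2 @ X) + yp * (h3 @ X),         # algebraic error C_H(X)
                   (h1 @ X) - xp * (h3 @ X)])
    # J = dC_H/dX, a 2x4 matrix over X = (x, y, x', y')
    J = np.array([[-H[1, 0] + yp * H[2, 0], -H[1, 1] + yp * H[2, 1], 0.0,        h3 @ X],
                  [ H[0, 0] - xp * H[2, 0],  H[0, 1] - xp * H[2, 1], -(h3 @ X),  0.0  ]])
    return float(e @ np.linalg.solve(J @ J.T, e))
```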
Statistical cost function and
Maximum Likelihood Estimation
• Optimal cost function related to noise model
• Assume zero-mean isotropic Gaussian noise (assume outliers removed)

$\Pr(\mathbf{x}) = \frac{1}{2\pi\sigma^2}\, e^{-d(\mathbf{x}, \bar{\mathbf{x}})^2 / (2\sigma^2)}$

Error in one image:

$\Pr(\{\mathbf{x}_i'\} \mid H) = \prod_i \frac{1}{2\pi\sigma^2}\, e^{-d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2 / (2\sigma^2)}$

$\log \Pr(\{\mathbf{x}_i'\} \mid H) = -\frac{1}{2\sigma^2} \sum_i d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2 + \text{constant}$

Maximum Likelihood Estimate: minimize $\sum_i d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2$
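One way to carry out this minimization in practice is nonlinear least squares. A sketch assuming scipy is available, an initial H (e.g. from the DLT), and finite points; H is kept over-parameterized (9 entries up to scale), and names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares   # scipy assumed available

def mle_one_image(H0, pts, pts_prime):
    """Refine an initial H by minimizing the one-image geometric error
    sum_i d(x_i', H x_i)^2 with nonlinear least squares."""
    pts = np.column_stack([pts, np.ones(len(pts))])          # homogeneous x_i
    pts_prime = np.asarray(pts_prime, float)
    def residuals(h):
        q = (h.reshape(3, 3) @ pts.T).T
        return ((q[:, :2] / q[:, 2:3]) - pts_prime).ravel()  # components of d(x_i', H x_i)
    return least_squares(residuals, np.ravel(H0)).x.reshape(3, 3)
```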
Statistical cost function and
Maximum Likelihood Estimation
• Optimal cost function related to noise model
• Assume zero-mean isotropic Gaussian noise (assume outliers removed)

$\Pr(\mathbf{x}) = \frac{1}{2\pi\sigma^2}\, e^{-d(\mathbf{x}, \bar{\mathbf{x}})^2 / (2\sigma^2)}$

Error in both images:

$\Pr(\{\mathbf{x}_i, \mathbf{x}_i'\} \mid H) = \prod_i \frac{1}{2\pi\sigma^2}\, e^{-\left(d(\mathbf{x}_i, \bar{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', H\bar{\mathbf{x}}_i)^2\right) / (2\sigma^2)}$

Maximum Likelihood Estimate: minimize $\sum_i d(\mathbf{x}_i, \hat{\mathbf{x}}_i)^2 + d(\mathbf{x}_i', \hat{\mathbf{x}}_i')^2$
Mahalanobis distance
• General Gaussian case
Measurement $\mathbf{X}$ with covariance matrix $\Sigma$:

$\|\mathbf{X} - \bar{\mathbf{X}}\|^2_\Sigma = (\mathbf{X} - \bar{\mathbf{X}})^T \Sigma^{-1} (\mathbf{X} - \bar{\mathbf{X}})$

Error in two images (independent):

$\|\mathbf{X} - \bar{\mathbf{X}}\|^2_\Sigma + \|\mathbf{X}' - \bar{\mathbf{X}}'\|^2_{\Sigma'}$

Varying covariances:

$\sum_i \|\mathbf{X}_i - \bar{\mathbf{X}}_i\|^2_{\Sigma_i} + \|\mathbf{X}_i' - \bar{\mathbf{X}}_i'\|^2_{\Sigma_i'}$
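For completeness, a one-line illustration of the Mahalanobis norm above (numpy assumed, names illustrative):

```python
import numpy as np

def mahalanobis_sq(X, X_bar, Sigma):
    """Squared Mahalanobis distance (X - X_bar)^T Sigma^-1 (X - X_bar)."""
    d = np.asarray(X, float) - np.asarray(X_bar, float)
    return float(d @ np.linalg.solve(Sigma, d))
```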
Next class:
Parameter estimation (continued)
Transformation invariance
and normalization
Iterative minimization
Robust estimation
Upcoming assignment
• Take two or more photographs from a single viewpoint
• Compute a panorama
• Use different measures: DLT, MLE
• Use Matlab
• Due Feb. 13