Lucas-Kanade Image Alignment
Slides from Iain Matthews
Applications of Image Alignment
• Ubiquitous computer vision technique
• Tracking
• Registration of MRI/CT/PET
Generative Model for an Image
• Parameterized model
(Figure: shape and appearance components, driven by parameters P, generate the image)
Fitting a Model to an Image
• What are the best model parameters to match an image?
(Figure: parameters P drive the shape and appearance model toward the image)
• Nonlinear optimization problem
Active Appearance Model
• Cootes, Edwards, Taylor, 1998
• Shape (landmarks): x = x̄ + P_s b_s
• Appearance (region of interest, warped to the reference frame): g = ḡ + P_g b_g
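As a rough illustration of these two linear models (not from the slides; all dimensions, bases, and coefficients below are made-up placeholders), a minimal numpy sketch:

```python
# Sketch of the AAM linear models: mean vector plus basis times coefficients.
# The sizes, bases and coefficients are arbitrary placeholders.
import numpy as np

n_landmarks, n_pixels = 68, 5000               # hypothetical sizes
x_bar = np.zeros(2 * n_landmarks)              # mean shape
P_s = np.random.randn(2 * n_landmarks, 10)     # shape basis
b_s = np.random.randn(10)                      # shape coefficients
x = x_bar + P_s @ b_s                          # x = x_bar + P_s b_s

g_bar = np.zeros(n_pixels)                     # mean appearance (reference frame)
P_g = np.random.randn(n_pixels, 15)            # appearance basis
b_g = np.random.randn(15)                      # appearance coefficients
g = g_bar + P_g @ b_g                          # g = g_bar + P_g b_g
```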
Image Alignment
Template: T(x); Image: I(x)
Warp: W(x;p), with an incremental warp W(x;p+Δp)
Image coordinates: x = (x, y)^T
Warp parameters: p = (p1, p2, …, pn)^T
(Figure: the warp W(x;p) maps template coordinates onto the image)
Want to: Minimize the Error
• Warp the image to get the warped image I(W(x;p))
• Compute the error image T(x) - I(W(x;p)) against the template T(x)
How to: Minimize the Error
Minimise the SSD with respect to p:
    Σ_x [ I(W(x;p)) - T(x) ]^2
Generally a nonlinear optimisation problem… how can we solve this?
Solution: solve for increments Δp to the current estimate:
    Σ_x [ I(W(x;p+Δp)) - T(x) ]^2,   then update p ← p + Δp
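As a minimal sketch of evaluating this SSD criterion, assuming a pure translation warp W(x;p) = x + p and bilinear interpolation (the function name and arguments are illustrative, not from the slides):

```python
# Evaluate sum_x [ I(W(x;p)) - T(x) ]^2 for a translation warp W(x;p) = x + p.
import numpy as np
from scipy.ndimage import map_coordinates

def ssd_error(I, T, p):
    ys, xs = np.mgrid[0:T.shape[0], 0:T.shape[1]]
    # Sample I at the warped coordinates (row = y + dy, column = x + dx).
    Iw = map_coordinates(I.astype(float), [ys + p[1], xs + p[0]], order=1)
    residual = Iw - T.astype(float)
    return np.sum(residual ** 2)
```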
Linearize
Taylor series expansion, linearize function f about x₀:
    f(x₀ + Δ) ≈ f(x₀) + (∂f/∂x)|_{x₀} Δ + …
For image alignment:
    I(W(x;p+Δp)) ≈ I(W(x;p)) + ∇I (∂W/∂p) Δp
Gradient Descent Solution
Least squares problem, solve for Δp:
    Σ_x [ I(W(x;p)) + ∇I (∂W/∂p) Δp - T(x) ]^2
Solution:
    Δp = H^-1 Σ_x [ ∇I (∂W/∂p) ]^T [ T(x) - I(W(x;p)) ]
where ∇I is the image gradient, ∂W/∂p is the Jacobian, T(x) - I(W(x;p)) is the error image, and the Hessian is
    H = Σ_x [ ∇I (∂W/∂p) ]^T [ ∇I (∂W/∂p) ]
Gradient Images
• Compute the image gradients I_x and I_y of I, then warp them with W(x;p) so that ∇I is evaluated at W(x;p)
(Figure: gradient images I_x and I_y, warped by W(x;p) alongside I(W(x;p)))
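A minimal sketch of this step, again assuming a translation warp and np.gradient for the derivatives (names are illustrative):

```python
# Compute the gradients I_x, I_y of I once, then warp them with the current
# W(x;p) = x + p so they are evaluated at W(x;p), like I(W(x;p)).
import numpy as np
from scipy.ndimage import map_coordinates

def warped_gradients(I, p, template_shape):
    Iy, Ix = np.gradient(I.astype(float))        # gradients computed on I(x)
    ys, xs = np.mgrid[0:template_shape[0], 0:template_shape[1]]
    coords = [ys + p[1], xs + p[0]]              # translation warp
    Ix_w = map_coordinates(Ix, coords, order=1)  # I_x(W(x;p))
    Iy_w = map_coordinates(Iy, coords, order=1)  # I_y(W(x;p))
    return Ix_w, Iy_w
```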
Jacobian
• Compute the Jacobian ∂W/∂p (a concrete affine sketch follows below)
• Mesh parameterization: the warp W(x;p) maps mesh vertices 1–4 of the template T(x) onto the image I(x)
Image coordinates: x = (x, y)^T
Warp parameters: p = (p1, p2, …, pn)^T = (dx1, dy1, …, dxn, dyn)^T, the mesh-vertex displacements
(Figure: the mesh and its numbered vertices in the template and in the image)
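The slide uses a mesh-vertex parameterization; as a simpler stand-in (an assumption, not the slide's warp), here is the Jacobian of the common 6-parameter affine warp:

```python
# Jacobian dW/dp at a pixel (x, y) for the affine warp
#   W(x;p) = ( (1+p1)*x + p3*y + p5 ,  p2*x + (1+p4)*y + p6 ).
import numpy as np

def affine_jacobian(x, y):
    # 2 x 6 matrix: each column is the derivative of W with respect to one p_i.
    return np.array([[x, 0, y, 0, 1, 0],
                     [0, x, 0, y, 0, 1]], dtype=float)
```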
Lucas-Kanade Algorithm
1. Warp I with W(x;p) → I(W(x;p))
2. Compute error image T(x) - I(W(x;p))
3. Warp gradient of I to compute ∇I
4. Evaluate Jacobian ∂W/∂p
5. Compute Hessian H
6. Compute Δp
7. Update parameters p ← p + Δp
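A minimal sketch of these seven steps for a pure translation warp W(x;p) = x + p (chosen for brevity; function and variable names are illustrative, not from the slides):

```python
# Forward-additive Lucas-Kanade for a translation warp W(x;p) = x + p.
import numpy as np
from scipy.ndimage import map_coordinates

def lucas_kanade_translation(I, T, p=np.zeros(2), n_iters=50, tol=1e-4):
    T = T.astype(float)
    ys, xs = np.mgrid[0:T.shape[0], 0:T.shape[1]]
    Iy, Ix = np.gradient(I.astype(float))                          # gradients of I(x)

    for _ in range(n_iters):
        coords = [ys + p[1], xs + p[0]]
        Iw  = map_coordinates(I.astype(float), coords, order=1)    # 1. warp I
        err = T - Iw                                               # 2. error image
        Ixw = map_coordinates(Ix, coords, order=1)                 # 3. warp gradients
        Iyw = map_coordinates(Iy, coords, order=1)
        # 4./5. The Jacobian is the identity for translation, so the steepest-
        # descent images are just the warped gradients; the Hessian is 2 x 2.
        sd = np.stack([Ixw.ravel(), Iyw.ravel()], axis=1)
        H = sd.T @ sd
        dp = np.linalg.solve(H, sd.T @ err.ravel())                # 6. solve for dp
        p = p + dp                                                 # 7. additive update
        if np.linalg.norm(dp) < tol:
            break
    return p
```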
Fast Gradient Descent?
• To reduce Hessian computation:
1. Make the Jacobian simple (or constant)
2. Avoid computing gradients on I
Shum-Szeliski Image Alignment
• Additive Image Alignment – Lucas, Kanade: update the warp parameters directly, W(x;p+Δp)
• Compositional Alignment – Shum, Szeliski: compose an incremental warp with the current one, W(x;p) ∘ W(x;Δp), where W(x;0+Δp) = W(x;Δp)
(Figures: both approaches map the image I(x) onto the template T(x); the compositional case goes through the intermediate I(W(x;p)))
Compositional Image Alignment
Minimise, with respect to Δp,
    Σ_x [ I(W(W(x;Δp); p)) - T(x) ]^2
(Figure: the incremental warp W(x;Δp) is composed with W(x;p), mapping through I(W(x;p)) to I(x))
Jacobian ∂W/∂p is constant, evaluated at (x, 0) → "simple".
Compositional Algorithm
1. Warp I with W(x;p) → I(W(x;p))
2. Compute error image T(x) - I(W(x;p))
3. Warp gradient of I to compute ∇I
4. Evaluate Jacobian ∂W/∂p
5. Compute Hessian H
6. Compute Δp
7. Update W(x;p) ← W(x;p) ∘ W(x;Δp)
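A minimal sketch of step 7 for affine warps, representing each warp as a 3×3 homogeneous matrix (an assumption for illustration; names are not from the slides):

```python
# Compositional update W(x;p) <- W(x;p) o W(x;dp) for the 6-parameter affine
# warp, done as a matrix product of homogeneous 3x3 warp matrices.
import numpy as np

def affine_matrix(p):
    return np.array([[1 + p[0], p[2],     p[4]],
                     [p[1],     1 + p[3], p[5]],
                     [0.0,      0.0,      1.0 ]])

def compose_affine(p, dp):
    # (W(.;p) o W(.;dp))(x) = W(W(x;dp); p)  =>  A(p_new) = A(p) @ A(dp)
    M = affine_matrix(p) @ affine_matrix(dp)
    return np.array([M[0, 0] - 1, M[1, 0], M[0, 1], M[1, 1] - 1, M[0, 2], M[1, 2]])
```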
Inverse Compositional
• Why compute updates on I?
• Can we reverse the roles of the images?
• Yes!
• [Baker, Matthews, CMU-RI-TR-01-03]: proof that the algorithms take the same steps (to first order)
Inverse Compositional
• Forwards compositional: the incremental warp W(x;Δp) is computed on the image side and composed as W(x;p) ∘ W(x;Δp)
• Inverse compositional: the incremental warp is computed on the template T(x) and composed inverted, W(x;p) ∘ W(x;Δp)^-1
(Figures: both map the image I(x) onto the template T(x) through I(W(x;p)))
Inverse Compositional
• Minimise, with respect to Δp,
    Σ_x [ T(W(x;Δp)) - I(W(x;p)) ]^2
• Solution:
    Δp = H^-1 Σ_x [ ∇T (∂W/∂p) ]^T [ I(W(x;p)) - T(x) ]
• Update: W(x;p) ← W(x;p) ∘ W(x;Δp)^-1
Inverse Compositional
• Jacobian is constant - evaluated at (x, 0)
• Gradient of template is constant
• Hessian is constant
• Can pre-compute everything but error image!
Inverse Compositional Algorithm
1. Warp I with W(x;p) → I(W(x;p))
2. Compute error image T(x) - I(W(x;p))
3. Evaluate gradient ∇T of the template (pre-computed)
4. Evaluate Jacobian ∂W/∂p at (x; 0) (pre-computed)
5. Compute Hessian H (pre-computed)
6. Compute Δp
7. Update W(x;p) ← W(x;p) ∘ W(x;Δp)^-1
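A minimal sketch of the inverse compositional loop for a pure translation warp, where W(x;p) ∘ W(x;Δp)^-1 reduces to p ← p - Δp (an assumption for brevity; names are illustrative). Note that everything except the error image is pre-computed:

```python
# Inverse compositional alignment for a translation warp W(x;p) = x + p.
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_compositional_translation(I, T, p=np.zeros(2), n_iters=50, tol=1e-4):
    T = T.astype(float)
    ys, xs = np.mgrid[0:T.shape[0], 0:T.shape[1]]

    # Pre-compute: template gradient, steepest-descent images, inverse Hessian.
    Ty, Tx = np.gradient(T)                         # gradient of the template
    sd = np.stack([Tx.ravel(), Ty.ravel()], axis=1) # Jacobian is the identity
    H_inv = np.linalg.inv(sd.T @ sd)

    for _ in range(n_iters):
        # Per iteration: only the error image changes.
        Iw = map_coordinates(I.astype(float), [ys + p[1], xs + p[0]], order=1)
        err = Iw - T                                # I(W(x;p)) - T(x)
        dp = H_inv @ (sd.T @ err.ravel())
        p = p - dp                                  # compose with the inverse warp
        if np.linalg.norm(dp) < tol:
            break
    return p
```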
Framework
• Baker and Matthews 2003: formulated framework, proved equivalence
Algorithm               | Can be applied to  | Efficient? | Authors
Forwards Additive       | Any                | No         | Lucas, Kanade
Forwards Compositional  | Any semi-group     | No         | Shum, Szeliski
Inverse Compositional   | Any group          | Yes        | Baker, Matthews
Inverse Additive        | Simple linear 2D+  | Yes        | Hager, Belhumeur
Example
Reprise… what have we solved for?
Lucas-Kanade Algorithm
Criterion: min_p Σ_x [ I(W(x;p)) - T(x) ]^2