Uncertainty in epipolar geometry

George Terzakis
Center for Robotics and Neural Systems, Plymouth University, Plymouth, UK
Georgios.Terzakis@plymouth.ac.uk

Technical Report No. Plymouth University, April 2014

Abstract

A method is presented for propagating uncertainty from point projections in two-view geometry to the estimated epipolar constraint. The idea is to use the implicit function theorem to obtain the derivative of the fundamental/essential matrix with respect to the data points. From this, a covariance matrix for the baseline and orientation parameters can be obtained.

1. Uncertainty of the fundamental matrix

We assume that the fundamental matrix is calculated by minimizing the classic 8-point algorithm cost function:

\[ g(f, d_1, \dots, d_N) = (Af - b)^T (Af - b) \tag{1} \]

where N is the number of data points, d_i = [x_1^(i), y_1^(i), x_2^(i), y_2^(i)]^T is the vector of coordinates of the i-th pair of matched points, f ∈ ℝ^8 holds the elements of the fundamental matrix with its rows stacked in a single column vector (the 9th element is set to 1, hence all entries of b are −1), and A is a data matrix of the form

\[ A = \begin{bmatrix} a_1^T \\ \vdots \\ a_N^T \end{bmatrix} \tag{2} \]

The vector a_i ∈ ℝ^8, i ∈ {1, …, N}, is the so-called measurement vector (Hartley, Zisserman et al. 2003) that corresponds to the i-th pair of matched points with respect to the fundamental matrix (superscripts are dropped for simplicity):

\[ a_i = [x_2 x_1,\; x_2 y_1,\; x_2,\; y_2 x_1,\; y_2 y_1,\; y_2,\; x_1,\; y_1]^T \tag{3} \]

1.1 Derivatives of the fundamental matrix as an implicit function

Since the cost function (1) attains a minimum at f = f_0 when the data are instantiated as d_1 = ξ_1, …, d_N = ξ_N, one may regard f as a function of u = [d_1^T … d_N^T]^T in some open neighborhood of ξ_1, …, ξ_N.
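For concreteness, the linear system (1)–(3) can be assembled and solved in a few lines. The sketch below is an illustration of mine, not part of the report; the helper names are hypothetical. It builds the measurement vectors of (3), stacks them into A as in (2), and minimizes (1) in the least-squares sense.

```python
import numpy as np

def measurement_vector(d):
    # Eq. (3): d = [x1, y1, x2, y2] for one pair of matched points
    x1, y1, x2, y2 = d
    return np.array([x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1])

def eight_point(D):
    # Minimize g(f, d_1, ..., d_N) = (A f - b)^T (A f - b)   (eq. 1),
    # with the 9th element of f fixed to 1 so that every entry of b is -1.
    A = np.array([measurement_vector(d) for d in D])   # eq. (2)
    b = -np.ones(len(D))
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f, A, b
```

With at least eight matches in general position A has full column rank, so the minimizer f of (1) is unique.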
Thus, the following holds:

\[ \frac{\partial g}{\partial f}\,\frac{\partial f}{\partial u} = -\frac{\partial g}{\partial u} \tag{4} \]

One can therefore obtain the sub-matrix of ∂f/∂u associated with d_i as follows:

\[ \frac{\partial f}{\partial d_i} = -\,\mathrm{diag}\left\{\frac{\partial g}{\partial f}\right\}^{-1}\left(\mathbf{1}_8 \otimes \frac{\partial g}{\partial d_i}\right) \tag{5} \]

where diag{∂g/∂f} is the diagonal matrix with the elements of ∂g/∂f on its diagonal, ⊗ denotes the Kronecker product and 1_8 ∈ ℝ^8 is the vector with all of its elements equal to 1.

The derivative ∂g/∂f is easily calculated:

\[ \frac{\partial g}{\partial f} = 2A^T(Af - b) \tag{6} \]

Similarly, the derivative ∂g/∂d_i can be computed from the measurement vector a_i:

\[ \frac{\partial g}{\partial d_i} = 2\left(a_i^T f - b_i\right) f^T \frac{\partial a_i}{\partial d_i} \tag{7} \]

where ∂a_i/∂d_i ∈ ℝ^{8×4} is

\[ \frac{\partial a_i}{\partial d_i} =
\begin{bmatrix}
x_2 & 0 & x_1 & 0 \\
0 & x_2 & y_1 & 0 \\
0 & 0 & 1 & 0 \\
y_2 & 0 & 0 & x_1 \\
0 & y_2 & 0 & y_1 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix} \tag{8} \]

The derivative of f with respect to the data u, ∂f/∂u ∈ ℝ^{8×4N}, can therefore be assembled from (5), (6), (7) and (8) as follows:

\[ \frac{\partial f}{\partial u} = -2\,\mathrm{diag}\{2A^T(Af-b)\}^{-1}
\begin{bmatrix}
\mathbf{1}_8 \otimes \left((a_1^T f - b_1)\, f^T \dfrac{\partial a_1}{\partial d_1}\right) & \cdots & \mathbf{1}_8 \otimes \left((a_N^T f - b_N)\, f^T \dfrac{\partial a_N}{\partial d_N}\right)
\end{bmatrix} \tag{9} \]

1.2 Covariance of the fundamental matrix

It is now easy to propagate uncertainty from u to the elements of the fundamental matrix:

\[ \Sigma_f = \frac{\partial f}{\partial u}\, \Sigma_u \left(\frac{\partial f}{\partial u}\right)^T \tag{10} \]

where Σ_u ∈ ℝ^{4N×4N} is the covariance matrix of u and Σ_f ∈ ℝ^{8×8} is the covariance matrix of the fundamental vector.

2. Uncertainty of the essential matrix

Estimating the covariance of the essential matrix is a process identical to the one described in Section 1. The essential matrix is estimated from the Euclidean homogeneous coordinates of the data points; it is therefore sufficient to express the coordinates of the data points in calibrated (Euclidean) space. Equations (5)–(10), which convey uncertainty from u to the fundamental vector f, can be used verbatim for the essential vector e ∈ ℝ^8, provided we substitute d_i and a_i with their counterparts in calibrated (Euclidean) space, denoted d_i^E and a_i^E.
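The propagation (9)–(10) can be cross-checked numerically. The sketch below is my own illustration, not part of the report: it replaces the closed-form Jacobian of (9) with central finite differences of the least-squares estimate f(u) and then applies (10); all helper names are hypothetical.

```python
import numpy as np

def f_of_u(u):
    # Least-squares minimizer of eq. (1) as a function of the stacked data
    # u = [d_1^T ... d_N^T]^T, each d_i = [x1, y1, x2, y2].
    D = u.reshape(-1, 4)
    A = np.array([[d[2]*d[0], d[2]*d[1], d[2], d[3]*d[0], d[3]*d[1], d[3], d[0], d[1]]
                  for d in D])                       # measurement vectors, eq. (3)
    b = -np.ones(len(D))
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

def df_du_fd(u, eps=1e-6):
    # Central-difference stand-in for the 8 x 4N Jacobian of eq. (9)
    J = np.zeros((8, u.size))
    for j in range(u.size):
        up, um = u.copy(), u.copy()
        up[j] += eps
        um[j] -= eps
        J[:, j] = (f_of_u(up) - f_of_u(um)) / (2 * eps)
    return J

def covariance_f(u, Sigma_u):
    # Eq. (10): Sigma_f = (df/du) Sigma_u (df/du)^T
    J = df_du_fd(u)
    return J @ Sigma_u @ J.T
```

The resulting Σ_f is symmetric and positive semi-definite by construction, which makes for a cheap sanity check of any closed-form implementation of (9).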
Thus, if K is the matrix of intrinsic camera parameters, then

\[ d_i^E = [x_1^E,\; y_1^E,\; x_2^E,\; y_2^E]^T = (I_2 \otimes K^{-1})\, d_i \tag{11} \]

(with each point taken in homogeneous coordinates), and

\[ a_i^E = [x_2^E x_1^E,\; x_2^E y_1^E,\; x_2^E,\; y_2^E x_1^E,\; y_2^E y_1^E,\; y_2^E,\; x_1^E,\; y_1^E]^T \tag{12} \]

The covariance matrix of the essential vector can therefore be calculated using equations (5)–(10).

3. Covariance matrix of orientation and baseline parameters

Having recovered a rigid transformation from the essential matrix, the covariance matrix of the orientation parameters (e.g., Euler angles, axis-angle parameters, etc.) and the baseline can be obtained by regarding the parameter vector as an implicit function of the essential matrix (Faugeras, Luong et al. 2004). Thus, uncertainty can be propagated linearly in some open neighborhood of the essential vector (the elements of the essential matrix stacked in a 9-vector) and the parameters (orientation parameters and baseline as a 6-vector).

Consider a function h(e, ξ): ℝ^8 × ℝ^6 → ℝ^9, where e ∈ ℝ^8 is the essential vector and ξ = [η^T b^T]^T, with η = η(e) ∈ ℝ^3 and b = b(e) ∈ ℝ^3 the orientation and baseline vectors obtained from e, such that

\[ h(e, \xi) = \begin{bmatrix} e \\ 1 \end{bmatrix} - v(\xi) = 0 \quad \forall\, e \in \mathbb{R}^8 \tag{13} \]

where v(ξ) ∈ ℝ^9 contains the rows of the product R^T [−b]_× stacked in a 9-vector. With r_i denoting the i-th column of R, it is easy to verify that

\[ v(\xi) = \begin{bmatrix} b \times r_1 \\ b \times r_2 \\ b \times r_3 \end{bmatrix} = \begin{bmatrix} [b]_\times r_1 \\ [b]_\times r_2 \\ [b]_\times r_3 \end{bmatrix} \tag{14} \]

The derivative of ξ with respect to e is obtained implicitly from the following relationship between the derivatives of h:

\[ \frac{\partial h}{\partial e} + \frac{\partial h}{\partial \xi}\frac{\partial \xi}{\partial e} = 0
\;\Leftrightarrow\;
\frac{\partial h}{\partial \xi}\frac{\partial \xi}{\partial e} = -\frac{\partial h}{\partial e} \tag{15} \]

Multiplying eq. (15) from the left by (∂h/∂ξ)^T yields:

\[ \left(\frac{\partial h}{\partial \xi}\right)^T \frac{\partial h}{\partial \xi}\, \frac{\partial \xi}{\partial e} = -\left(\frac{\partial h}{\partial \xi}\right)^T \frac{\partial h}{\partial e} \tag{16} \]

Assuming that rank(∂h/∂ξ) = 6, we may obtain ∂ξ/∂e:

\[ \frac{\partial \xi}{\partial e} = -\left(\left(\frac{\partial h}{\partial \xi}\right)^T \frac{\partial h}{\partial \xi}\right)^{-1} \left(\frac{\partial h}{\partial \xi}\right)^T \frac{\partial h}{\partial e} \tag{17} \]

where clearly ∂h/∂e = I_{9×8}, the 9×8 matrix whose upper 8×8 block is the identity and whose last row is zero.
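Equation (17) can likewise be exercised numerically. In the sketch below (my own illustration, not from the report) the orientation η is taken as an axis-angle vector, v(ξ) follows (14), ∂h/∂ξ is approximated by central differences, and ∂ξ/∂e is recovered from the normal equations (16); `rodrigues` and the other helper names are hypothetical.

```python
import numpy as np

def skew(v):
    # [v]_x, so that skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(eta):
    # Axis-angle -> rotation matrix (one possible choice of orientation parameters)
    th = np.linalg.norm(eta)
    if th < 1e-12:
        return np.eye(3)
    S = skew(eta / th)
    return np.eye(3) + np.sin(th) * S + (1.0 - np.cos(th)) * (S @ S)

def v_of_xi(xi):
    # Eq. (14): stack b x r_i, with r_i the columns of R(eta)
    eta, b = xi[:3], xi[3:]
    R = rodrigues(eta)
    return np.concatenate([np.cross(b, R[:, i]) for i in range(3)])

def dxi_de(xi, eps=1e-6):
    # Eq. (17): dxi/de = -((dh/dxi)^T dh/dxi)^{-1} (dh/dxi)^T dh/de,
    # with dh/dxi = -dv/dxi (from eq. 13) taken by central differences
    # and dh/de = I_{9x8} (identity over the first eight rows, zero last row).
    Jv = np.zeros((9, 6))
    for j in range(6):
        xp, xm = xi.copy(), xi.copy()
        xp[j] += eps
        xm[j] -= eps
        Jv[:, j] = (v_of_xi(xp) - v_of_xi(xm)) / (2 * eps)
    Jh = -Jv
    dh_de = np.vstack([np.eye(8), np.zeros((1, 8))])
    return -np.linalg.solve(Jh.T @ Jh, Jh.T @ dh_de)
```

The rank assumption rank(∂h/∂ξ) = 6 holds for generic ξ with b ≠ 0; at b = 0 the normal equations become singular.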
3.1 Computing the derivative of h with respect to ξ

The derivative of v(ξ) is given by

\[ \frac{\partial v}{\partial \xi} =
\begin{bmatrix}
\dfrac{\partial([b]_\times r_1)}{\partial \eta} & \dfrac{\partial([b]_\times r_1)}{\partial b} \\
\dfrac{\partial([b]_\times r_2)}{\partial \eta} & \dfrac{\partial([b]_\times r_2)}{\partial b} \\
\dfrac{\partial([b]_\times r_3)}{\partial \eta} & \dfrac{\partial([b]_\times r_3)}{\partial b}
\end{bmatrix}
=
\begin{bmatrix}
[b]_\times \dfrac{\partial r_1}{\partial \eta} & \Delta_b^{[b]_\times}\langle r_1 \rangle \\
[b]_\times \dfrac{\partial r_2}{\partial \eta} & \Delta_b^{[b]_\times}\langle r_2 \rangle \\
[b]_\times \dfrac{\partial r_3}{\partial \eta} & \Delta_b^{[b]_\times}\langle r_3 \rangle
\end{bmatrix} \tag{18} \]

where the notation Δ_b^A⟨u⟩, with A = A(b) ∈ ℝ^{m×n}, b = (b_1, b_2, …, b_k)^T ∈ ℝ^k and u ∈ ℝ^n, denotes the matrix

\[ \Delta_b^A\langle u\rangle = \begin{bmatrix} \dfrac{\partial A}{\partial b_1}u & \dfrac{\partial A}{\partial b_2}u & \cdots & \dfrac{\partial A}{\partial b_k}u \end{bmatrix} \tag{19} \]

From (13) and (18) it follows that

\[ \frac{\partial h}{\partial \xi} = -
\begin{bmatrix}
[b]_\times \dfrac{\partial r_1}{\partial \eta} & \Delta_b^{[b]_\times}\langle r_1 \rangle \\
[b]_\times \dfrac{\partial r_2}{\partial \eta} & \Delta_b^{[b]_\times}\langle r_2 \rangle \\
[b]_\times \dfrac{\partial r_3}{\partial \eta} & \Delta_b^{[b]_\times}\langle r_3 \rangle
\end{bmatrix} \tag{20} \]

3.2 Covariance matrix of the pose vector

It is now easy to compute the covariance matrix of ξ from the covariance matrix of the essential vector e using (17) and (20):

\[ \Sigma_\xi = \frac{\partial \xi}{\partial e}\, \Sigma_e \left(\frac{\partial \xi}{\partial e}\right)^T \tag{21} \]

References

Faugeras, O., Luong, Q.-T., et al. (2004). The Geometry of Multiple Images: The Laws That Govern the Formation of Multiple Images of a Scene and Some of Their Applications. The MIT Press.
Hartley, R. and Zisserman, A. (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.
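A closing observation on (19), useful when implementing (18)–(21): for A = [b]_× the operator has the closed form Δ_b^{[b]_×}⟨u⟩ = −[u]_×, since ∂(b × u)/∂b_k = e_k × u = −u × e_k. The sketch below (my own illustration with hypothetical helper names, not from the report) verifies this identity and applies the propagation (21).

```python
import numpy as np

def skew(v):
    # [v]_x : skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def delta_skew(u):
    # Eq. (19) with A = [b]_x : the k-th column is (d[b]_x / db_k) u = e_k x u,
    # the derivatives d[b]_x / db_k being the constant skews of the basis vectors.
    return np.stack([np.cross(np.eye(3)[k], u) for k in range(3)], axis=1)

def covariance_pose(J, Sigma_e):
    # Eq. (21): Sigma_xi = (dxi/de) Sigma_e (dxi/de)^T, J being the 6x8 matrix of (17)
    return J @ Sigma_e @ J.T
```

In particular Δ_b^{[b]_×}⟨r_i⟩ = −[r_i]_×, so the right-hand blocks of (18) and (20) require no differentiation at all.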