A New Model for Measuring Object Shape Using Non-Collimated Fringe-Pattern Projections
B A Rajoub, D R Burton, M J Lalor, S A Karout
General Engineering Research Institute, Liverpool John Moores University, UK
bashar@ieee.org
Abstract. Successful measurement of object shape using structured-light optical techniques depends
on various factors, most importantly on the final stage of relating the measured unwrapped phase
distribution to the object height. Although various phase-to-height models exist in the literature, the
different approaches employ numerous assumptions and simplifications which can render the
derived model inaccurate or very sensitive to parameter variations. This paper presents a new
approach for deriving the true analytic phase-to-height model for non-collimated projected fringes.
The 3-D spatial phase distribution of the pattern existing over the object surface is first evaluated
and then transformed to its 2-D version via the camera's 2-D image-mapping transformation.
The phase information stored in the camera images is therefore expressed in terms of the camera,
projector and object variables. Complete measurement requires evaluation of the x, y, and z
coordinates of the measured samples over the object surface. Unlike existing approaches, the proposed
approach is universal in the sense that it is not restricted to certain optical arrangements. It will
therefore be very useful in helping us understand the various effects of the system parameters on the
measurement outcome.
1. Introduction
A wide range of structured light optical techniques for 3-D measurement of object shape based on a
variety of projection and imaging principles have been discussed in the literature [1-11]. In this paper
we are mainly interested in structured light systems based on non-collimated fringe-pattern projections
using digital projectors.
It is well known that when a periodic fringe pattern is projected onto an object surface and observed
from a different viewing angle, a phase-modulated pattern is formed. The amount of phase
modulation is obviously a function of the object height variations. Therefore, in order to extract the
surface height the phase information must be extracted (demodulated) from the intensity images and
then related to the required object height.
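To make the demodulation step concrete, the sketch below extracts the phase of a synthetic one-dimensional fringe signal using the Fourier-transform method of [1]; all signal parameters (carrier frequency, modulation depth) are illustrative, not taken from this paper.

```python
import numpy as np

# One row of a synthetic fringe image: I = a + b*cos(2*pi*f0*x + phi(x)).
N = 1024
x = np.arange(N) / N
f0 = 32                                     # carrier frequency (cycles per row)
phi_true = 1.5 * np.sin(2 * np.pi * x)      # height-induced phase modulation
I = 1.0 + 0.5 * np.cos(2 * np.pi * f0 * x + phi_true)

# Fourier-transform method: keep only the positive-frequency lobe, which
# yields (approximately) the analytic signal 0.25*exp(i*(2*pi*f0*x + phi)).
S = np.fft.fft(I)
H = np.zeros(N)
H[1:N // 2] = 2.0
analytic = np.fft.ifft(S * H)

# Remove the carrier, unwrap, and align the arbitrary constant offset.
phase = np.angle(analytic) - 2 * np.pi * f0 * x
phase = np.unwrap(phase)
phase -= phase.mean() - phi_true.mean()
```

After these steps `phase` tracks `phi_true` closely; in a real system the recovered phase would then be fed into the phase-to-height model derived below.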
Various approaches have been adopted in the literature in order to relate the fringe phase to the
surface height. Usually, the phase-to-height model is obtained by relating the fringe displacements to
the equivalent phase differences. Since the fringe displacements are proportional to the height
variations the model can be derived in a straightforward manner.
In this paper, we are interested in finding a more generic approach that could be used to derive the
true analytic phase-to-height model for non-collimated fringe pattern projections and that can work
with arbitrary optical arrangements.
This paper is organized as follows: Section 2 presents the proposed model, where the mathematical
derivation of the true phase distribution in the object's world coordinates as well as in the camera
image coordinates for arbitrary viewing angles is presented. Experimental results are presented in
Section 3. Finally, Section 4 presents the discussion and conclusions.
2. Proposed Model
2.1. 2D based approach
In this section we present the derivation of our model based on 2D geometry. We have previously used
the presented approach for collimated projection applications and showed that the approach is
consistent [12]. Figure 1 shows the geometrical arrangement of a typical non-collimated projection of
sinusoidal fringes. The line P is called the projector's phase line or phase axis. It lies in the yz-plane,
making an angle θ with the y-axis. A lens with a focal point located at (y_f, z_f) is placed in front of the
phase axis to produce non-collimated projections of structured light. For non-collimated projections,
the emitted light rays pass through the projector's lens centre, where each ray has a phase value
depending on its source location on P. The phase received by any point C(y_i, z_i) on the object surface can
be calculated by tracing back the light ray illuminating this point to its source location on P, i.e., point
B in Figure 1 with coordinates (y′, z′). From the figure we can write the equation for the phase axis P
as

$$z' = m(y' - y_0) + z_0 \tag{1}$$

where (y_0, z_0) is the projection centre and m = \tan\theta is the slope of P.
From triangle ABC we can write

$$\frac{z_f - z_i}{y_i - y_f} = \frac{z' - z_i}{y_i - y'} \tag{2}$$

from which

$$y' = \frac{z' - z_i}{z_f - z_i}\,(y_f - y_i) + y_i \tag{3}$$
Equations (1) and (3) are linearly independent and can be solved for y′ and z′ to get

$$y' = \frac{(z_0 - my_0 - z_i)(y_f - y_i) + y_i(z_f - z_i)}{(z_f - z_i) - m(y_f - y_i)} \tag{4}$$

$$z' = \frac{m(z_i y_f - y_i z_f) + my_0(z_f - z_i) + z_0(z_i - z_f)}{(z_i - z_f) + m(y_f - y_i)} \tag{5}$$
The phase value of the object point C(y_i, z_i) is the same as the phase of point B(y′, z′) on the phase axis;
hence, substituting (4), we can write

$$\phi(y_i, z_i) = \Delta\phi\,\frac{y_i(z_f - z_0) + z_i(y_0 - y_f) + (z_0 y_f - y_0 z_f)}{(y_f - y_i)\tan\theta + z_i - z_f} \tag{6}$$

where

$$\Delta\phi = -\frac{2\pi f_0}{\cos\theta} \tag{7}$$

$$f_0 = 1/T_0 \tag{8}$$
Equation (6) gives us the actual phase value for any point (y_i, z_i) on the object surface.
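As a numerical sanity check of (1)-(6), the sketch below traces a ray from an object point back to the phase axis and compares the resulting phase with the closed form; all geometry values are hypothetical, and the sign of Δφ is chosen so that the two computations agree with the reconstruction used here.

```python
import math

# Hypothetical 2-D arrangement (illustrative values, not from the paper).
theta = math.radians(30.0)          # angle between phase axis P and the y-axis
m = math.tan(theta)                 # slope of P
y0, z0 = 0.0, 0.0                   # projection centre on P
yf, zf = 0.1, -0.2                  # projector lens focal point
T0 = 0.01                           # fringe period along P
f0 = 1.0 / T0

def source_point(yi, zi):
    """Intersect the ray C(yi, zi) -> F(yf, zf) with the phase axis P (eqs 1, 4)."""
    D = (zf - zi) - m * (yf - yi)
    y = ((z0 - m * y0 - zi) * (yf - yi) + yi * (zf - zi)) / D
    z = m * (y - y0) + z0
    return y, z

def phase_closed_form(yi, zi):
    """Eq (6): phase of the object point (yi, zi)."""
    dphi = -2.0 * math.pi * f0 / math.cos(theta)
    num = yi * (zf - z0) + zi * (y0 - yf) + (z0 * yf - y0 * zf)
    den = (yf - yi) * math.tan(theta) + zi - zf
    return dphi * num / den

yi, zi = 0.05, 0.03
yB, zB = source_point(yi, zi)
# The phase along P is 2*pi*f0 times the distance from the projection centre.
phi_traced = 2.0 * math.pi * f0 * (yB - y0) / math.cos(theta)
assert abs(phi_traced - phase_closed_form(yi, zi)) < 1e-8
```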
If a camera is used to capture the intensity variations over the object surface, the resulting images
will contain side effects due to the camera operation, and the phase-to-height relation in (6) needs to be
modified in order to relate the phase distribution of the camera images to the object height. This is
obvious since the camera actually stores a mapped version of the actual image. Without loss of
generality, assuming a pinhole perspective image mapping, a point in the world coordinates can be
mapped to its image in the camera coordinates using

$$y_i = Y_0 + \frac{Y_i - Y_0}{Z_0 - Z_f}\,(Z_f - z_i) \tag{9}$$
The registered phase at the camera image point (Y_i, Z_0) is thus equal to the phase of the mapped 3-D object
point (y_i, z_i). Substituting y_i from (9) in (6) results in

$$\phi(Y_i, Z_0) = \Delta\phi\;\frac{\left[Y_0 + \dfrac{Y_i - Y_0}{Z_0 - Z_f}(Z_f - z_i)\right](z_f - z_0) + z_i(y_0 - y_f) + (z_0 y_f - y_0 z_f)}{\left[y_f - Y_0 - \dfrac{Y_i - Y_0}{Z_0 - Z_f}(Z_f - z_i)\right]\tan\theta + z_i - z_f} \tag{10}$$
The height function in terms of the camera image coordinates is thus

$$z_i = \frac{\Delta\phi\left[Y_0(z_f - z_0) + \dfrac{Y_i - Y_0}{Z_0 - Z_f}Z_f(z_f - z_0) + (z_0 y_f - y_0 z_f)\right] - \phi(Y_i, Z_0)\left[y_f\tan\theta - Y_0\tan\theta - \dfrac{Y_i - Y_0}{Z_0 - Z_f}Z_f\tan\theta - z_f\right]}{\phi(Y_i, Z_0)\left[\dfrac{Y_i - Y_0}{Z_0 - Z_f}\tan\theta + 1\right] + \Delta\phi\left[\dfrac{Y_i - Y_0}{Z_0 - Z_f}(z_f - z_0) - (y_0 - y_f)\right]} \tag{11}$$
This equation relates the phase at any pixel (Y_i, Z_0) in the camera image to the object height through the
projection and viewing parameters. It can be shown that if the camera and projector are allowed to
rotate arbitrarily, the phase-to-height relationship can only be obtained in a constrained-solution
sense. This makes 2-D modelling approaches of limited use, as they provide only partial information;
in addition, we need to consider other sources that might affect the model, including lens aberrations
and optimised system constants. In the following section we present a more generalised 3-D approach.
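Before moving on, the 2-D model can be exercised numerically: given a world point, compute its image coordinate via (9) and its phase via (6), then recover the height from the registered phase as in (11). All parameter values below are hypothetical.

```python
import math

# Illustrative system constants (hypothetical, not the paper's calibration).
theta = math.radians(25.0)
y0, z0 = 0.0, 0.0        # projection centre
yf, zf = 0.15, -0.30     # projector lens focal point
Y0, Z0 = 0.40, 0.50      # camera axis point / image plane height
Zf = 0.20                # camera lens focal point height
f0 = 100.0
dphi = -2.0 * math.pi * f0 / math.cos(theta)

def phase_3d(yi, zi):
    """Eq (6): phase over the object surface."""
    num = yi * (zf - z0) + zi * (y0 - yf) + (z0 * yf - y0 * zf)
    den = (yf - yi) * math.tan(theta) + zi - zf
    return dphi * num / den

def map_to_image(yi, zi):
    """Eq (9) inverted: world point (yi, zi) -> image coordinate Yi."""
    return Y0 + (yi - Y0) * (Z0 - Zf) / (Zf - zi)

def height_from_phase(Yi, phi):
    """Eq (11): solve the registered phase for the object height zi."""
    k = (Yi - Y0) / (Z0 - Zf)
    t = math.tan(theta)
    num = dphi * (Y0 * (zf - z0) + k * Zf * (zf - z0) + (z0 * yf - y0 * zf)) \
          - phi * (yf * t - Y0 * t - k * Zf * t - zf)
    den = phi * (k * t + 1.0) + dphi * (k * (zf - z0) - (y0 - yf))
    return num / den

# Round trip: world point -> image pixel and phase -> recovered height.
yi, zi = 0.05, 0.02
Yi = map_to_image(yi, zi)
phi = phase_3d(yi, zi)
assert abs(height_from_phase(Yi, phi) - zi) < 1e-9
```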
2.2. 3D based approach
In order to generalise the solution we need to allow a plane with arbitrary orientation and centre
location as well. Let Π be a plane lying in the xz-plane, whose centre is at the origin and whose
unit normal is \hat{\mathbf{n}} = [0\;1\;0]^T. The equation of this plane is \hat{\mathbf{n}}\cdot\mathbf{r} = 0. If this plane is rotated clockwise
by arbitrary angles α, β, and γ around the coordinate axes x, z, and y respectively, and its centre is then
translated by r_0, we can represent any arbitrary plane mathematically by utilising the appropriate
geometric transformations. The plane rotation and translation result in a new normal vector n and
plane centre r_0. The geometric transformation associated with this operation is generally of the form

$$\mathbf{R} = \begin{bmatrix} \lambda_1 & \mu_1 & \nu_1 & x_0 \\ \lambda_2 & \mu_2 & \nu_2 & y_0 \\ \lambda_3 & \mu_3 & \nu_3 & z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{12}$$
Therefore, the new equation of the plane becomes \mathbf{n}\cdot(\mathbf{r} - \mathbf{r}_0) = 0, where \mathbf{n} = \mathbf{R}\hat{\mathbf{n}} = [\mu_1\;\mu_2\;\mu_3\;0]^T. A
ray illuminating a point r_i on the object surface passes through two distinct points: the lens centre
F(x_f, y_f, z_f) and the illuminated point r_i(x_i, y_i, z_i). These rays can thus be represented by the parametric
equation

$$\mathbf{r} = \mathbf{r}_i + s(\mathbf{r}_i - \mathbf{F}) = \left[\,x_i + s(x_i - x_f)\quad y_i + s(y_i - y_f)\quad z_i + s(z_i - z_f)\quad 1\,\right]^T \tag{13}$$
The source point can be traced back by solving the equation that describes the intersection between
this ray and the projection plane whose normal is n. The intersection point is given by

$$\mathbf{r}' = \mathbf{r}_i - (\mathbf{r}_i - \mathbf{F})\,\frac{\mathbf{n}\cdot(\mathbf{r}_i - \mathbf{r}_0)}{\mathbf{n}\cdot(\mathbf{r}_i - \mathbf{F})} \tag{14}$$
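Equation (14) is a standard ray-plane intersection and can be sketched directly; the plane, centre, and ray values below are illustrative.

```python
import numpy as np

# Hypothetical projection plane and ray (illustrative values).
n = np.array([0.0, 0.8, 0.6])        # rotated plane normal
r0 = np.array([0.0, 0.5, 0.2])       # translated plane centre
F = np.array([0.1, -0.4, 0.3])       # projector lens centre
ri = np.array([0.2, 0.6, -0.1])      # illuminated object point

def trace_back(ri, F, n, r0):
    """Eq (14): intersect the ray through ri and F with the plane n.(r - r0) = 0."""
    return ri - (ri - F) * (n @ (ri - r0)) / (n @ (ri - F))

r = trace_back(ri, F, n, r0)
assert abs(n @ (r - r0)) < 1e-12          # the source point lies on the plane
assert np.allclose(np.cross(r - ri, F - ri), 0.0)   # and on the line through ri and F
```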
Figure 1. The problem of relating phase to height in 2D: the phase axis P through the projection centre p(y_0, z_0); the projector focal point f(y_f, z_f); the traced-back source point B(y′, z′); the object point C(y_i, z_i) on the object surface; the camera image point (Y_0, Z_0) and camera focal point (Y_0, Z_f); and the reference plane.
Expanding (14) in terms of x_i, y_i and z_i we get

$$\mathbf{r}' = \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{\alpha_{1p}x_i + \alpha_{2p}y_i + \alpha_{3p}z_i + \alpha_{4p}}{\mu_{1p}^2 + \mu_{2p}^2 + \mu_{3p}^2} \\[2ex] \dfrac{\beta_{1p}x_i + \beta_{2p}y_i + \beta_{3p}z_i + \beta_{4p}}{\mu_{1p}^2 + \mu_{2p}^2 + \mu_{3p}^2} \\[2ex] \dfrac{\gamma_{1p}x_i + \gamma_{2p}y_i + \gamma_{3p}z_i + \gamma_{4p}}{\mu_{1p}^2 + \mu_{2p}^2 + \mu_{3p}^2} \\[2ex] 1 \end{bmatrix} \tag{15}$$
where the α's, β's, γ's, and μ's are the projection constants. It should be noted that, since the
physical phase in the projector's LCD is independent of the grating orientation, we can reduce the
complexity of the projector's source point by reversing the rotational and translational geometric
transformations involved in (12). This can be done by using

$$\mathbf{R}^{-1} = \begin{bmatrix} \lambda_{1p} & \lambda_{2p} & \lambda_{3p} & -(\lambda_{1p}x_{0p} + \lambda_{2p}y_{0p} + \lambda_{3p}z_{0p}) \\ \mu_{1p} & \mu_{2p} & \mu_{3p} & -(\mu_{1p}x_{0p} + \mu_{2p}y_{0p} + \mu_{3p}z_{0p}) \\ \nu_{1p} & \nu_{2p} & \nu_{3p} & -(\nu_{1p}x_{0p} + \nu_{2p}y_{0p} + \nu_{3p}z_{0p}) \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{16}$$
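The transformation (12) and its reversal (16) can be sanity-checked numerically; the Euler rotation order used below (x, then z, then y) is an assumed composition for illustration.

```python
import numpy as np

# Rotations about x, z, y by illustrative angles, plus a translation (eq 12).
a, b, g = 0.3, -0.2, 0.5             # alpha, beta, gamma (hypothetical)
t = np.array([0.1, -0.4, 0.25])      # plane-centre translation r0

Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
Rz = np.array([[np.cos(b), -np.sin(b), 0], [np.sin(b), np.cos(b), 0], [0, 0, 1]])
Ry = np.array([[np.cos(g), 0, np.sin(g)], [0, 1, 0], [-np.sin(g), 0, np.cos(g)]])
rot = Ry @ Rz @ Rx                   # one possible composition order

R = np.eye(4)
R[:3, :3] = rot
R[:3, 3] = t

# Eq (16): for a rigid transform the inverse is [rot.T | -rot.T @ t],
# which reverses the rotation and translation exactly.
R_inv = np.eye(4)
R_inv[:3, :3] = rot.T
R_inv[:3, 3] = -rot.T @ t
assert np.allclose(R_inv @ R, np.eye(4))

# Normalising a transformed grating point puts it back in the xz plane (eq 17).
p = np.array([0.7, 0.0, -0.3, 1.0])  # a point of the unrotated grating plane
p_back = R_inv @ (R @ p)
assert abs(p_back[1]) < 1e-12
```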
The new source point lies in the normalised xz plane and is given by

$$\mathbf{r}'' = \mathbf{R}^{-1}\mathbf{r}' = \begin{bmatrix} \dfrac{\kappa_{1p}x_i + \kappa_{2p}y_i + \kappa_{3p}z_i + \kappa_{4p}}{\mu_{1p}^2 + \mu_{2p}^2 + \mu_{3p}^2} \\[2ex] 0 \\[1ex] \dfrac{\sigma_{1p}x_i + \sigma_{2p}y_i + \sigma_{3p}z_i + \sigma_{4p}}{\mu_{1p}^2 + \mu_{2p}^2 + \mu_{3p}^2} \\[2ex] 1 \end{bmatrix} \tag{17}$$
where the κ's and σ's are the new projection constants. The projector projects a fringe pattern whose
intensity is based on the 2-D phase distribution over the grating plane. This 2-D phase function can be
of any arbitrary form. For now, let us consider only three types of phase functions that produce
horizontal fringes, vertical fringes, and circular fringes, respectively.
Let the phase of any point on the projector's grating be a function of the Euclidean distances
associated with that point and the grating's origin. According to (17) we can write

$$\phi_x = \omega_x\left(\frac{\kappa_{1p}x_i + \kappa_{2p}y_i + \kappa_{3p}z_i + \kappa_{4p}}{\mu_{1p}^2 + \mu_{2p}^2 + \mu_{3p}^2}\right) \tag{18}$$

$$\phi_z = \omega_z\left(\frac{\sigma_{1p}x_i + \sigma_{2p}y_i + \sigma_{3p}z_i + \sigma_{4p}}{\mu_{1p}^2 + \mu_{2p}^2 + \mu_{3p}^2}\right) \tag{19}$$

$$\phi_{xz} = \omega_{xz}\sqrt{\left(\frac{\kappa_{1p}x_i + \kappa_{2p}y_i + \kappa_{3p}z_i + \kappa_{4p}}{\mu_{1p}^2 + \mu_{2p}^2 + \mu_{3p}^2}\right)^2 + \left(\frac{\sigma_{1p}x_i + \sigma_{2p}y_i + \sigma_{3p}z_i + \sigma_{4p}}{\mu_{1p}^2 + \mu_{2p}^2 + \mu_{3p}^2}\right)^2} \tag{20}$$
where x , z ,and xz denote the phase function of horizontal fringes, vertical fringes, and circular
fringes, respectively. The quantities x, z and xz are the angular frequencies of the projected patterns
in radians. The projected phase-to-height models can thus be obtained by solving (18-20) for the
height zi .
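A direct transcription of (18)-(20), with hypothetical projection constants standing in for the calibrated κ's, σ's and μ's:

```python
import math

# Illustrative projection constants; in practice these come from the
# projector calibration, not from the values chosen here.
kappa = (0.9, 0.1, 0.4, 0.02)
sigma = (0.2, -0.3, 0.8, -0.01)
mu2 = 0.9**2 + 0.1**2 + 0.4**2       # normalising factor mu1p^2 + mu2p^2 + mu3p^2
wx, wz, wxz = 200.0, 150.0, 180.0    # angular frequencies (rad)

def lin(c, xi, yi, zi):
    """The common linear form (c1*xi + c2*yi + c3*zi + c4) / mu2."""
    return (c[0] * xi + c[1] * yi + c[2] * zi + c[3]) / mu2

def phi_x(xi, yi, zi):   # horizontal fringes, eq (18)
    return wx * lin(kappa, xi, yi, zi)

def phi_z(xi, yi, zi):   # vertical fringes, eq (19)
    return wz * lin(sigma, xi, yi, zi)

def phi_xz(xi, yi, zi):  # circular fringes, eq (20)
    return wxz * math.hypot(lin(kappa, xi, yi, zi), lin(sigma, xi, yi, zi))
```

Because each of (18) and (19) is linear in (x_i, y_i, z_i), substituting the back-projected x_i and y_i leaves a single linear equation in z_i, which is what makes the closed-form solution below possible.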
One thing still remains: including the camera effects, in other words, expressing the
phase-to-height relationship in terms of the camera and projector constants. It can be shown that
(analogous to the projector case) the camera 3-D mapping transformation is of the form

$$\mathbf{r}_c = \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{\lambda_{1c}x_i + \lambda_{2c}y_i + \lambda_{3c}z_i + \lambda_{4c}}{\nu_{1c}x_i + \nu_{2c}y_i + \nu_{3c}z_i + \nu_{4c}} \\[2ex] \dfrac{\mu_{1c}x_i + \mu_{2c}y_i + \mu_{3c}z_i + \mu_{4c}}{\nu_{1c}x_i + \nu_{2c}y_i + \nu_{3c}z_i + \nu_{4c}} \\[2ex] \dfrac{\xi_{1c}x_i + \xi_{2c}y_i + \xi_{3c}z_i + \xi_{4c}}{\nu_{1c}x_i + \nu_{2c}y_i + \nu_{3c}z_i + \nu_{4c}} \\[2ex] 1 \end{bmatrix} \tag{21}$$
By forcing one of the camera coordinates to become a constant, for example by assuming that our camera
lies in the xy-plane so that the height of the image plane is constant at z_c = Z_0, we have α = π/2 and β = γ = 0;
therefore λ_1 = λ_2 = 0 and λ_3 = -1, and

$$\mathbf{r}_c = \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{\lambda_{1c}x_i + \lambda_{2c}y_i + \lambda_{3c}z_i + \lambda_{4c}}{z_i - \nu_c} \\[2ex] \dfrac{\mu_{1c}x_i + \mu_{2c}y_i + \mu_{3c}z_i + \mu_{4c}}{z_i - \nu_c} \\[2ex] Z_0 \\[1ex] 1 \end{bmatrix} \tag{22}$$
Note that this form is not directly useful as it stands, since it is not possible to solve it for x_i and y_i. Therefore, we
can return to the normalised focal-plane equation (i.e., similar to the form in (17), with α = π/2 and β = γ = 0
in mind) and write

$$x_i = \frac{(x_f - x_0)z_i + x_0 z_f - x_f z_0}{z_0 - z_f} + x_c\,\frac{z_i - z_f}{z_0 - z_f}, \qquad y_i = \frac{(y_f - y_0)z_i + y_0 z_f - y_f z_0}{z_0 - z_f} + y_c\,\frac{z_i - z_f}{z_0 - z_f} \tag{23}$$

Using these expressions for x_i and y_i in (18), we can then solve for z_i to get
$$z_i = \frac{\big[(\kappa_1 z_f - \kappa_2 z_f)x_c - (\kappa_1 z_f - \kappa_2 z_f)x_0 + (\kappa_2 x_f - \kappa_1 x_f + \kappa_4)z_0 - \kappa_4 z_f\big]\,\omega_x - (z_f^2 - z_0 z_f)\,\phi_x}{\big[(\kappa_2 - \kappa_1)x_c - (\kappa_2 - \kappa_1)x_0 + \kappa_3 z_0 - \kappa_2 x_f + \kappa_1 x_f - \kappa_3 z_f\big]\,\omega_x - (z_f - z_0)\,\phi_x} \tag{24}$$

This equation describes the generic form of the phase-to-height model. Note that, unlike existing
models, it directly relates the absolute phase, and not a phase difference, to the object height.
3. Experimental results
In order to use this model in practice we still need to do one more thing; that is, we need to 1) extract
the camera and projector parameters (i.e., position and relative rotation), 2) include the effects of lens
aberrations (i.e., radial and tangential distortions), and 3) optimise the phase-to-height model. To
do this, we perform an elaborate calibration of the projector and camera along with optimisation of the
phase-to-height system constants using genetic algorithms. Here we are actually optimising the
phase-to-height solution for the reference plane, which is of the form z(φ) = 0. This is in effect an
empirical calibration of the measuring system. It is necessary in order to compensate for other sources of
error, for example, phase-measurement errors due to environmental illumination, object properties,
and projection and imaging contrast variations, which were not accounted for in the geometric
calibration of the projector and camera. It should be noted that this outcome is extremely important; for
example, in [13] the system calibration was based on optimising a certain equation of some form that
is only approximately correct for a certain 2-D arrangement, whereas in this paper the correct generic 3-D form
of the model is used. Figure 2 shows the measured outcome of this approach.
Figure 2. A 3-D measurement outcome using the optimised model.
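The empirical calibration step can be illustrated with a toy version of the optimisation: a mutation-only, elitist genetic algorithm tunes a single stand-in constant so that the model output over reference-plane phase samples is as close to z = 0 as possible. The model, phase data, and GA settings here are hypothetical stand-ins, not the paper's eq (24) or measured phase.

```python
import random

# Synthetic unwrapped phase over the reference plane (illustrative values).
phase_samples = [0.5 + 0.01 * i for i in range(100)]

def model_height(phi, c):
    # Stand-in for the phase-to-height model with all but one constant fixed.
    return (phi - c) / (1.0 + 0.2 * phi)

def fitness(c):
    # Mean squared height over the reference plane, to be minimised (z = 0).
    return sum(model_height(p, c) ** 2 for p in phase_samples) / len(phase_samples)

# Tiny genetic algorithm: keep the 10 best, mutate each elite twice.
random.seed(0)
pop = [random.uniform(-2.0, 2.0) for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]
    pop = elite + [c + random.gauss(0.0, 0.1) for c in elite for _ in range(2)]
best = min(pop, key=fitness)
```

In the actual system the optimised quantities are the system constants of the full model, and the fitness is evaluated against the reference-plane phase data obtained during geometric calibration.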
4. Conclusions
In this paper we presented a new generalised approach for modelling the phase-to-height
relationship. Due to length constraints much detail has been omitted, but the principle behind the
approach should be clear. The model shows that 2-D models are valid only in a local sense and that 3-D
approaches are necessary. Camera and projector parameters, including lens aberrations, were obtained
using geometric calibration techniques, while error sources were minimised by using genetic algorithms to
optimise the system constants using the data obtained from the reference plane; hence, elaborate
calibration objects are avoided.
References
[1] M. Takeda and K. Mutoh, "Fourier transform profilometry for the automatic measurement of 3-D object shapes," Applied Optics, vol. 22, pp. 3977-82, 1983.
[2] D. R. Burton and M. J. Lalor, "Multichannel Fourier fringe analysis as an aid to automatic phase unwrapping," Applied Optics, vol. 33, pp. 2939-48, 1994.
[3] G. S. Spagnolo, G. Guattari, C. Sapia, D. Ambrosini, D. Paoletti, and G. Accardo, "Three-dimensional optical profilometry for artwork inspection," Journal of Optics A: Pure and Applied Optics, vol. 2, pp. 353-361, 2000.
[4] G. S. Spagnolo and D. Ambrosini, "Diffractive optical element-based profilometer for surface inspection," Optical Engineering, vol. 40, pp. 44-52, 2001.
[5] G. S. Spagnolo, R. Majo, D. Ambrosini, and D. Paoletti, "Digital moire by a diffractive optical element for deformation analysis of ancient paintings," Journal of Optics A: Pure and Applied Optics, vol. 5, pp. S146-S151, 2003.
[6] Q. C. Zheng and R. J. Gu, "Triangulation of point cloud data for stamped parts," in Seventh ISSAT International Conference on Reliability and Quality in Design. Piscataway: INT SOC SCI APPL TECHNOL, 2002, pp. 205-208.
[7] S. H. Wang, C. G. Quan, C. J. Tay, I. Reading, and Z. P. Fang, "Measurement of a fiber-end surface profile by use of phase-shifting laser interferometry," Applied Optics, vol. 43, pp. 49-56, 2004.
[8] D. N. Borza, "High-resolution time-average electronic holography for vibration measurement," Optics and Lasers in Engineering, vol. 41, pp. 515-527, 2004.
[9] C. Quan, Y. Fu, and C. J. Tay, "Determination of surface contour by temporal analysis of shadow moire fringes," Optics Communications, vol. 230, pp. 23-33, 2004.
[10] Y. Li, H.-Y. Shum, C.-K. Tang, and R. Szeliski, "Stereo reconstruction from multiperspective panoramas," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 45-62, 2004.
[11] J. Peipe and P. Andrae, "Optical 3D coordinate measurement using a range sensor and photogrammetry," Proceedings of the SPIE - The International Society for Optical Engineering, vol. 5013, pp. 110-16, 2003.
[12] B. A. Rajoub, D. R. Burton, and M. J. Lalor, "A new phase-to-height model for measuring object shape using collimated projections of structured light," Journal of Optics A: Pure and Applied Optics, vol. 7, pp. 368-375, 2005.
[13] A. Asundi and Z. Wensen, "Unified calibration technique and its applications in optical triangular profilometry," Applied Optics, vol. 38, pp. 3556-61, 1999.