ECE TECHNICAL REPORT CSP-06-010
A Novel Geometric Calibration Technique for Scalable Multi-Projector Displays
Veeraganesh Yalla
Laurence G Hassebrook
Center for Visualization and Virtual Environments
Department of Electrical & Computer Engineering
University of Kentucky
Lexington, KY
25 September 2006
Copyright ©
Abstract:
We present a novel geometric calibration technique for scalable multi-projector displays. Geometric calibration plays an important role in aligning multi-projector systems so that the multiple projected images tile seamlessly. A multi-frequency Phase Measuring Profilometry (PMP) technique is proposed for the geometric calibration. The framework required to implement the technique is already present in scalable displays that use a camera for optical feedback. Instead of performing feature matching or corner detection to estimate the projector-camera homography, the proposed technique establishes a projector-camera correspondence for each pixel in the camera space with sub-pixel accuracy. Structured light illumination is widely used in 3D depth acquisition, so the proposed method can be further extended to scan any 3D display surface and use the calibration matrices to map visualization data onto it using the multi-projector display.
Keywords:
Scalable Multi-Projector Displays, Geometric Calibration, Structured Light Illumination,
Sub-pixel
1. Introduction:
Multi-projector displays are an increasingly common way to visualize high-resolution information on large display walls. Automatic alignment of the multiple projectors and photometric calibration are two important steps in configuring such display systems. Raskar et al. [1] proposed a general solution to the seamless display problem: a series of pre-calibrated stereo cameras is used to determine the display surface and the intrinsic and extrinsic parameters of the individual projectors. The computation time and resources required to implement this technique are substantial. Chen et al. [2] proposed another approach in which the projectors are aligned using an un-calibrated camera with pan-tilt and zoom-focus controls; the method has been reported to take 30 minutes to reach a final solution.
Surati [3] presented a technique in which a camera acts as an optical feedback path for aligning the multiple projectors. The technique can be used to project an image onto any arbitrary surface; the necessary constraint is that the camera's field of view must encompass the fields of view of all the projectors in the display system. Yang et al. [4] proposed the PixelFlex system, which uses an iterative technique to estimate the display surface shape based on image-based correlation. The method takes a relatively long time to converge, but can continually refine the display surface geometry even if the geometry changes during system usage. The calibration method proposed by Jaynes et al. [5] uses a circular Gaussian target, centered at a randomly selected point, in the projector frame buffer to illuminate the display surface. Lee et al. [6] proposed the use of Gray code structured light illumination to establish the projector-camera correspondence. However, Gray code patterns have limited resolution because of the number of patterns that can actually be encoded without aliasing effects.
In this work, we propose a novel technique for geometric calibration of the multi-projector system. Sine wave structured light illumination (SLI) has long been used in high resolution 3D depth acquisition; the method is commonly known as the Phase Measuring Profilometry (PMP) technique. Section 2 gives an overview of the proposed structured light illumination technique for geometric calibration. Section 3 discusses the implementation of the proposed technique. Section 4 presents the conclusions and future work.
2. Multi-Frequency PMP:
Phase measuring profilometry (PMP) has its origins in classical optical interferometry techniques [7]. Compared to other structured light algorithms such as single-spot, light-stripe and Gray code projection, the PMP technique uses fewer frames for a given precision. PMP projects shifted sine wave patterns and captures a deformed fringe pattern for each shift. The projected light pattern is expressed as
In(xp, yp) = Ap + Bp cos(2πf yp − 2πn/N)
(1)
where Ap and Bp are constants of the projector, f is the frequency of the sine wave, and (xp, yp) is the projector coordinate. The subscript n represents the phase-shift index, and the total number of phase shifts is N. Figure 1 shows the PMP patterns for the base frequency, with N = 4 phase shifts.
Figure 1: PMP base frequency patterns
From the camera viewpoint, the captured image is distorted by the target topology and can be mathematically expressed as
In(xc, yc) = Ac(xc, yc) + Bc(xc, yc) cos(φ(xc, yc) − 2πn/N)
(2)
The term φ(xc, yc) represents the phase value at pixel location (xc, yc) of the captured sine wave pattern and can be computed as follows
φ(xc, yc) = arctan[ Σn=1..N In(xc, yc) sin(2πn/N) / Σn=1..N In(xc, yc) cos(2πn/N) ]
(3)
Once the value of φ(xc, yc) is computed, the projector coordinate yp can be recovered as
yp = φ(xc, yc) / (2πf)
(4)
Similar equations are used for calibrating the xp coordinates. The major advantage of the PMP technique is high surface resolution; the major disadvantage is the limited depth range.
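To make Eqs. (2)-(4) concrete, the wrapped phase of Eq. (3) can be computed from a stack of N captured images as in the following minimal NumPy sketch (the function and array names are our own illustration, not part of the report's software):

```python
import numpy as np

def wrapped_phase(images):
    """Eq. (3): per-pixel wrapped phase from N phase-shifted captures.

    images: array of shape (N, H, W); images[n-1] is the capture for the
    phase shift 2*pi*n/N. Returns the phase in (-pi, pi].
    (Illustrative sketch; names are not from the report.)
    """
    N = images.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    # arctan2 resolves the quadrant, covering the full (-pi, pi] range.
    return np.arctan2(num, den)
```

For the base frequency the result plugs directly into Eq. (4); for higher frequencies the wrapped phase must first be unwrapped, as the multi-frequency algorithm below describes.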
Multi-frequency PMP is derived from the single frequency PMP technique. Single frequency PMP alone is not well suited to reliable 3D data acquisition: the quality of the acquired phase map is directly proportional to the scan time, in other words, to the total number of phase shifted patterns projected. Li et al. [8] proved that by using additional frequencies the acquired phase map is less noisy and hence the reconstruction is much better. Yalla et al. [9] proved experimentally that for the same number of phase shift patterns, the multi-frequency technique is a better approach to obtaining high quality phase information. The multi-frequency PMP technique is an extension of the single frequency PMP technique. The step by step algorithm is given below:
i. Project and capture the base frequency PMP patterns.
ii. Calculate the phase value for each pixel location in the camera. The phase value lies in (−π, π] because of the arctan function involved.
iii. Repeat the following steps until all the chosen higher frequencies have been projected:
a) Project the frequency fi, where i = 2 to K and K is the number of frequencies to be projected.
b) Capture the phase shifted patterns corresponding to this frequency.
c) Calculate the phase value for each pixel.
d) Unwrap the phase using the base frequency phase obtained in the previous step. The unwrapped phase has values in [0, 2π).
e) Subtract π from the phase obtained in step (d) to scale it to (−π, π]. This phase becomes the base frequency phase used in unwrapping the phase for the patterns at the next higher frequency.
iv. Use the final phase value to obtain the corresponding projector coordinate yp for each pixel location in the camera.
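The unwrapping loop above can be sketched as follows. This is one common formulation of temporal phase unwrapping, assuming the base frequency spans a single period across the projector; its bookkeeping differs slightly from steps (d)-(e) in that absolute phases are kept in [0, 2π·fi) rather than re-centered at each step, and all names are our own:

```python
import numpy as np

def multi_frequency_phase(wrapped, freqs):
    """Temporal unwrapping of wrapped phase maps at increasing frequencies.

    wrapped: list of arrays in (-pi, pi], one per frequency (Eq. 3 output);
    freqs:   the matching frequencies, freqs[0] being the base frequency
             (one period across the projector).
    Returns the absolute phase at freqs[-1].
    (Illustrative sketch; variable names are not from the report.)
    """
    phase = np.mod(wrapped[0], 2 * np.pi)       # absolute base phase in [0, 2*pi)
    f_prev = freqs[0]
    for w, f in zip(wrapped[1:], freqs[1:]):
        predicted = phase * (f / f_prev)        # coarse prediction at frequency f
        w = np.mod(w, 2 * np.pi)
        # pick the integer number of 2*pi cycles that best matches the prediction
        phase = w + 2 * np.pi * np.round((predicted - w) / (2 * np.pi))
        f_prev = f
    return phase

def projector_coordinate(phase, f):
    """Eq. (4): normalized projector coordinate from the absolute phase."""
    return phase / (2 * np.pi * f)
```

Each pass replaces the coarse phase with the finer, unwrapped one, so the final coordinate inherits the precision of the highest frequency while keeping the ambiguity-free range of the base frequency.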
3. Implementation of the proposed technique
3.1 Experimental Setup:
Figure 2 shows the schematic of a two projector display system. P1 and P2 are the two projectors used in the display system, and the optical feedback path is provided by the camera C. The proposed technique can be extended to an N projector display system as long as the camera's field of view encompasses the fields of view of all the projectors in the display system. In the case where the camera's field of view is limited by the size of the display system, multiple cameras can be used; this only adds to the complexity of the proposed technique, since camera-camera homographies need to be established. This can be done by making the cameras' fields of view share some of the projectors in the display system. Such a system can be called the N-Projector M-Camera system, or the NM display system for short.
Figure 2: Two Projector Display System
The two projectors used in our experiment are an Infocus 70LP and a Viewsonic PJ250, each with 1280x1024 pixel resolution. The camera used to calibrate the system is a Camera Link Pulnix TM1400 CL with 1392x1040 pixel resolution; in general, a low cost webcam of lower resolution can also be used. A linear pin-hole camera model is assumed. Lens distortion correction can be incorporated by using the non-linear camera calibration model proposed by Tsai [10]. Figures 3(a) and (b) show the phase images of the projectors P1 and P2, giving the phase information in the yp direction; the phase information in xp can be obtained by projecting PMP patterns in the orthogonal direction. In order to eliminate the noise in the phase images, a novel phase filter is used; the details of the phase filter are beyond the scope of this report. Figures 4(a) and (b) show the mask images of the two projectors P1 and P2. The total computation time, including pattern projection, capture and correspondence matching in xp and yp for each projector-camera combination, is approximately 6 seconds without any software optimization.
Figure 3 (a)-(b): Phase Maps of the Projectors
Figure 4(a)-(c): Mask Images
The homography Hci between the ith projector and the camera C can be established by a least squares solution or the SVD technique applied to Eq. (5). For M projector-camera correspondences (xkc, ykc) ↔ (xkp, ykp), k = 1,...,M, each correspondence contributes two rows to the 2M x 9 design matrix

⎡ x1c   y1c   1   0     0     0   −x1p x1c   −x1p y1c   −x1p ⎤
⎢ 0     0     0   x1c   y1c   1   −y1p x1c   −y1p y1c   −y1p ⎥
⎢ ...                                                        ⎥
⎢ xMc   yMc   1   0     0     0   −xMp xMc   −xMp yMc   −xMp ⎥
⎣ 0     0     0   xMc   yMc   1   −yMp xMc   −yMp yMc   −yMp ⎦
(5)

The nine entries of Hci, stacked as a vector h, form the null vector of this matrix, i.e. the solution of A h = 0.
Let Cxy = (xc, yc, 1)T represent a point in the camera coordinate system, and let the corresponding 2D projector coordinates be represented by Pxyi = (xp, yp, 1)T for i = 1,...,N projectors in the display system. From the N projector phase map images captured by the single camera C, the homography matrices Hc1, Hc2, ..., HcN are obtained, satisfying
Pxyi = Hci Cxy, i = 1, 2, 3, ..., N
(6)
For the complete display system, the projector mosaic is set to a pre-defined
region with known aspect ratio. Let the display screen coordinates be denoted as Rxy =
(x,y,1)T. The relationship between the display coordinates and camera coordinates can be
described by a 2D projective matrix Hrc. The relationship between the ith projector and
the reference display space is given by
Pxyi = Hci Cxy = (Hci Hrc) Rxy, i = 1, 2, 3, ..., N
(7)
The pixel to pixel mapping between projectors i and j is then defined as
Pxyj = Hrj Hri^-1 Pxyi, where Hri = Hci Hrc
(8)
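The SVD solution of Eq. (5) is the standard direct linear transform. A minimal sketch (our own illustrative code, with the dense per-pixel correspondences assumed to be subsampled into matched point lists) might look like:

```python
import numpy as np

def estimate_homography(cam_pts, proj_pts):
    """Solve the Eq. (5) system A h = 0 for the 3x3 homography Hci.

    cam_pts, proj_pts: (M, 2) arrays of matched camera/projector points, M >= 4.
    (Illustrative sketch; names are not from the report.)
    """
    rows = []
    for (xc, yc), (xp, yp) in zip(cam_pts, proj_pts):
        rows.append([xc, yc, 1, 0, 0, 0, -xp * xc, -xp * yc, -xp])
        rows.append([0, 0, 0, xc, yc, 1, -yp * xc, -yp * yc, -yp])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)        # null vector of A = last row of V^T
    return H / H[2, 2]              # fix the overall scale

def apply_homography(H, pts):
    """Map (M, 2) points through H with the perspective division."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

Because the PMP phase maps give a correspondence at every camera pixel, the system is heavily over-determined and the SVD null vector is a least-squares fit rather than an exact solution.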
3.2 Methodology to Display on Arbitrary 3D Surface
Rendering on an arbitrary 3D surface requires an exact representation of the surface in 3D world coordinates. Even the best known stereo vision based techniques can only give a coarse representation of the 3D shape, whereas SLI has been shown to yield much better 3D shape representations. A multi-projector display system is inherently a source of active illumination, and a triangulated camera completes the requirements of a 3D scanner system. Rendering on any arbitrary 3D display surface would be a breakthrough for immersive environments, with applications in scientific visualization, gaming, etc. Figure 5 shows a screenshot of the Doom 3 game [11]; imagine real players playing this game in an immersive environment simulated with real 3D geometry projected onto an arbitrary 3D display surface.
Figure 5: 3D Screenshot of the Doom 3 game
Figures 6(a) and (b) show the phase images of the projectors P1 and P2 when an arbitrary surface (a linear ramp in this case) is introduced between the display screen and the projectors. Figures 7(a) and (b) show the corresponding mask images of the two projectors P1 and P2.
Figure 6(a)-(b): Phase Maps of the Projectors
Figure 7(a)-(c): Mask Images for the Projectors
In order to register an arbitrary 3D surface, a set of reference points has to be marked on the surface. It is possible to project simulated calibration markers using a standalone pattern generator. The calibration model involves a Zw coordinate (the depth coordinate), so calibration points have to be carefully placed over the span of the display surface. The calibration needs to be done only once, as long as the multi-projector display system is left untouched. Another solution is to pre-calibrate each of the camera-projector combinations with a known calibration grid; the multi-frequency PMP technique described in Section 2 can be used to do the calibration. The perspective transformation matrix for the camera is
Ac = ⎡ X1w   Y1w   Z1w   1   0     0     0     0   −x1c X1w   −x1c Y1w   −x1c Z1w   −x1c ⎤
     ⎢ 0     0     0     0   X1w   Y1w   Z1w   1   −y1c X1w   −y1c Y1w   −y1c Z1w   −y1c ⎥
     ⎢ ...                                                                               ⎥
     ⎢ XMw   YMw   ZMw   1   0     0     0     0   −xMc XMw   −xMc YMw   −xMc ZMw   −xMc ⎥
     ⎣ 0     0     0     0   XMw   YMw   ZMw   1   −yMc XMw   −yMc YMw   −yMc ZMw   −yMc ⎦
(9)
mc = [m11wc  m12wc  m13wc  ...  m34wc]T
(10)
Since Ac has rank eleven, the vector mc can be recovered from the SVD
Ac = U D VT
(11)
where U is a 2M x 2M matrix whose columns are orthogonal vectors, D is a diagonal matrix of non-negative singular values, and V is a 12x12 matrix whose columns are orthogonal. The only nontrivial solution corresponds to the last column of V, and that is the solution for the parametric matrix Mwc. The matrix used to solve for the perspective transformation matrix of the ith projector is analogous to the camera case. Once calibrated, the phase map from each projector can be used along with the corresponding projector and camera calibration matrices to obtain the true high resolution 3D shape of the display surface.
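Equations (9)-(11) amount to a 3D direct linear transform; recovering the 3x4 perspective matrix from world-image correspondences via SVD can be sketched as follows (illustrative code with names of our own choosing):

```python
import numpy as np

def calibrate_perspective(world_pts, image_pts):
    """Solve the Eq. (9) system Ac m = 0 for the 3x4 perspective matrix.

    world_pts: (M, 3) world coordinates (not all coplanar);
    image_pts: (M, 2) matching pixels, M >= 6.
    Returns the matrix built from the last column of V in Ac = U D V^T
    (Eqs. 10-11), defined up to scale.
    (Illustrative sketch; names are not from the report.)
    """
    rows = []
    for (X, Y, Z), (x, y) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)
```

The recovered matrix is only defined up to a scale factor, which cancels in the perspective division of Eqs. (12)-(15).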
The perspective transformation equations for the camera and projector coordinates are given in Eqs. (12)-(15):
xc = (m11wc Xw + m12wc Yw + m13wc Zw + m14wc) / (m31wc Xw + m32wc Yw + m33wc Zw + m34wc)
(12)
yc = (m21wc Xw + m22wc Yw + m23wc Zw + m24wc) / (m31wc Xw + m32wc Yw + m33wc Zw + m34wc)
(13)
and
xp = (m11wp Xw + m12wp Yw + m13wp Zw + m14wp) / (m31wp Xw + m32wp Yw + m33wp Zw + m34wp)
(14)
yp = (m21wp Xw + m22wp Yw + m23wp Zw + m24wp) / (m31wp Xw + m32wp Yw + m33wp Zw + m34wp)
(15)
Assuming Zw, xc and yc are the known parameters, Xw and Yw can be obtained from Eqs. (12) and (13) such that
Xw = g1(xc, yc, Zw)
(16)
Yw = g2(xc, yc, Zw)
(17)
Substituting Eqs. (16)-(17) into Eqs. (14)-(15),
xp = fwarp1(xc, yc, Zw)
(18)
yp = fwarp2(xc, yc, Zw)
(19)
The mapping from the camera coordinates xc, yc to the projector coordinates xp, yp with the known world coordinate Zw is thus established. The homography equations described in Section 3.1 can then be used to map the visualization data for each of the projectors in the multi-projector system.
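Given calibrated camera and projector matrices, the warp of Eqs. (16)-(19) reduces to a 2x2 linear solve followed by a projection. A minimal sketch, with matrix and variable names of our own choosing:

```python
import numpy as np

def camera_to_projector(mc, mp, xc, yc, Zw):
    """Eqs. (16)-(19): map camera pixel (xc, yc) at known depth Zw to the
    projector pixel (xp, yp).

    mc, mp: 3x4 camera and projector perspective matrices (Eq. 9 solutions).
    (Illustrative sketch; names are not from the report.)
    """
    # Rearranging Eqs. (12)-(13) gives two linear equations in Xw and Yw.
    A = np.array([[mc[0, 0] - xc * mc[2, 0], mc[0, 1] - xc * mc[2, 1]],
                  [mc[1, 0] - yc * mc[2, 0], mc[1, 1] - yc * mc[2, 1]]])
    b = np.array([xc * (mc[2, 2] * Zw + mc[2, 3]) - (mc[0, 2] * Zw + mc[0, 3]),
                  yc * (mc[2, 2] * Zw + mc[2, 3]) - (mc[1, 2] * Zw + mc[1, 3])])
    Xw, Yw = np.linalg.solve(A, b)            # Eqs. (16)-(17)
    w = mp @ np.array([Xw, Yw, Zw, 1.0])      # Eqs. (14)-(15)
    return w[0] / w[2], w[1] / w[2]           # (xp, yp): Eqs. (18)-(19)
```

Evaluating this warp per camera pixel, with Zw taken from the scanned surface, produces the pre-distorted image each projector must display.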
4. Conclusions and Future Research Work
A novel geometric calibration technique for scalable multi-projector systems is proposed in this paper. Sub-pixel projector-camera correspondence is established using the multi-frequency PMP technique, and the same technique can further be used to scan the display surface in 3D. Existing multi-projector display systems with camera feedback can be used as-is, without any hardware changes; a pattern generator and the phase map computation for each projector can be implemented in software at no additional cost. It is also possible to map visualization data onto any arbitrary 3D display surface using pre-computed calibration matrices and 3D depth information of the display surface. The advantage of the proposed technique is its ability to obtain high resolution, true 3D depth information, compared with the generally coarse depth information obtained using stereo vision techniques. Photometric calibration will be implemented to match the color gamuts of the projectors in the display system. Future research will involve extending the calibration technique to an N projector, M camera system (NM display system).
References:
1. R. Raskar, M. S. Brown, R. Yang, W. Chen, G. Welch, H. Towles, B. Seales, and H. Fuchs, "Multi-Projector Displays Using Camera-Based Registration", IEEE Visualization, 161-168, San Francisco, October 1999.
2. Y. Chen, H. Chen, D. W. Clark, Z. Liu, G. Wallace, and K. Li, "Automatic Alignment of High-Resolution Multi-Projector Displays Using An Un-Calibrated Camera", IEEE Visualization 2000, Salt Lake City, UT, October 2000.
3. R. Surati, "Scalable Self-Calibration Display Technology for Seamless Large-Scale Displays", PhD thesis, Massachusetts Institute of Technology, 1999.
4. R. Yang, D. Gotz, J. Hensley, H. Towles, and M. S. Brown, "PixelFlex: A Reconfigurable Multi-Projector Display System", IEEE Visualization 2001, San Diego, CA, USA.
5. C. Jaynes, S. Webb, and R. M. Steele, "A Scalable Framework for High Resolution Immersive Displays", International Journal of Electrical and Technical Engineering Research, 48(3&4), pp. 278-285, May 2002.
6. J. C. Lee, D. M. Aminzade, P. H. Dietz, and R. Raskar, "Method and System for Calibrating Projectors to Arbitrarily Shaped Surfaces With Optical Sensors Mounted at the Surfaces", US Patent No. 7,001,023 B2, February 21, 2006.
7. V. Srinivasan, H. C. Liu, and M. Halioua, "Automated phase-measuring profilometry of 3-D diffuse objects", Appl. Opt. 23(18), 3105-3108, 1984.
8. J. Li, L. G. Hassebrook, and C. Guan, "Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity", J. Opt. Soc. Am. A 20, 106-115, 2003.
9. V. Yalla and L. G. Hassebrook, "Very-High Resolution 3D Surface Scanning using Multi-Frequency Phase Measuring Profilometry", edited by Peter Tchoryk, Jr. and Brian Holtz, SPIE Defense and Security, Spaceborne Sensors II, Orlando, Florida, Vol. 5798-09, March 28, 2005.
10. R. Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, 323-344, August 1987.
11. www.doom3.com