Automatic Projector Alignment Algorithms - Display Wall

Automatic Projector Alignment Algorithm
Princeton University Computer Science
Matlab code by Han Chen
Contact person: Grant Wallace, gwallace@cs.princeton.edu
Copyright (C) 2002 Princeton University Computer Science Department
GNU General Public License – see copyright.doc
Introduction
These matlab files implement the algorithms discussed in the paper "Scalable Alignment of
Large-Format Multi-Projector Displays Using Camera Homography Trees".
There are three phases to aligning a display wall:
1) Collect images of feature points on the display wall.
2) From the feature points, determine a set of homographies (3x3 matrix
transformations) that, when applied to the projectors, will make the display appear
aligned.
3) Apply these homographies in real-time on the imagery displayed.
The matlab code in this distribution deals with phase 2 – determining the homographies.
The input to this algorithm is a series of images of feature points, as described below. The
output is a set of homographies, one per projector, or a configuration file that contains the
corner positions of the projectors in some universal frame of reference. The
homographies and configuration file are interchangeable; one can be derived from the
other.
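Since the two representations are interchangeable, the corner positions can be
recovered from a homography in a few lines of matlab. A minimal sketch, assuming a
hypothetical 3x3 homography H and 1024x768 projectors (maketform and tformfwd are
standard Image Processing Toolbox routines; note the transpose, since maketform uses
the row-vector convention [x y 1]*T):

    H = [1.02 0.01 5; 0.00 0.98 3; 0 0 1];          % hypothetical homography
    t = maketform('projective', H');                % projector -> global frame
    corners_prj  = [0 0; 1023 0; 1023 767; 0 767];  % the four projector corners
    corners_wall = tformfwd(t, corners_prj)         % the configuration file entries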
We do not include code for phase 1 in this distribution because it is hardware dependent
and we do not currently have a generic implementation. Phase 3 typically involves
modifying applications to use the transformations. For 3D apps this just involves pushing
the homography onto the view stack. For 2D apps it requires rendering to texture memory
and then using texture mapping to warp the output to the frame buffer.
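For intuition, the 2D warp can be mimicked in matlab itself; a rough sketch, in which
the input frame, the homography H, and the output range are all hypothetical (real
display walls perform this warp in the graphics pipeline, not in matlab):

    img = imread('frame.jpg');            % hypothetical input frame
    H   = [1 0.02 10; 0 1 4; 0 0 1];      % hypothetical alignment homography
    t   = maketform('projective', H');
    warped = imtransform(img, t, 'XData', [1 1024], 'YData', [1 768]);
    imshow(warped)                        % what the frame buffer would show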
Using the Alignment Algorithm
Collecting Input Images
The algorithm requires a series of JPEG images from which to determine feature points. This
series of pictures can comprise multiple zoomed-in views of the display wall, which the
algorithm will stitch together. Pictures taken with a greater zoom allow for more
precision when detecting the feature points. Zoomed-in images must overlap each other
by one projector in either the horizontal or vertical direction; this allows the stitching to
be done. It is also possible to have the camera view cover the entire display, but precision
may be sacrificed. The camera coverage used is input to the algorithm via the configuration
variable ‘cam_coverage’ in ‘everything.m’. If you have an 8-projector wall in a 2x4
configuration, then cam_coverage could be set to the following:
1. cam_coverage=2 – this means each camera view sees a 2x2 array of projectors. It
will require three views to cover the display. Each horizontal pan overlaps the
previous view by 2x1 projectors.
2. cam_coverage=3 – this means each camera view potentially sees a 3x3 array of
projectors. Our example display only has 2 rows, so the first view will actually see
2x3 projectors. It will take two views to cover the display. The second view will
only see the last two columns.
3. cam_coverage=4 – each camera view potentially sees 4x4 projectors. For our
example it will see 2x4 projectors, or the entire screen. Only one view is required.
A short sketch after the figure below shows how the number of required views
follows from cam_coverage.
[Figure: projectors P1–P8 in a 2x4 array, spanned by three overlapping camera
views, camview 1 through camview 3.]
Three camera views for an 8 projector display where cam_coverage is 2
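The number of views follows from the overlap rule. In the sketch below, the step of
cam_coverage-1 projectors per pan is our reading of the one-projector overlap
requirement; it reproduces the view counts of all three examples above:

    prj_rows = 2;  prj_cols = 4;  cam_coverage = 2;   % the example wall
    step    = cam_coverage - 1;                       % pan advance, in projectors
    views_x = max(1, ceil((prj_cols - cam_coverage)/step) + 1);
    views_y = max(1, ceil((prj_rows - cam_coverage)/step) + 1);
    n_views = views_x * views_y                       % 3 for this example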
The input image series should have the following properties:
1. Two images must be taken for each projector within a camera view. Of the two
images, one will show a set of horizontal lines and the other will show a set of
vertical lines. The intersections of these lines will later form the feature points. The
lines can be shown with some offset from the projector edge and should have
uniform spacing. These parameters will later be input to the algorithm through the
prj_margin and prj_pattern variables. If we are going to show 5 vertical lines and
4 horizontal lines, with the first line starting 80 pixels from the projector edge,
then prj_margin=80 and prj_pattern=[4 3] will be the settings. prj_pattern records
the number of equal divisions; 3 divisions give 4 lines: one at the beginning,
one at the end, and two in the middle. (A sketch of the resulting line positions
appears below.)
2. The images should be named with a common prefix and a fixed numbering pattern.
The filenames should follow the convention in the figure below. The first number is the
camera view; for our example 8 projector wall with cam_coverage=2 this number runs
from 001 to 003. The second number is the projector within the view. The upper left
projector in each view is 01 and the counting proceeds clockwise through 04 for
cam_coverage=2. The final letter indicates whether the image is of the horizontal or
vertical lines. (A sketch that enumerates the full set of names appears below.)
prefix_001_03_h.jpg
        |  |  |
        |  |  +-- horizontal (h) or vertical (v) lines
        |  +----- projector number within view
        +-------- camera view number
Image file naming convention
3. A background image should be taken from each camera view and subtracted from
the feature images. For each camera view, take a picture of the projectors showing
black; this will be the background image. Subtract the values of this image from
all feature images from the same camera view. This helps the image
processing routines detect the feature points accurately. (A subtraction sketch
appears below.)
For our example 8 projector display using cam_coverage=2, there will be 24 images.
These images will be divided among three camera views. Each view will have 2 images
of each projector within the view, one of horizontal lines and one of vertical lines. Since
there are 4 projectors in each view, this gives 8 images per view. See the srcimg directory
for the sample images.
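As a concrete reading of the prj_margin/prj_pattern convention in item 1, the sketch
below computes where the feature lines fall. The projector resolution, the ordering of
the two prj_pattern entries, and the assumption that the margin applies at both edges
are ours:

    prj_res_x = 1024;  prj_res_y = 768;      % assumed projector resolution
    prj_margin = 80;   prj_pattern = [4 3];  % 4 and 3 equal divisions
    % 4 divisions give 5 vertical lines across x; 3 divisions give 4
    % horizontal lines down y, all uniformly spaced inside the margins
    xs = linspace(prj_margin, prj_res_x - prj_margin, prj_pattern(1) + 1)
    ys = linspace(prj_margin, prj_res_y - prj_margin, prj_pattern(2) + 1)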
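The naming convention in item 2 can be checked by enumerating the expected names; for
the 2x4 example with cam_coverage=2 this prints all 24 files ('prefix' stands in for
whatever img_name is set to):

    prefix = 'prefix';
    for view = 1:3                   % camera views 001 through 003
        for prj = 1:4                % projectors 01 through 04 within a view
            for pat = 'hv'           % horizontal and vertical line images
                fprintf('%s_%03d_%02d_%c.jpg\n', prefix, view, prj, pat);
            end
        end
    end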
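The background subtraction in item 3 amounts to the following per feature image; the
file names here are illustrative, and imsubtract (Image Processing Toolbox) clamps
uint8 results at zero:

    bg    = imread('prefix_001_bg.jpg');     % projectors showing black (name assumed)
    img   = imread('prefix_001_03_h.jpg');   % a feature image from the same view
    clean = imsubtract(img, bg);             % remove ambient light and black level
    imwrite(clean, 'prefix_001_03_h.jpg');   % save so the algorithm reads the cleaned image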
Set Configuration Variables
The top-level matlab file is “everything.m”. This file contains all configuration settings
and calls other matlab functions to perform the work.
Configuration settings:
1. host_prefix – The prefix used for each display computer's hostname.
2. host_id – Lists the display computers' host numbers in row order. For instance, if
hostnames are ‘node-1’ through ‘node-8’, host_prefix=’node-‘ and host_id=[1:8].
3. Display wall resolution: wall_res_x and wall_res_y
4. Projector resolution: prj_res_x and prj_res_y
5. Projector array configuration: prj_rows and prj_cols
6. Camera resolution: cam_res_x and cam_res_y
7. Edge pixels to discard from camera: cam_margin_x and cam_margin_y
8. Camera coverage per view: cam_coverage
9. Pixel offset on projector for first feature line: prj_margin
10. Number of feature line divisions, horz and vert: prj_pattern
11. Input file path: base_path
12. Input file prefix: img_name
13. Output configuration file name: scr_cfg
14. Camera calibration parameters: f, c, d - these are focal length, center, and
distortion parameters. Your camera can be calibrated using the OpenCV
calibration tool.
15. leds (optional) – a set of 4 LEDs can be put at the four corners of the screen. These
mark the physical screen location and ensure that the final result is aligned
with the physical screen. They are only necessary for small display walls where the
orientation is hard to extract from the projector positions. leds_on_screen is the
pixel offset of the LEDs from the upper left projector. leds_inch is the offset of
the LEDs from the upper left corner of the screen, given in inches. If LEDs are
used, then for each of the four corner camera views a picture must be taken with
only the LEDs on. These images should be labeled with the camera view and ‘led’,
as in ‘prefix_001_led.jpg’.
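Putting these together, the configuration section of ‘everything.m’ ends up looking
something like the block below for the 2x4 example. Every value here is illustrative
and must be replaced with your own measurements:

    host_prefix = 'node-';     host_id = [1:8];      % node-1 ... node-8, row order
    wall_res_x  = 4096;        wall_res_y = 1536;    % 4x1024 by 2x768
    prj_res_x   = 1024;        prj_res_y  = 768;
    prj_rows    = 2;           prj_cols   = 4;
    cam_res_x   = 1600;        cam_res_y  = 1200;    % assumed camera resolution
    cam_margin_x = 10;         cam_margin_y = 10;    % discard noisy border pixels
    cam_coverage = 2;                                % each view sees 2x2 projectors
    prj_margin   = 80;         prj_pattern = [4 3];  % feature line layout
    base_path    = './srcimg/';
    img_name     = 'prefix';
    scr_cfg      = 'screen.cfg';
    % f, c, d come from calibrating your camera, e.g. with the OpenCV tool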
Run the Algorithm
Once all of the configuration variables are set in ‘everything.m’, simply run the matlab
code in everything.m. The algorithm will write a configuration file with the name
specified in the configuration ‘scr_cfg’. This file will have the corner position of each
projector in some global frame of reference. In addition, the matlab environment will have
the variable hwp, which holds the homographies from projector space to the global reference
frame, stored as TFORM structs (see ‘maketform.m’). The configuration file can then be
fed to applications, which can use the corner information to correctly show aligned
content on the display.
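If you prefer the homographies to the configuration file, hwp can be used directly. A
short sketch, assuming hwp(i) is the TFORM for projector i and 1024x768 projectors:

    i = 1;                                           % projector of interest
    corners_prj  = [0 0; 1023 0; 1023 767; 0 767];   % its four pixel corners
    corners_wall = tformfwd(hwp(i), corners_prj)     % same numbers as in scr_cfg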