Precis 2 (Using Plane + Parallax for Calibrating Dense Camera Arrays)

Liz Bondi
V. Vaish, B. Wilburn, N. Joshi, and M. Levoy, "Using plane + parallax for calibrating dense camera arrays," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR'04), July 2004, vol. 1, pp. I-2 to I-9. [Online]. Available: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1315006&isnumber=29133
In this paper, Stanford researchers present a calibration method that exploits parallax. Parallax is the apparent shift in an object's position when it is viewed from different locations. For example, a driver looking straight at the speedometer may see the needle pointing to exactly 60 miles per hour, while the passenger, viewing the dial at an angle, sees it pointing to a slightly different number. Parallax-based calibration uses these differing views to blur out objects that do not lie on a chosen reference plane, a plane in the scene parallel to the plane of the cameras. Objects on the reference plane appear sharp, just as in an ordinary photograph, while objects off the reference plane appear blurred because of parallax.
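To make the geometry concrete, here is a small sketch of the similar-triangle relationship behind parallax. The numbers and the function name are my own toy illustration, not values or code from the paper: once two views are aligned on the reference plane, a point exactly on that plane shows no residual shift, while a point off the plane shifts in proportion to the camera baseline.

```python
# Toy illustration of residual parallax after aligning two views on a
# fronto-parallel reference plane. All numbers are made up for illustration.

def parallax_shift(baseline, focal_length, depth_ref, depth_point):
    """Image-plane shift (in pixels) of a point at depth_point between two
    cameras separated by `baseline`, after both views are aligned on the
    reference plane at depth_ref. Points on the reference plane shift by 0."""
    return focal_length * baseline * (1.0 / depth_point - 1.0 / depth_ref)

# A point on the reference plane shows no residual parallax...
on_plane = parallax_shift(baseline=0.1, focal_length=500, depth_ref=10.0, depth_point=10.0)
# ...while a point off the plane is displaced, so it blurs when views are averaged.
off_plane = parallax_shift(baseline=0.1, focal_length=500, depth_ref=10.0, depth_point=5.0)

print(on_plane)   # 0.0
print(off_plane)  # 5.0
```

This is why averaging the aligned views keeps the reference plane sharp: only off-plane points land in different places in different cameras.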
To calibrate the array, the relative positions of the cameras are first recovered using parallax, similar triangles, and a handful of matrix equations. The two-plane light field and its rays can then be used for image-based rendering, which rests on the idea that if all the rays in a scene are known, an image from any viewpoint can be synthesized. With synthetic aperture photography, the array can then be focused at different depths without redoing the measurements, and objects off the focal plane effectively "disappear." In other words, the focal plane can be moved to bring out whichever parts of the scene are of interest. If we could figure out how to change the focal plane after the cameras record footage at the airport, we could potentially use this idea to let the user change the focus at will.
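The refocusing idea can be sketched as a simple shift-and-add: shift each camera's image in proportion to that camera's offset, then average. This is a minimal one-dimensional toy of my own, not the paper's implementation (which warps full images), but it shows why changing a single scale factor moves the focal plane without any new measurements.

```python
import numpy as np

def synthetic_aperture(images, offsets, focus_scale):
    """Shift each view in proportion to its camera offset, then average.
    Varying focus_scale moves the synthetic focal plane: features whose
    parallax matches the shifts stay sharp, everything else blurs out.
    (A minimal sketch of the idea, not the paper's implementation.)"""
    acc = np.zeros_like(images[0], dtype=float)
    for img, dx in zip(images, offsets):
        acc += np.roll(img, int(round(focus_scale * dx)), axis=0)
    return acc / len(images)

# Two toy one-row "views": a background dot at column 2 in both cameras,
# and a foreground dot that jumps from column 5 to column 7 (a parallax of
# 2 pixels for a camera offset of 1 unit).
view0 = np.zeros(10); view0[2] = 1.0; view0[5] = 1.0
view1 = np.zeros(10); view1[2] = 1.0; view1[7] = 1.0

# Focus on the background plane: background stays sharp, foreground blurs.
bg = synthetic_aperture([view0, view1], offsets=[0, 1], focus_scale=0)
# Focus on the foreground plane: shift view1 back by its 2-pixel parallax.
fg = synthetic_aperture([view0, view1], offsets=[0, 1], focus_scale=-2)

print(bg[2])  # 1.0 (background in focus; foreground smeared to 0.5 at two spots)
print(fg[5])  # 1.0 (foreground in focus)
```

Note that refocusing only changes `focus_scale`; nothing about the cameras has to be re-measured, which is exactly the property the authors highlight.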
In their experiment, the authors used parallax, a reference plane, and the camera array to compute the positions of the cameras and the amount of parallax present. Combining this data with the images captured by the array, they produced an image in which the trees occluding a group of students were no longer visible. Using this method, they were able to "…change the focal length (depth of the reference plane) without explicit knowledge of its depth, or having to measure parallax for each desired focal plane." In other words, the focal plane can be changed without redoing the calculations, which changes which objects "disappear," a tool that could be extremely useful to security guards reviewing a tape after a busy day at the airport.
Overall, this paper seems to be a good source: it was published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), it cites eighteen references to back up its claims, it defines key vocabulary right at the beginning, and its diagrams and experimental photographs illustrate the text in a helpful manner. One figure, for example, diagrams the reference plane, the camera plane, and where the parallax arises, and it labels the variables used in the equations, matrices, and proofs. The paper is not, however, organized in the most useful manner. It gives a relatively in-depth description of parallax-based calibration in the introduction, then surveys related work, and finally returns to another description of parallax-based calibration. This jumping around is confusing and does little to support the findings. A further limitation is the physical setup: parallax-based calibration requires a reference plane parallel to the camera plane, so the cameras must be arranged in a straight row, which makes the rig quite large.
Nevertheless, I got a great deal from this source because of its diagrams describing parallax and the experimental setups. For example, it states that the authors used a 45-camera array on a cart controlled by a 933 MHz Linux PC, which gives us an idea of the kind of computer we might need to process data from so many cameras. We could also experiment with the number of cameras, perhaps trying 45 at some point. In the Conclusions and Future Work section, the authors say they are not yet sure how to track a person, but they suggest that synthetic aperture photography could be used to do so. The most obvious use of this paper is as a "how-to guide" on parallax-based calibration. We will almost certainly need to calibrate our cameras, and this paper could help us do that. This source will therefore be valuable as we build our multi-camera array and try to make anywhere from three to 45 cameras work together to "see through" barriers.