Panorama Light-Field Imaging

Clemens Birklbauer1 and Oliver Bimber1
Johannes Kepler University Linz
Austrian Science Fund (FWF): P 24907-N23

Abstract

We present a first approach to constructing and rendering panoramic light fields (i.e., large field-of-view, gigaray light fields computed from overlapping, lower-resolution sub-light-field recordings). By converting the overlapping sub-light-fields into individual focal stacks from which a panoramic focal stack is computed, we remove the need for a precise reconstruction of scene depth or estimation of camera poses.

Figure 1: A 2.54-gigaray, 360° panoramic light field (spatial resolution: 17,885x1,260 pixels; angular resolution: 11x11; 7.61 GB) shown at two different focus settings (far: top, near: bottom). We thank Raytrix GmbH for capturing and providing the raw data.

Introduction and Motivation

With the increasing resolution of imaging sensors, light-field photography is becoming practical, and the first light-field cameras are already commercially available (e.g., Lytro, Raytrix, and others). Applying common digital image processing techniques to light fields, however, is in many cases not straightforward. The reason is that the outcome must be not only spatially consistent but also directionally consistent; otherwise, refocusing and perspective changes will cause strong image artifacts. Panorama imaging techniques, for example, are an integral part of digital photography and are often supported directly by camera hardware today. Conventional panorama imaging techniques assemble large image panoramas with a wide field of view from several smaller sub-images. We introduce the first techniques for imaging and visualizing panoramic light fields.

Related Work

To ensure robust registration and blending of sub-light-fields without the need for precise depth reconstruction or camera pose estimation, we apply common panorama techniques, such as those presented in [1; 2; 3], as a basis for our approach. The main difference from regular image panoramas, however, is that we must additionally encode directional information in a consistent way. For two different viewing angles, this has been achieved in [4], where a stereo panorama is created either by rotating a stereo camera or with the help of special optics. Concentric mosaics [5] are panoramas that contain more than two perspectives. They are captured by slit cameras moving in concentric circles. By recording multiple concentric mosaics at various distances from the rotation axis, novel views within the captured area (i.e., a horizontal 2D disk) can be computed. This idea was extended to the vertical axis in [6], which enables rendering of novel views within the resulting 3D cylindrical space. In contrast to concentric mosaics, our panoramic light fields are captured with regular (mobile) light-field cameras in exactly the same way image panoramas are recorded with conventional cameras (i.e., via a circular movement of the camera). Neither a precise, mechanically adjusted camera movement nor specialized optics are required in our case. Real-time rendering of the resulting gigaray panoramic light fields is enabled by a light-field caching approach.

Capturing and Construction

We capture overlapping sub-light-fields of a scene in the course of a circular movement of a mobile light-field camera and convert each sub-light-field into a focal stack using synthetic aperture reconstruction [7]. For light-field cameras that directly deliver a focal stack, this step is not necessary. Next, we compute an all-in-focus image for each focal stack by extracting and composing the highest-frequency image content throughout all focal stack slices. The registration and blending parameters are then computed for the resulting (overlapping) all-in-focus images. For this purpose, we apply conventional panorama stitching techniques, such as SURF feature extraction, pairwise feature matching and RANSAC outlier detection, bundle adjustment, wave correction, exposure compensation, and computation of blending seams. We use a four-dimensional (three rotations and focal length) motion model for registration and multi-band blending for composition. The registration and blending parameters derived for the all-in-focus images are then applied to all corresponding slices of the focal stacks. The result is a registered and seamlessly blended panoramic focal stack that can be converted into a light field with linear view synthesis, as described in [8]. This process is summarized in Figure 2, and the two sketches following the figure caption illustrate its main steps.

Figure 2: Panoramic light-field construction from a set of overlapping sub-light-fields that are captured in the course of a circular movement of the light-field camera.
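For illustration, here is a minimal sketch of the two per-sub-light-field steps described above: synthetic aperture refocusing of a 4D light field into a focal stack [7] (shift-and-add over all sub-aperture views), and all-in-focus compositing by selecting, per pixel, the slice with the highest local frequency content. The array layout, the Laplacian-based sharpness measure, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def focal_stack(lf, alphas):
    """lf: (U, V, Y, X) grayscale light field; alphas: focus parameters.
    Each slice averages all sub-aperture views, shifted in proportion to
    their angular offset from the central view (shift-and-add)."""
    U, V, Y, X = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    stack = np.zeros((len(alphas), Y, X))
    for i, a in enumerate(alphas):
        acc = np.zeros((Y, X))
        for u in range(U):
            for v in range(V):
                # shift each view toward the chosen focal plane
                # (u -> horizontal, v -> vertical is an assumed convention)
                acc += ndimage.shift(lf[u, v], (a * (v - vc), a * (u - uc)),
                                     order=1, mode='nearest')
        stack[i] = acc / (U * V)
    return stack

def all_in_focus(stack):
    """Compose the sharpest pixel from each slice, using the absolute
    response of a per-slice Laplacian as the sharpness measure."""
    sharp = np.stack([np.abs(ndimage.laplace(ndimage.gaussian_filter(s, 1.0)))
                      for s in stack])
    best = np.argmax(sharp, axis=0)  # index of sharpest slice per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```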
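The key idea of the registration step is that alignment is estimated on the all-in-focus images only, and the very same transform is then reused for every slice of the corresponding focal stack, which keeps all slices mutually registered. The sketch below reduces this to a pairwise SIFT + RANSAC homography (SIFT standing in for SURF, which sits in OpenCV's non-free module, and a general homography standing in for the restricted rotations-plus-focal-length motion model); the full pipeline named above additionally performs bundle adjustment, wave correction, exposure compensation, and seam computation. Function names are assumptions.

```python
import cv2
import numpy as np

def register_pair(aif_ref, aif_mov):
    """Estimate a homography mapping aif_mov onto aif_ref from feature
    matches between the two all-in-focus images (8-bit grayscale)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(aif_ref, None)
    k2, d2 = sift.detectAndCompute(aif_mov, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC outlier rejection
    return H

def warp_stack(stack, H, out_size):
    """Apply the transform found on the all-in-focus image to every
    focal-stack slice, so the whole stack stays registered."""
    return np.stack([cv2.warpPerspective(s, H, out_size) for s in stack])
```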
Rendering

Applying linear view synthesis to create panoramic light fields from panoramic focal stacks requires a multiple-viewpoint circular projection instead of a single-viewpoint projection for rendering. This is similar to rendering omnidirectional stereo panoramas, as in [4]. Furthermore, we use a GPU-based renderer with an intelligent caching strategy [9]: only the portion of the light field that is needed for the current and potential future viewing parameters is loaded into graphics memory. This allows rendering gigaray panoramic light fields in real time. Two illustrative sketches of these rendering ideas follow.
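To give a flavor of view synthesis from a focal stack, the following toy sketch forms a view at angular offset (du, dv) as a linear combination of slices, each shifted according to its focus parameter. The uniform weights used here are a deliberate simplification; [8] derives proper synthesis weights from a dimensionality-gap light field prior, and the panoramic setting additionally replaces the single-viewpoint projection with a multiple-viewpoint circular one. This is a rough approximation, not the method of [8].

```python
import numpy as np
from scipy import ndimage

def synthesize_view(stack, alphas, du, dv):
    """stack: (S, Y, X) panoramic focal stack; alphas: per-slice focus
    parameters (disparities); (du, dv): angular offset of the desired
    view from the central one."""
    view = np.zeros(stack.shape[1:])
    for slice_, a in zip(stack, alphas):
        # a scene point in focus at disparity a moves by a * (du, dv)
        view += ndimage.shift(slice_, (a * dv, a * du), order=1, mode='nearest')
    return view / len(alphas)  # naive uniform weights; see [8] for proper ones
```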
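The caching strategy can be pictured as a tile cache in front of the renderer: only tiles touched by the current view (plus speculatively prefetched neighbors) stay resident, and the least recently used tiles are evicted first. The following is a minimal CPU-side sketch under assumed tile granularity and prefetch rules, not the actual scheme of [9].

```python
from collections import OrderedDict

class TileCache:
    """LRU cache of light-field tiles, standing in for limited graphics memory."""

    def __init__(self, load_tile, capacity=256):
        self.load_tile = load_tile   # callback: tile key -> pixel data
        self.capacity = capacity     # max number of resident tiles
        self.tiles = OrderedDict()   # LRU order: oldest entries first

    def get(self, key):
        if key in self.tiles:
            self.tiles.move_to_end(key)          # mark as recently used
        else:
            self.tiles[key] = self.load_tile(key)
            if len(self.tiles) > self.capacity:
                self.tiles.popitem(last=False)   # evict least recently used
        return self.tiles[key]

    def prefetch(self, keys):
        """Speculatively load tiles for plausible future viewing parameters."""
        for key in keys:
            self.get(key)
```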
Limitations and Future Work

Limitations
The advantage of choosing an intermediate focal stack representation is that conventional image panorama techniques can be used for robust computation of a panoramic light field without precise reconstruction of scene depth or estimation of camera poses. However, the focal stack of a scene covers no more than a 3D subset of the full 4D light field. This limits our current approach to Lambertian scenes with modest depth discontinuities, as in [8].

Future Work
Reliable direct registration of sub-light-fields, instead of registration of the focal stacks computed from them, would ease our current confinement to Lambertian scenes with modest depth discontinuities and would better model the actual capturing process. This, however, might be less robust, since it relies on precise registration of the sub-light-fields. We will investigate this in the future.

References
1. Brown, M., and Lowe, D.G.: Automatic panoramic image stitching using invariant features. International Journal of Computer Vision 74 (2007), 59–73.
2. Shum, H.-Y., and Szeliski, R.: Systems and experiment paper: Construction of panoramic image mosaics with global and local alignment. International Journal of Computer Vision 36 (2000), 101–130.
3. Uyttendaele, M., Eden, A., and Szeliski, R.: Eliminating ghosting and exposure artifacts in image mosaics. In CVPR (2), IEEE Computer Society (2001), 509–516.
4. Peleg, S., Ben-Ezra, M., and Pritch, Y.: Omnistereo: Panoramic stereo imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence 23 (2001), 279–290.
5. Shum, H.-Y., and He, L.: Rendering with concentric mosaics. In SIGGRAPH '99, ACM (1999), 299–306.
6. Li, J., Zhou, K., Wang, Y., and Shum, H.-Y.: A novel image-based rendering system with a longitudinally aligned camera array. In Eurographics '00 (2000), 107–114.
7. Isaksen, A., McMillan, L., and Gortler, S.J.: Dynamically reparameterized light fields. In SIGGRAPH '00, ACM (2000), 297–306.
8. Levin, A., and Durand, F.: Linear view synthesis using a dimensionality gap light field prior. In CVPR (2010), 1831–1838.
9. Opelt, S., and Bimber, O.: Light-field caching. ACM SIGGRAPH 2011 (poster), 2011.

1 {firstname.lastname}@jku.at
More information at: www.jku.at/cg
© 2012 Johannes Kepler University Linz