
Linear View Synthesis Using a
Dimensionality Gap Light Field Prior
Anat Levin and Fredo Durand
Weizmann Institute of Science & MIT CSAIL
Light fields
Light field: the set of rays emitted from a scene in all possible directions.
Light fields
Novel view rendering
Synthetic refocusing
(Animation by Marc Levoy)
4D light field
The set of light rays hitting the camera aperture plane is 4D:
• Ray hitting point: 2D
• Ray orientation: 2D
(In general, a 7D plenoptic space, including time and wavelength dimensions.)
Light field acquisition schemes and priors
Very different approaches to light field acquisition and manipulation exist in the literature.
The inherent difference between them is the prior model each places on the light field space.
Light field acquisition schemes and priors
• 4D:
The light field is smooth, but involves 4 degrees of freedom.
- Capture: 4D data (e.g. camera array)
- Inference: linear
Light field acquisition schemes and priors
• 4D:
-Capture: 4D data (e.g. camera array)
-Inference: linear
• 2D:
For Lambertian scenes, all rays emerging from one point have the same color.
If depth is known, there are only 2 degrees of freedom.
- Capture: 2D data (e.g. stereo camera)
- Inference: non-linear depth estimation
In this talk: 3D light field prior
10
10
• 4D:
-Capture: 4D data (e.g. camera array
-Inference: linear
• 2D:
-Capture: 2D data (e.g. stereo camera)
y
-Inference: non linear depth estimation
v
u
x
• 3D:
Depth is a 1D variable, hence the union of images
at any depth covers no more than a 3D subset.
Show that in the frequency domain there is only a
3D manifold of non zero entries.
-Capture: 3D data (e.g. focal stack)
-Inference: linear
Outline
• Linear view synthesis from a focal stack sequence
• The 3D light field prior
• Frequency derivation of synthesis algorithm
• Other applications of the 3D prior
Linear view synthesis with 3D prior
Input: focal stack (3D data), a 1D set of 2D images focused at different depths.
Output: novel viewpoints (4D data), 2D images × a 2D set of novel viewpoints.
The mapping between them is linear image processing.
Linear view synthesis algorithm
No depth estimation!
1. Shift the focal stack images by the disparity of the desired view
2. Average the shifted images
3. Depth-invariant deconvolution
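The three steps can be sketched in a few lines of NumPy. The disparity model here (shift proportional to a per-slice disparity scale) and the Wiener-style regularized deconvolution are illustrative assumptions; the paper's exact depth-invariant kernel is not reproduced.

```python
import numpy as np

def synthesize_view(focal_stack, disparities, view_uv, psf_fft, eps=1e-3):
    """Sketch of the three-step linear view synthesis.

    focal_stack: (D, H, W) array of images focused at different depths.
    disparities: per-slice disparity scale (assumed known from calibration).
    view_uv:     desired viewpoint (u, v) inside the aperture.
    psf_fft:     (H, W) FFT of the depth-invariant blur kernel (assumed given).
    """
    D, H, W = focal_stack.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    acc = np.zeros((H, W))
    for img, d in zip(focal_stack, disparities):
        # 1. shift each slice by the disparity of the desired view at its depth
        dx, dy = view_uv[0] * d, view_uv[1] * d
        phase = np.exp(-2j * np.pi * (fx * dx + fy * dy))
        acc += np.fft.ifft2(np.fft.fft2(img) * phase).real
    acc /= D  # 2. average the shifted images
    # 3. depth-invariant deconvolution (Wiener-style regularization)
    F = np.fft.fft2(acc)
    out = np.fft.ifft2(F * np.conj(psf_fft) / (np.abs(psf_fft) ** 2 + eps))
    return out.real
```

Because every step is a shift, a sum, or a convolution, the whole pipeline is a single linear operator on the focal stack.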
Shift-invariant convolution ~ the focus sweep camera
Averaging the shifted images is equivalent to convolving the ideal pinhole image with a depth-invariant blur kernel.
Inspiration: the focus sweep camera (Hausler 72, Nagahara et al. 08), which captures a single image, averaging over all focus depths during exposure, and provides an EDOF image from a single view.

Linear view synthesis results
(Video animation here)
Disclaimers
• Novel viewpoints are limited to the aperture area
• The convolution model breaks at occlusion boundaries
• We assume the scene is Lambertian; in practice this holds within the narrow range of angles spanned by the aperture
Outline
• Linear view synthesis from a focal stack sequence
• The 3D light field prior
• Frequency derivation of synthesis algorithm
• Other applications of the 3D prior
4D light field
• The set of light rays hitting the lens is 4D: (x, y, u, v)
• Fixing the aperture coordinates, e.g. (·, ·, u0, 0) or (·, ·, 0, v0), selects a 2D slice of rays: a single viewpoint
4D light field spectrum
• The set of light rays hitting the lens is 4D: L(x, y, u, v)
• Study the 4D Fourier domain: the 4D Fourier transform of L(x, y, u, v), with 2D slices such as (ωx0, 0, ·, ·)
• Frequency content lies only along 1D segments
4D light field spectrum
(Figure: for each scene, the 4D light field spectrum and the portion of its energy away from the focal segments.)
The slicing theorem
2D images focused at varying depths correspond, via the 2D Fourier transform, to 2D slices of the 4D light field spectrum.
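The slicing theorem is the photography analog of the classical projection-slice theorem. A minimal 2D sanity check of that identity (integrating out one axis and comparing against the central slice of the 2D spectrum) looks like:

```python
import numpy as np

# Projection-slice sanity check: the 1D FFT of a projection of a 2D signal
# equals the central (omega_y = 0) slice of its 2D FFT. The photographic
# slicing theorem is the 4D -> 2D version of this identity.
rng = np.random.default_rng(1)
img = rng.random((32, 32))

projection = img.sum(axis=0)        # integrate along y
slice_1d = np.fft.fft(projection)   # 1D spectrum of the projection
central = np.fft.fft2(img)[0, :]    # omega_y = 0 row of the 2D spectrum

assert np.allclose(slice_1d, central)
```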
The dimensionality gap
(Figure: 4D Fourier transform of a scene with near and far objects.)
Light field spectrum: 4D
Image spectrum: 2D; depth: 1D → 2D + 1D = 3D
→ Dimensionality gap (Ng 05, Levin et al. 09)
Only the 3D manifold corresponding to physical focusing distances is useful.
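In symbols, under the standard two-plane parameterization (the slope and sign conventions here are an assumption, not necessarily the paper's): for a Lambertian scene, energy at spatial frequency $(\omega_x, \omega_y)$ appears only at

```latex
(\omega_u, \omega_v) = s\,(\omega_x, \omega_y),
\qquad s \in [s_{\min}, s_{\max}],
```

so the union over the 1D range of physical focus slopes $s$ is a 3D manifold inside the 4D frequency domain: the focal segments.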
3D Gaussian light field prior
A Gaussian prior that assigns non-zero variance only to the 3D set of entries on the focal segments.
• Gaussian ⇒ inference is simple and linear
• The focal stack directly samples the manifold with non-zero variance
Outline
• Linear view synthesis from a focal stack sequence
• The 3D light field prior
• Frequency derivation of synthesis algorithm
• Other applications of the 3D prior
View synthesis in the frequency domain
(Block diagram: the spectra of the focal stack images are averaged; the averaged focal stack spectra sample the 4D spectrum of a constant-depth scene with the correct depth density; a deconvolution in the frequency domain follows.)
Outline
• Linear view synthesis from a focal stack sequence
• The 3D light field prior
• Frequency derivation of synthesis algorithm
• Other applications of the 3D prior
Prior to infer light field from partial samples
In many other light field acquisition schemes we capture only partial information about the light field: limited resolution, aliasing, etc.
However, the measurements we capture are linear.
On the other hand, we have a Gaussian prior, and we know the light field actually occupies only a low-dimensional manifold of the 4D space.
Use the prior to “invert the rank-deficient projection” and interpolate the measurements into a light field with higher resolution and less aliasing.
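With a Gaussian prior, “inverting the rank-deficient projection” is a closed-form linear (MMSE/Wiener) estimate. A toy sketch, where the dimensions and the diagonal covariance are invented for illustration, with a handful of non-zero-variance entries standing in for the 3D focal-segment manifold:

```python
import numpy as np

# Toy sketch of linear (MMSE) inference under a rank-deficient Gaussian prior.
rng = np.random.default_rng(0)
n, m = 16, 8
var = np.zeros(n)
var[:6] = 1.0                       # non-zero variance: "focal segment" entries
C = np.diag(var)                    # rank-deficient prior covariance
A = rng.standard_normal((m, n))     # linear measurement operator (e.g. focal stack)
sigma2 = 1e-2                       # measurement noise variance

x_true = np.sqrt(var) * rng.standard_normal(n)
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(m)

# Linear MMSE estimate: x_hat = C A^T (A C A^T + sigma^2 I)^{-1} y
x_hat = C @ A.T @ np.linalg.solve(A @ C @ A.T + sigma2 * np.eye(m), y)
```

Even though A alone is not invertible (m < n), the estimate is well defined because the prior confines the answer to the low-dimensional support: entries with zero prior variance come out exactly zero.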
Improved viewpoint sampling
4D light field acquisition systems sample a 2D set of viewpoints.
• Can we get by with a sparser sample, using the 3D Gaussian prior for interpolation?
• How many samples are needed? What is the right spacing?
• Should we distribute samples on a grid, or is there a better arrangement?
Grid: standard sampling pattern. Circle: sampling pattern with improved reconstruction using the 3D prior.
Super-resolution of plenoptic camera measurements
Plenoptic camera measurements are aliased.
Replicas off the focal segments are high frequencies, which we can re-bin to restore high-frequency information.
Super-resolution of plenoptic camera measurements
Bicubic interpolation vs. Lumsdaine and Georgiev (applies for a single known depth) vs. our result (applies for all depths simultaneously, with no depth estimation).
Summary
• Light field acquisition and synthesis strongly depend on the light field prior.
Existing priors:
4D prior: capture 4D data (e.g. camera array); inference linear
2D prior: capture 2D data (e.g. stereo); inference non-linear
Our new prior:
3D prior: capture 3D data (e.g. focal stack); inference linear
• Linear view synthesis from the focal stack
• Other applications of the 3D prior:
- viewpoint sampling patterns
- depth-invariant super-resolution of plenoptic camera data