CS 395/495-25: Spring 2003
IBMR: Week 7B
Chapter 6.1, 6.2; Chapter 7:
More Single-Camera Details
Jack Tumblin
jet@cs.northwestern.edu
IBMR-Related Seminars
3D Scanning for Cultural Heritage Applications
Holly Rushmeier, IBM TJ Watson
Friday May 16 3:00pm, Rm 381, CS Dept.
Light Scattering Models
for Rendering Human Hair
Steve Marschner, Cornell University
Friday May 23 3:00pm, Rm 381, CS Dept.
Reminders
• ProjA graded: Good job! 90, 95, 110
• ProjB graded: Good! minor H confusions...
• MidTerm graded
• ProjC posted, due Friday, May 16
• ProjD tomorrow, Friday May 16; due Friday May 30
• Start Watson's Late Policy? Grade -(3n) points,
where n = # of class meetings late
• Take-Home Final Exam: Thurs June 5, due June 11
Mirror Spheres: Why?
Traditional CG Rendering:
To make an image,
Compute radiance arriving at novel camera position:
– Specify: Incoming light: irradiance function
at each (x,y,z) point, from every direction (θ,φ)
– Specify: Shape, Texture, Reflectance, BRDF, BSSRDF
at each surface point (xs,ys,zs)
– Compute: Outgoing light: exitance function
(after incoming light bounces around the scene)
at any camera point (x,y,z), in any pixel direction (θ,φ)
Mirror Spheres: Why?
• IBMR:
the input is far less defined!
'images, (usually) no depth' only
To make an image,
Compute radiance arriving at novel camera position:
– Specify: Radiance from images
– Compute: Outgoing light: exitance function
(after incoming light bounces around the scene)
at any camera point (x,y,z), in any pixel direction (θ,φ)
‘Rendering’ from a camera image?
Conventional: external camera reads the light field
(after rendering)
[Figure: external camera (axes xc, yc, zc) viewing a scene characterized by shape, position, movement, emitted light, BRDF, texture, scattering, and reflected/scattered light]
‘Rendering’ from a camera image?
IBMR: Let the camera measure light inside the scene
[Figure: camera placed inside the scene, measuring light at points x1, x2, x3 amid the scene's shape, emitted light, BRDF, texture, and scattering]
‘Rendering’ from a camera image?
IBMR: Camera measures light inside the scene
TROUBLE!
The camera is an object; it reflects
light and changes the scene.
WANTED: a tiny, point-like
panoramic camera:
a 'light probe'
[Figure: camera inside the scene, measuring light at points x1, x2, x3]
One Answer: Light Probe
Photograph a small mirror sphere
[Figure: external camera (axes xc, yc, zc) photographing a mirror sphere placed inside the scene, capturing the scene's emitted, reflected, and scattered light]
Light Probes: How?
• Tele-photo a mirror sphere (narrow FOV)
• Warp the image to find irradiance vs. direction
High contrast? Higher resolution? More positions?
More pictures!
Paul Debevec, SIGGRAPH 2001 course, "Image Based Lighting"
High Contrasts too!
Paul Debevec, SIGGRAPH 2001 short course, "Image Based Lighting"
One Answer: Light Probe
Light probes measure irradiance:
the incoming light at a point.
• Can use them as panoramic cameras, OR as local 'light maps':
– they define the intensity of incoming light vs.
direction, as if the local neighborhood were lit by
lights at infinity.
– Caution! may not be valid at nearby locations!
– Caution! high dynamic range! (>> 1:255)
– Can use them to render synthetic objects...
A Mirror Sphere is...
A ‘light probe’ to measure
ALL incoming light at a point.
How can we use it?
Image-Based Actual Re-lighting
Debevec et al., SIGG2001
Light the actress in Los Angeles,
film the background in Milan,
measure the incoming light,
match the LA and Milan lighting,
then matte in the background.
Measure REAL light in a REAL scene...
Debevec et al.,
SIGG1998
Render FAKE objects with REAL light,
And combine with REAL image:
Debevec et al.,
SIGG1998
The Grand Challenges:
• Controlled Lights +
Controlled Cameras
suggest we CAN
recover arbitrary
BRDF/BSSRDF
and ‘enough’ shape.
• Is any method PRACTICAL?
• Can we avoid/reduce corrupting interreflections?
• Can we understand the shape/texture tradeoff?
The Grand Rewards:
• Controlled Lights +
Controlled Cameras
suggest we CAN
recover arbitrary
BRDF/BSSRDF
and ‘enough’ shape.
– Holodeck? a CAVE uncorrupted by interreflections
– Historical Preservation? complete optical records
– 'Fake Materials'? a BRDF/BSSRDF display ...
– 'Shader Lamps'? exchange reflectance for illumination
– IBMR invisibility?
Image-Based Synthetic Re-lighting
Masselus et al., 2002
Image-Based Shape Refinement
Fine Geometric Details → Fine Texture/Normal Details
Rushmeier, 2001
Image-Based Shape Approximation
Matusik 2002
Image-Based Shape Approximation
Matusik 2002
Why all this Projective Tedium?
• So you have the tools to try IBMR
(and because I’m struggling, slowly, to boil it down to the essentials in this course)
• It’s almost over: the last 3 weeks of
class will be spent reading good recent
research papers, and will
• begin exploring some open research
questions...
Camera Matrix P Summary:
• Basic camera:
x = P0 X, where P0 = [K | 0] =
    [ xf  s  px  0 ]
    [  0  yf py  0 ]
    [  0  0   1  0 ]
• World-space camera:
translate the world origin to camera location C~, then rotate:
x = P X = (P0·R·T) X
• Rewrite as: P = K [R | -R C~]
• Redundant notation:
P = [M | p4], with M = KR, p4 = -K R C~
Input: X (3D world space) → Output: x (2D camera image)
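A tiny NumPy sketch of the slide above (K, R, and C~ below are made-up toy values, not from the slides): assemble P = K [R | -R C~] and project one world point.

```python
import numpy as np

# Toy intrinsics: xf, yf focal lengths (pixels), s skew, (px, py) principal point.
xf, yf, s, px, py = 800.0, 800.0, 0.0, 320.0, 240.0
K = np.array([[xf, s,  px],
              [0., yf, py],
              [0., 0., 1.]])

R = np.eye(3)                    # camera aligned with world axes
C = np.array([0., 0., -10.])     # camera center C~ in world coordinates

# P = K [R | -R C~]  (the 'world-space camera' of the slide)
P = K @ np.hstack([R, (-R @ C).reshape(3, 1)])

X = np.array([1., 2., 5., 1.])   # homogeneous world point
x = P @ X
x = x / x[2]                     # dehomogenize to pixel coordinates
```

Note that P[:, :3] equals K R, matching the 'redundant notation' P = [M | p4] with M = KR.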
Chapter 6 In Just One Slide:
Given point correspondence sets (xi ↔ Xi), how do
you find the camera matrix P? (full 11 DOF)
Surprise! You already know how!
• DLT method:
– rewrite H x = x' as Hx × x' = 0
– rewrite P X = x as PX × x = 0
– vectorize, stack, solve Ah = 0 for the h vector
– vectorize, stack, solve Ap = 0 for the p vector
– a normalizing step removes origin dependence
• More data → better results (at least 28 point pairs)
(why so many? rule of thumb: #constraints = 5 × #DOF = 5 × 11 = 55; at 2 constraints per pair, that's 27.5, so 28 point pairs)
• Algebraic & Geometric Error, Sampson Error…
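The DLT recipe above, specialized to P, can be sketched in NumPy on synthetic data (the random camera and the 28 world points are made up for the demo):

```python
import numpy as np

# DLT sketch: each pair (x, X) contributes two rows of the system A p = 0.
rng = np.random.default_rng(0)
P_true = rng.standard_normal((3, 4))

X = np.vstack([rng.standard_normal((3, 28)), np.ones((1, 28))])  # 28 points
x = P_true @ X                          # exact homogeneous image points

rows = []
for i in range(X.shape[1]):
    Xi = X[:, i]
    u, v, w = x[:, i]
    zero = np.zeros(4)
    # two independent rows of the cross-product constraint PX × x = 0
    rows.append(np.hstack([zero, -w * Xi, v * Xi]))
    rows.append(np.hstack([w * Xi, zero, -u * Xi]))
A = np.array(rows)

_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)            # null vector, reshaped row-major

# cameras are only defined up to scale (and sign): normalize both
P_est = P_est / np.linalg.norm(P_est)
P_ref = P_true / np.linalg.norm(P_true)
if P_est[0, 0] * P_ref[0, 0] < 0:
    P_est = -P_est
```

With exact correspondences the smallest singular vector recovers P up to scale; with noisy data the same code gives the algebraic-error minimizer.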
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
? What does it do to basic 3D world shapes?
• Planes
– Given any point X on a plane in P3,
– change the world's coordinate system so the plane is z = 0:
– matrix P then reduces to a 3×3 matrix H in P2:

  x = P·X = [ p11 p12 p13 p14 ] [x]   [ h11 h12 h13 ] [x]
            [ p21 p22 p23 p24 ] [y] = [ h21 h22 h23 ] [y]
            [ p31 p32 p33 p34 ] [0]   [ h31 h32 h33 ] [t]
                                [t]

  (the z column of P drops out: H = [p1 p2 p4])
• THUS a camera can apply any and all P2 plane transforms
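In NumPy terms (the camera below is a made-up toy), the plane-to-image homography is just P with its z column dropped; a minimal check that both routes give the same image point:

```python
import numpy as np

# Toy 3x4 camera (assumed values for the demo).
P = np.array([[400., 0., 320., 10.],
              [0., 400., 240., 20.],
              [0.,   0.,   1.,  1.]])

# For world points on the plane z = 0, the third column of P never acts,
# so the camera collapses to the homography H = [p1 p2 p4].
H = P[:, [0, 1, 3]]

Xp = np.array([2., 3., 0., 1.])       # a point on the plane z = 0
x_full = P @ Xp                       # project with the full camera
x_homog = H @ np.array([2., 3., 1.])  # project with the 3x3 homography
```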
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
? What does it do to basic 3D world shapes?
• Points, Directions:
A world-space P3 direction D → an image-space point xd:
Recall a direction D = (x,y,z,0) (a point at infinity)
defines an R3 finite vector d = (x,y,z).
  xd = P D = [M | p4] D = M d
The p4 column has no effect, because of D's zero;
recall M = KR. So:
  xd = M d        M-1 xd = d
[Figure: camera axes (xc, yc, zc), center C~, principal point p, focal length f; world direction d images to point xd]
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
? What does it do to basic 3D world shapes?
• Lines: Forward Projection:
• Line/ray in world → line/ray in image:
– Ray in P3:              X(μ) = A + μB
– Camera maps it to P2:   x(μ) = PA + μPB
[Figure: world ray through points A, B projects to the image ray through PA, PB]
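Because projection is linear, it commutes with the ray's parameterization; a minimal numerical check with a toy camera and made-up ray:

```python
import numpy as np

# Toy camera [I | t] (assumed values for the demo).
P = np.hstack([np.eye(3), np.array([[0.], [0.], [1.]])])

A = np.array([1., 0., 2., 1.])     # ray origin (homogeneous world point)
B = np.array([0., 1., 1., 0.])     # ray direction (point at infinity)

mu = 3.0
x_direct = P @ (A + mu * B)        # project a point of the world ray
x_param = P @ A + mu * (P @ B)     # same point of the projected image ray
```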
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
? What does it do to basic 3D world shapes?
• Lines: Back Projection:
Line l in image → plane π in world:
– Recall: a line l in P2 is a 3-vector: l = [l1 l2 l3]T
– The plane π in P3 is a 4-vector:

  π = PT·l = [ p11 p21 p31 ] [l1]
             [ p12 p22 p32 ] [l2]
             [ p13 p23 p33 ] [l3]
             [ p14 p24 p34 ]

• (SKIP Plücker-matrix lines…)
[Figure: image line l back-projects to the plane π through camera center C]
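A quick NumPy sanity check of π = PT·l with toy values: a world point whose image lies on l must lie on the back-projected plane.

```python
import numpy as np

# Toy camera and image line (assumed values for the demo).
P = np.hstack([np.eye(3), np.array([[0.], [0.], [5.]])])
l = np.array([1., -1., 0.])        # image line: u = v

pi = P.T @ l                       # back-projected world plane (4-vector)

X = np.array([2., 2., 3., 1.])     # world point
x = P @ X                          # its image: (2, 2, 8), which is on u = v
```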
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
? What does it do to basic 3D world shapes?
Conics 1:
• Conic C in image → cone quadric Qco in world:
  Qco = PT·C·P
• (Tip of the cone is the camera center V)
[Figure: image conic C back-projects to a cone with apex at V]
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
? What does it do to basic 3D world shapes?
• Conics 2:
Dual (plane) quadric Q* in world →
dual (line) conic C* silhouette in image:
  C* = P·Q*·PT
• Works for ANY world quadric!
sphere, cylinder, ellipsoid,
paraboloid, hyperboloid, line, disk …
[Figure: world quadric Q* projects to the silhouette conic C* in the image]
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
? What does it do to basic 3D world shapes?
• Conics 3:
World-space quadric Q → world-space view cone
Qco, a degenerate quadric:
  Qco = (VT Q V) Q - (Q V)(Q V)T
[Figure: quadric Q and its view cone Qco, apex at camera center V]
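The cone formula can be checked numerically (toy data: a unit sphere seen from V = (0, 0, 3)): both the apex V and any grazing point of Q satisfy the cone equation.

```python
import numpy as np

# Qco = (V^T Q V) Q - (Q V)(Q V)^T, the degenerate quadric of rays from V
# that graze Q.  Toy data, assumed for the demo.
Q = np.diag([1., 1., 1., -1.])           # unit sphere x^2 + y^2 + z^2 = 1
V = np.array([0., 0., 3., 1.])           # camera center (homogeneous)

QV = Q @ V
Qco = (V @ Q @ V) * Q - np.outer(QV, QV)

on_cone_V = V @ Qco @ V                  # the apex lies on the cone
X = np.array([np.sqrt(8.) / 3., 0., 1. / 3., 1.])  # a grazing point of Q
on_sphere = X @ Q @ X                    # X is on the sphere...
on_cone_X = X @ Qco @ X                  # ...and on the view cone
```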
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
? What if the image plane moves?
A) Translation:
• Given internal camera calibration
  K = [ xf  s  px ]
      [  0  yf py ]
      [  0  0   1 ]
• Translate in (xc, yc)? That changes px, py. In zc? The focal length f:
let k = (f + tz)/f, then:
  K' = [ k 0 0 ] [ xf  s  (px - tx) ]
       [ 0 k 0 ] [  0  yf (py - ty) ]
       [ 0 0 1 ] [  0  0      1     ]
• Effect of K' on image points x, x':
  x = [K | 0] X;   x' = [K' | 0] X
  x' = K' K-1 x
[Figure: camera axes (xc, yc, zc), principal point p, focal length f]
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
? What if the image plane moves?
B) Rotation:
• Given internal camera calibration
  K = [ xf  s  px ]
      [  0  yf py ]
      [  0  0   1 ]
• Rotate the basic camera's output
about its center C using a 3D rotation matrix R (3×3):
  x = [K | 0] X;   x' = [K R | 0] X
  x' = [K R (K-1 K) | 0] X = (K R K-1) [K | 0] X
• Get new points x' from old image points x:
  (K·R·K-1) x = x'
• aka 'conjugate rotation'; use this to construct planar panoramas
[Figure: camera rotated by R about its center]
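A sketch of the conjugate rotation with a made-up K and a 10° pan: warping old image points by K R K-1 reproduces the rotated camera's image exactly, which is why it suffices for planar panoramas.

```python
import numpy as np

# Toy intrinsics and rotation (assumed values for the demo).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0.,   0.,   1.]])
th = np.deg2rad(10.)
R = np.array([[np.cos(th), 0., np.sin(th)],   # 10-degree pan about y
              [0., 1., 0.],
              [-np.sin(th), 0., np.cos(th)]])

X = np.array([1., 2., 10., 1.])               # a world point
P0 = np.hstack([K, np.zeros((3, 1))])         # x  = [K | 0] X
P1 = np.hstack([K @ R, np.zeros((3, 1))])     # x' = [K R | 0] X

x = P0 @ X
x_rotated = P1 @ X
H = K @ R @ np.linalg.inv(K)                  # conjugate rotation
x_warped = H @ x                              # same point via the 2D warp
```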
Chapter 7: More One-Camera Details
Full 3x4 camera matrix P maps P3world to P2 image
THUS: if the image plane moves:
Same center point? Same image, just rearranged!
ALL such cameras gather the same image content;
moving the image plane just rearranges the image points
(a planar reprojection in P2):
  a) Translations: (K'·K-1) x = x'
  b) Rotations:    (K·R·K-1) x = x'
The camera center C must move to change the image content:
no zooming, warping, or rotation can change this!
Gathering 3-D image data requires camera movement.
Movement Detection?
• Can we do it from images only?
– 2D projective transforms often LOOK 3-D;
– external camera calibration affects all elements of P
• YES. The camera moved if-and-only-if
points along a camera ray (C, X1, X2, …)
map to a LINE (not a point) in the other image
• 'Epipolar line' l' = image of the ray L
• 'Parallax' = the vector from x1' to x2'
[Figure: ray L through C, X1, X2 images to the epipolar line l' through x1', x2' in the second camera at C']
Cameras as Protractors
• Define a world-space direction d:
– from a P3 point at infinity D = [xd yd zd 0]T,
define d = [xd yd zd]T
• Use the basic camera P0:
– (e.g. C~ = (0,0,0), R = I, P = P0)
– (Danger! now mixing P2, P3…)
– Link direction D to the image-space point xd = (xc, yc, zc):
  P0 D = [K | 0] D = K d = xd
• The ray through image point xd has direction d = K-1 xd
Cameras as Protractors
• Angle θ at C between two image points x1, x2
(see book pg 199):

  cos θ = x1T (K-T K-1) x2 / sqrt( (x1T (K-T K-1) x1) · (x2T (K-T K-1) x2) )

• An image line l defines a plane π:
– (Careful! P3 world = P2 camera axes here!)
– plane normal direction: n = KT l
[Figure: rays d1, d2 through x1, x2 meet at angle θ at C; line l and its plane normal n]
Cameras as Protractors
• Angle θ at C between two image points x1, x2
(see book pg 199):

  cos θ = x1T (K-T K-1) x2 / sqrt( (x1T (K-T K-1) x1) · (x2T (K-T K-1) x2) )

Something special here? Yes! The matrix (K-T K-1).
• An image line l defines a plane π:
– (Careful! P3 world = P2 camera axes here!)
– plane normal direction: n = KT l
[Figure: rays d1, d2 through x1, x2 meet at angle θ at C]
Cameras as Protractors
What is (K-T K-1)?
• Recall P3 conic weirdness:
– the plane at infinity π∞ holds all 'horizon points' d
('universe wrapper')
– the absolute conic Ω∞ is the imaginary outermost circle of π∞
• For ANY camera,
translation won't change 'horizon point' images:
  P Xd = x = K R d    (pg 200)
• The absolute conic Ω∞ lies inside π∞; it's all 'horizon points'
• For ANY camera, the image of Ω∞ is
  ω = (K-T K-1), the 'Image of the Absolute Conic' (IAC)
Why do we care?
The image of Ω∞ is ω = (K-T K-1), the 'Image of the Absolute Conic'
• The IAC is a 'magic tool' for finding the camera calibration K
• Recall the dual conic C*∞ let us find H from perpendicular lines
• Much better than 'vanishing point' methods
• With the IAC, find the P matrix from an image of
just 3 (non-coplanar) squares…
Cameras as Protractors
• Image direction: D = [xc, yc, zc, 0]T
• Image direction from a point x: d = K-1 x
• Angle θ between the rays through two image points x1, x2 (pg 199):

  cos θ = x1T (K-T K-1) x2 / sqrt( (x1T (K-T K-1) x1) · (x2T (K-T K-1) x2) )

• Simplify with the absolute conic Ω∞:
the image of Ω∞ is ω = (K-T K-1), the 'Image of the Absolute Conic'
[Figure: rays d1, d2 through x1, x2 meet at angle θ at C]
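Numerically (made-up K and pixel coordinates), the ω-based formula agrees with directly measuring the angle between the back-projected rays d = K-1 x:

```python
import numpy as np

# Toy intrinsics with a little skew (assumed values for the demo).
K = np.array([[800., 2., 320.],
              [0., 780., 240.],
              [0.,   0.,   1.]])
Kinv = np.linalg.inv(K)
w = Kinv.T @ Kinv                  # omega = K^{-T} K^{-1}, the IAC

x1 = np.array([100., 200., 1.])    # two image points (pixels, homogeneous)
x2 = np.array([500., 150., 1.])

cos_theta = (x1 @ w @ x2) / np.sqrt((x1 @ w @ x1) * (x2 @ w @ x2))

# same angle from the ray directions d = K^{-1} x
d1, d2 = Kinv @ x1, Kinv @ x2
cos_check = (d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
```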
Cameras as Protractors
The image of Ω∞ is ω = (K-T K-1). OK. Now what was Ω∞ again?
Recall P3 conic weirdness: (pg. 63-67)
– the plane at infinity π∞ holds all 'horizon points' d
('universe wrapper')
– the absolute conic Ω∞ is the set of imaginary points in the outermost circle of π∞
• Satisfies BOTH x1² + x2² + x3² = 0 AND x4² = 0
• Can rewrite the equations to look like a quadric (but it isn't one: no x4):

  [ x1 x2 x3 0 ] [ 1 0 0 0 ] [ x1 ]
                 [ 0 1 0 0 ] [ x2 ]  =  dT·Ω∞·d
                 [ 0 0 1 0 ] [ x3 ]
                 [ 0 0 0 0 ] [  0 ]

• AHA! 'Points' on it are (complex conjugate) directions d!
– It finds right angles: if d1 ⊥ d2, then
  d1T·Ω∞·d2 = 0
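A tiny check that the diag(1,1,1,0) form really is a right-angle detector on homogeneous directions (real directions here, for simplicity; the conic's own points are complex):

```python
import numpy as np

# The 4x4 matrix of the absolute conic, written as a 'quadric'.
Omega = np.diag([1., 1., 1., 0.])

d1 = np.array([1., 0., 0., 0.])    # direction along x (point at infinity)
d2 = np.array([0., 1., 0., 0.])    # direction along y: perpendicular to d1
d3 = np.array([1., 1., 0., 0.])    # 45 degrees to d1: not perpendicular

perp = d1 @ Omega @ d2             # vanishes for a right angle
not_perp = d1 @ Omega @ d3         # nonzero otherwise
```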
Cameras as Protractors
The image of Ω∞ is ω = (K-T K-1). OK. Now what was Ω∞ again?
– The dual of the absolute conic Ω∞ is a dual quadric Q*∞ (?!?!)
– More compact notation, for imaginary planes π (inconsistent notation!)
– Same matrix, but a different use:

  [ π1 π2 π3 π4 ] [ 1 0 0 0 ] [ π1 ]
                  [ 0 1 0 0 ] [ π2 ]  =  πT·Q*∞·π
                  [ 0 0 1 0 ] [ π3 ]
                  [ 0 0 0 0 ] [ π4 ]

– finds a plane π for every possible direction d
– π is ⊥ to d, and tangent to the quadric Q*∞
– Ω∞ is the circle in π∞ where the tangent planes π are ⊥ to π∞
– It finds right angles: if π1 ⊥ π2, then π1T·Q*∞·π2 = 0
Cameras as Protractors
The image of Ω∞ is ω = (K-T K-1), the 'Image of the Absolute Conic'
• Just as Ω∞ has a dual Q*∞, ω has a dual ω*:
  ω* = ω-1 = K KT
• The dual conic ω* is the image of Q*∞, so:

  ω* = P Q*∞ PT = P [ 1 0 0 0 ] PT
                    [ 0 1 0 0 ]
                    [ 0 0 1 0 ]
                    [ 0 0 0 0 ]

(built from the first 3 columns of P: P Q*∞ = [p1 p2 p3 0])
Cameras as Protractors
The image of Ω∞ is ω = (K-T K-1), the 'Image of the Absolute Conic'
• Just as Ω∞ has a dual Q*∞, ω has a dual ω*:
  ω* = ω-1 = K KT
• The dual conic ω* is the image of Q*∞:
  ω* = P Q*∞ PT   (built from the first 3 columns of P)
• Vanishing points v1, v2 of two ⊥ world-space lines:
  v1T ω v2 = 0
• Vanishing lines l1, l2 of two ⊥ world-space planes:
  l1T ω* l2 = 0
Cameras as Protractors
Clever vanishing point trick:
• Perpendicular lines in the image?
• Find their vanishing points by construction:
• Use v1T ω v2 = 0, stack, and solve for ω = (K-T K-1)
[Figure: three mutually perpendicular vanishing points v1, v2, v3]
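A sketch of the stacking step with synthetic data (K, the random perpendicular direction pairs, and the 5-constraint count are all assumptions for the demo; real images would supply the vanishing points):

```python
import numpy as np

# Each vanishing-point pair (v1, v2) of perpendicular world directions gives
# one linear constraint v1^T omega v2 = 0 on the symmetric 3x3 IAC omega.
# Stack 5 constraints and take the SVD null vector.
rng = np.random.default_rng(1)
K = np.array([[800., 2., 320.],
              [0., 780., 240.],
              [0.,   0.,   1.]])

rows = []
for _ in range(5):
    d1 = rng.standard_normal(3)
    d2 = np.cross(d1, rng.standard_normal(3))   # perpendicular to d1
    v1, v2 = K @ d1, K @ d2                     # vanishing points v = K d
    (a, b, c), (d, e, f) = v1, v2
    # unknowns ordered (w11, w12, w22, w13, w23, w33)
    row = np.array([a*d, a*e + b*d, b*e, a*f + c*d, b*f + c*e, c*f])
    rows.append(row / np.linalg.norm(row))      # scale rows for conditioning
A = np.array(rows)

_, _, Vt = np.linalg.svd(A)
w11, w12, w22, w13, w23, w33 = Vt[-1]
omega = np.array([[w11, w12, w13],
                  [w12, w22, w23],
                  [w13, w23, w33]])

# compare with the true IAC, up to scale and sign
omega_true = np.linalg.inv(K).T @ np.linalg.inv(K)
omega = omega / np.linalg.norm(omega)
omega_true_n = omega_true / np.linalg.norm(omega_true)
if omega[0, 0] < 0:
    omega = -omega
```

Once ω is known, K follows from it by Cholesky-style factorization, since ω = K-T K-1.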
END