Introduction

Computer Vision
CS 776 Spring 2014
Cameras & Photogrammetry 3
Prof. Alex Berg
(Slide credits to many folks on individual slides)
Cameras & Photogrammetry 3
Gamma Scientific
Adapted from:
http://en.wikipedia.org/wiki/File:Commercial_Integrating_Sphere.jpg
http://www.math.tudresden.de/DMV2000/Impress/PIC003.jpg
http://www.autobild.de/artikel/xenonlampen-im-test-2822873.html
Stories of light
Alex Berg 2012
Jensen & Buhler Siggraph 2002
Image formation
What determines the brightness of an image pixel?
• Light source properties
• Surface shape and orientation
• Surface reflectance properties
• Optics
• Sensor characteristics
• Exposure
Slide by L. Fei-Fei
Fundamental radiometric relation
L: Radiance emitted from P toward P’
E: Irradiance falling on P’ from the lens
[Diagram: scene point P at distance z from the lens, aperture diameter d, off-axis angle α, image point P', focal length f]
What is the relationship between E and L?
Szeliski 2.2.3
Fundamental radiometric relation
[Diagram: scene point P, aperture diameter d, off-axis angle α, image point P', focal length f, object distance z]

E = L · (π/4) · (d/f)² · cos⁴ α
• Image irradiance is linearly related to scene radiance
• Irradiance is proportional to the area of the lens and
inversely proportional to the squared distance
between
the lens and the image plane
• The irradiance falls off as the angle between the
viewing ray and the optical axis increases
Szeliski 2.2.3
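As a quick numerical illustration of the relation above (a sketch, not from the slides; the radiance, aperture, focal length, and angle values are made up):

import math

def image_irradiance(L, d, f, alpha):
    # E = L * (pi/4) * (d/f)^2 * cos^4(alpha)
    # L: scene radiance, d: aperture diameter, f: focal length, alpha: off-axis angle
    return L * (math.pi / 4.0) * (d / f) ** 2 * math.cos(alpha) ** 4

# f/2 lens (d/f = 0.5), 20 degrees off the optical axis
print(image_irradiance(L=10.0, d=0.025, f=0.05, alpha=math.radians(20)))
# cos^4(20 deg) ~ 0.78, so this point receives ~78% of the on-axis irradiance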
The interaction of light and surfaces
What happens when a light ray hits a point on an
object?
• Some of the light gets absorbed
– converted to other forms of energy (e.g., heat)
• Some gets transmitted through the object
– possibly bent, through “refraction”
– Or scattered inside the object (subsurface scattering)
• Some gets reflected
– possibly in multiple directions at once
• Really complicated things can happen
– fluorescence
Bidirectional reflectance distribution
function (BRDF): how bright a
surface appears when viewed
from one direction when light
falls on it from another
Slide by Steve Seitz
BRDFs can be incredibly
complicated…
Slide by Svetlana Lazebnik
Radiometry
• Questions:
– how “bright” will surfaces be?
– what is “brightness”?
• measuring light
• interactions between light
and surfaces
• Core idea - think about light
arriving at a surface
• around any point is a
hemisphere of directions
• Simplest problems can be dealt
with by reasoning about this
hemisphere
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Lambert’s wall
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
More complex wall
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Foreshortening
• Principle: two sources that
look the same to a receiver must
have the same effect on the
receiver.
• Principle: two receivers that
look the same to a source must
receive the same amount of
energy.
• “look the same” means produce
the same input hemisphere (or
output hemisphere)
• Reason: what else can a
receiver know about a source
but what appears on its input
hemisphere? (ditto, swapping
receiver and source)
• Crucial consequence: a big
source (resp. receiver), viewed
at a glancing angle, must
produce (resp. experience) the
same effect as a small source
(resp. receiver) viewed
frontally.
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Solid Angle
• By analogy with angle (in
radians), the solid angle
subtended by a region at a point
is the area projected on a unit
sphere centered at that point
• The solid angle subtended by a small patch of area dA, at distance r, whose normal is tilted by θ from the line of sight, is dω = (dA cos θ) / r²
• Another useful expression, in spherical coordinates (θ, φ): dω = sin θ dθ dφ
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
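A tiny numerical check of the two expressions above (a sketch with made-up patch size, distance, and tilt):

import math

def solid_angle_patch(dA, r, theta):
    # Solid angle of a small patch of area dA at distance r whose normal is
    # tilted by theta from the line of sight: dA * cos(theta) / r^2
    return dA * math.cos(theta) / r ** 2

print(solid_angle_patch(dA=1e-4, r=1.0, theta=0.0))     # 1 cm^2 face-on at 1 m: 1e-4 sr

# The other expression, d_omega = sin(theta) dtheta dphi, integrates to 2*pi over
# the whole hemisphere (theta in [0, pi/2], phi in [0, 2*pi]):
print(2 * math.pi * (1 - math.cos(math.pi / 2)))         # = 2*pi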
Measuring Light in Free Space
• Desirable property: in a
vacuum, the relevant unit does
not go down along a straight
line.
• How do we get a unit with this
property? Think about the
power transferred from an
infinitesimal source to an
infinitesimal receiver.
• We have
total power leaving s to r =
total power arriving at r
from s
• Also:
Power arriving at r is
proportional to:
– solid angle subtended by s at r
(because if s looked bigger
from r, there’d be more)
– foreshortened area of r (because a bigger r will collect more power)
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Radiance
• All this suggests that the light
transferred from source to
receiver should be measured as:
Radiant power per unit
foreshortened area per unit
solid angle
• This is radiance
• Units: watts per square meter per steradian (W·m⁻²·sr⁻¹)
• Usually written as L(x, θ, φ)
• Crucial property:
In a vacuum, radiance leaving p
in the direction of q is the same
as radiance arriving at q from p
– which was how we got to
the unit
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Radiance is constant along straight lines
• Power 1->2, leaving 1: L(x1->x2) (cos θ1 dA1) (cos θ2 dA2 / r²)
• Power 1->2, arriving at 2: L(x2<-x1) (cos θ2 dA2) (cos θ1 dA1 / r²)
• But these must be the same, so the two radiances are equal
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Irradiance
• How much light is arriving at a
surface?
• Sensible unit is Irradiance
• Incident power per unit area not
foreshortened
• This is a function of incoming
angle.
• A surface experiencing radiance L(x, θ, φ) coming in through solid angle dω experiences irradiance dE = L(x, θ, φ) cos θ dω
• Crucial property:
Total power arriving at the
surface is given by adding
irradiance over all incoming
angles --- this is why it’s a
natural unit
• Total power (per unit area) is the integral of L(x, θ, φ) cos θ sin θ dθ dφ over the incoming hemisphere (θ from 0 to π/2, φ from 0 to 2π)
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
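A quick numerical check of the hemisphere integral above (a sketch, not from the slides): for constant incoming radiance, the irradiance integrates to π times the radiance.

import numpy as np

# A surface seeing constant radiance L_in from every direction of the hemisphere
# receives E = integral of L_in cos(theta) sin(theta) dtheta dphi = pi * L_in.
L_in = 2.0
n = 1000
theta = (np.arange(n) + 0.5) * (np.pi / 2) / n   # midpoint rule over [0, pi/2]
dtheta = (np.pi / 2) / n
# The phi integral contributes a factor of 2*pi (integrand is independent of phi).
E = 2 * np.pi * np.sum(L_in * np.cos(theta) * np.sin(theta)) * dtheta
print(E, np.pi * L_in)   # ~6.283 vs 6.283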
Light at surfaces
• Many effects when light strikes
a surface -- could be:
– absorbed
– transmitted
• skin
– reflected
• mirror
– scattered
• milk
– travel along the surface and
leave at some other point
• sweaty skin
• Assume that
– surfaces don’t fluoresce
• e.g. scorpions, washing
powder
– surfaces don’t emit light (i.e.
are cool)
– all the light leaving a point is
due to that arriving at that
point
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
The BRDF
• Assuming that
– surfaces don’t fluoresce
– surfaces don’t emit light (i.e.
are cool)
– all the light leaving a point is
due to that arriving at that
point
• Can model this situation with
the Bidirectional Reflectance
Distribution Function (BRDF)
• the ratio of the radiance in the outgoing direction to the incident irradiance:
  ρ_bd(x, θ_o, φ_o, θ_i, φ_i) = L_o(x, θ_o, φ_o) / (L_i(x, θ_i, φ_i) cos θ_i dω)
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
BRDF
• Units: inverse steradians (sr⁻¹)
• Symmetric in incoming and
outgoing directions - this is the
Helmholtz reciprocity principle
• Radiance leaving a surface in a particular direction:
  – add contributions from every incoming direction:
    L_o(x, θ_o, φ_o) = ∫ ρ_bd(x, θ_o, φ_o, θ_i, φ_i) L_i(x, θ_i, φ_i) cos θ_i dω
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
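A small consistency check of the reflectance equation above (a sketch; the albedo and incoming radiance are made up): plugging the constant Lambertian BRDF ρ/π into the integral under uniform illumination gives an outgoing radiance equal to albedo times incoming radiance.

import numpy as np

albedo, L_in = 0.6, 1.0
n = 1000
theta_i = (np.arange(n) + 0.5) * (np.pi / 2) / n   # midpoint rule over [0, pi/2]
dtheta = (np.pi / 2) / n
brdf = albedo / np.pi                              # constant for a Lambertian surface
# The phi_i integral contributes a factor of 2*pi (integrand independent of phi_i).
L_out = 2 * np.pi * np.sum(brdf * L_in * np.cos(theta_i) * np.sin(theta_i)) * dtheta
print(L_out, albedo * L_in)                        # ~0.6 vs 0.6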
Suppressing Angles - Radiosity
• In many situations, we do not
really need angle coordinates
– e.g. cotton cloth, where the
reflected light is not dependent
on angle
• Appropriate radiometric unit is
radiosity
• Radiosity from radiance?
– sum radiance leaving surface
over all exit directions,
multiplying by a cosine
because this is per unit area not
per unit foreshortened area
– total power leaving a point on
the surface, per unit area on the
surface (W·m⁻²)
– note that this is independent of
the direction
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Radiosity
• Important relationship:
  – radiosity of a surface whose radiance L is independent of angle (e.g. that cotton cloth): B(x) = π L(x)
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Suppressing the angles in the BRDF
• BRDF is a very general notion
– some surfaces need it (underside of a CD; tiger eye; etc)
– very hard to measure
• illuminate from one direction, view from another, repeat
– very unstable
• minor surface damage can change the BRDF
• e.g. ridges of oil left by contact with the skin can act as lenses
• for many surfaces, light leaving the surface is largely
independent of exit angle
– surface roughness is one source of this property
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Directional hemispheric reflectance
• Directional hemispheric
reflectance:
– the fraction of the incident
irradiance in a given direction
that is reflected by the surface
(whatever the direction of
reflection)
– unitless, range is 0-1
• Note that DHR varies with
incoming direction
– e.g. a ridged surface, where left-facing ridges are absorbent and right-facing ridges reflect.
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Lambertian surfaces and albedo
• For some surfaces, the DHR is
independent of illumination
direction too
– cotton cloth, carpets, matte
paper, matte paints, etc.
• For such surfaces, radiance
leaving the surface is
independent of angle
• Called Lambertian surfaces
(same Lambert) or ideal diffuse
surfaces
• Use radiosity as a unit to
describe light leaving the
surface
• DHR is often called diffuse
reflectance, or albedo
• for a Lambertian surface,
BRDF is independent of angle,
too.
• Useful fact: for a Lambertian surface with diffuse reflectance (albedo) ρ_d, the BRDF is constant: ρ_brdf = ρ_d / π
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Specular surfaces
• Another important class of surfaces is specular, or mirror-like.
– radiation arriving along a
direction leaves along the
specular direction
– reflect about normal
– some fraction is absorbed,
some reflected
– on real surfaces, energy
usually goes into a lobe of
directions
– can write a BRDF, but requires
the use of funny functions
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Phong’s model
• There are very few cases where
the exact shape of the specular
lobe matters.
• Typically:
– very, very small --- mirror
– small -- blurry mirror
– bigger -- see only light sources
as “specularities”
– very big -- faint specularities
• Phong’s model
– reflected energy falls off with cosⁿ(θ), where θ is the angle between the specular direction and the viewing direction
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Lambertian + specular
• Widespread model
– all surfaces are Lambertian plus specular component
• Advantages
– easy to manipulate
– very often quite close to true
• Disadvantages
– some surfaces are not
• e.g. underside of CD’s, feathers of many birds, blue spots on
many marine crustaceans and fish, most rough surfaces, oil
films (skin!), wet surfaces
– Generally, there is very little advantage in modelling the behaviour of light at a surface in more detail -- it is quite difficult to understand the behaviour of L+S surfaces
Computer Vision - A Modern Approach
Set: Radiometry
Slides by D.A. Forsyth
Diffuse reflection
• Light is reflected equally in all directions
• Dull, matte surfaces like chalk or latex paint
• Microfacets scatter incoming light randomly
• Effect is that light is reflected equally in all
directions
• Brightness of the surface depends on the incidence of illumination
[Figure: a patch facing the light source appears brighter; an oblique patch appears darker]
Slide by Svetlana Lazebnik
Diffuse reflection: Lambert’s law
[Diagram: unit normal N, source vector S, angle θ between them]

B = ρ (N · S) = ρ |S| cos θ
B: radiosity (total power leaving the
surface per unit area)
ρ: albedo (fraction of incident
irradiance reflected by the surface)
N: unit normal
S: source vector (magnitude
proportional to intensity of the source)
Slide by Svetlana Lazebnik
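A minimal sketch of Lambert's law at a single point (the normal, source vector, and albedo below are made-up values):

import numpy as np

def lambert_radiosity(albedo, normal, source):
    # B = rho * (N . S), clamped at zero for points facing away from the source
    n = normal / np.linalg.norm(normal)            # make sure N is a unit vector
    return albedo * max(float(np.dot(n, source)), 0.0)

N = np.array([0.0, 0.0, 1.0])                      # surface facing straight up
S = np.array([0.5, 0.0, 0.866])                    # unit source vector, 30 deg off the normal
print(lambert_radiosity(0.8, N, S))                # ~0.8 * cos(30 deg) ~ 0.69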
Specular reflection
• Radiation arriving along a source
direction leaves along the
specular direction (source
direction reflected about normal)
• Some fraction is absorbed, some
reflected
• On real surfaces, energy usually
goes into a lobe of directions
• Phong model: reflected energy falls off with cosⁿ(θ)
• Lambertian + specular model: sum of the diffuse and specular terms
Slide by Svetlana Lazebnik
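A sketch of the Lambertian + specular (Phong) combination for one point, one source, and one viewer; all constants and vectors are illustrative, not from the slides:

import numpy as np

def shade(normal, light_dir, view_dir, albedo=0.7, k_spec=0.3, n_exp=20):
    N = normal / np.linalg.norm(normal)
    L = light_dir / np.linalg.norm(light_dir)
    V = view_dir / np.linalg.norm(view_dir)
    R = 2.0 * np.dot(N, L) * N - L                               # L reflected about the normal
    diffuse = albedo * max(float(np.dot(N, L)), 0.0)             # Lambertian term
    specular = k_spec * max(float(np.dot(R, V)), 0.0) ** n_exp   # Phong lobe
    return diffuse + specular

# Viewer placed in the mirror direction of the light, so the specular lobe peaks
print(shade(np.array([0, 0, 1.0]), np.array([0, 1, 1.0]), np.array([0, -1, 1.0])))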
Specular reflection
Moving the light source
Changing the exponent
Slide by Svetlana Lazebnik
Fundamental radiometric relation

E = L · (π/4) · (d/f)² · cos⁴ α
• Application:
• S. B. Kang and R. Weiss, Can we calibrate a camera using
an image of a flat, textureless Lambertian surface? ECCV
2000.
Slide by Svetlana Lazebnik
Photometric stereo (shape from
shading)
• Can we reconstruct the shape of an
object based on shading cues?
Luca della Robbia,
Cantoria, 1438
Slide by Svetlana Lazebnik
Photometric stereo
Assume:
• A Lambertian object
• A local shading model (each point on a surface receives
light only from sources visible at that point)
• A set of known light source directions
• A set of pictures of an object, obtained in exactly the
same camera/object configuration but using different
sources
• Orthographic projection
Goal: reconstruct object shape and albedo
[Diagram: the same object imaged under light sources S1, S2, ..., Sn; the surface shape is unknown]
F&P 2nd ed., sec. 2.2.4
Surface model: Monge patch
F&P 2nd ed., sec. 2.2.4
Image model
• Known: source vectors Sj and pixel values
Ij(x,y)
• Unknown: normal N(x,y) and albedo ρ(x,y)
• Assume that the response function of the
camera is a linear scaling by a factor of k
• Lambert’s law:
I_j(x, y) = k B(x, y)
          = k ρ(x, y) (N(x, y) · S_j)
          = ρ(x, y) N(x, y) · (k S_j)
          = g(x, y) · V_j,   where g(x, y) = ρ(x, y) N(x, y) and V_j = k S_j
F&P 2nd ed., sec. 2.2.4
Least squares problem
• For each pixel, set up a linear system:
  [ I_1(x, y) ]   [ V_1^T ]
  [ I_2(x, y) ] = [ V_2^T ]  g(x, y)
  [    ...    ]   [  ...  ]
  [ I_n(x, y) ]   [ V_n^T ]

    (n × 1)        (n × 3)   (3 × 1)
     known          known    unknown
• Obtain the least-squares solution for g(x, y) (which we defined as ρ(x, y) N(x, y))
• Since N(x, y) is the unit normal, ρ(x, y) is given by the magnitude of g(x, y)
• Finally, N(x, y) = g(x, y) / ρ(x, y)
F&P 2nd ed., sec. 2.2.4
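A per-pixel sketch of this least-squares step (the source matrix V and intensity vector I below are made-up stand-ins for the calibrated sources and the image stack at one pixel; recover_normal_albedo is a hypothetical helper name):

import numpy as np

def recover_normal_albedo(V, I):
    # V: (n x 3) rows are the scaled source vectors V_j = k * S_j (known)
    # I: (n,)   pixel values I_j(x, y) at one pixel across the n images (known)
    g, *_ = np.linalg.lstsq(V, I, rcond=None)   # least-squares solution of V g = I
    albedo = np.linalg.norm(g)                  # rho(x, y) = |g(x, y)|
    normal = g / albedo                         # N(x, y) = g(x, y) / rho(x, y)
    return normal, albedo

# Synthetic check: three sources, one pixel
V = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])
n_true = np.array([0.2, 0.1, 0.97])
n_true /= np.linalg.norm(n_true)
I = V @ (0.8 * n_true)                          # albedo 0.8
print(recover_normal_albedo(V, I))              # recovers n_true and 0.8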
Example
Recovered albedo
Recovered normal field
F&P 2nd ed., sec. 2.2.4
Recovering a surface from normals
Recall the surface is written as (x, y, f(x, y)). This means the normal has the form:

  N(x, y) = (1 / √(f_x² + f_y² + 1)) · [f_x, f_y, 1]^T

If we write the estimated vector g as

  g(x, y) = [g_1(x, y), g_2(x, y), g_3(x, y)]^T

then we obtain values for the partial derivatives of the surface:

  f_x(x, y) = g_1(x, y) / g_3(x, y)
  f_y(x, y) = g_2(x, y) / g_3(x, y)
F&P 2nd ed., sec. 2.2.4
Recovering a surface from normals
Integrability: for the surface f to exist, the mixed second partial derivatives must be equal:

  ∂(g_1(x, y) / g_3(x, y)) / ∂y = ∂(g_2(x, y) / g_3(x, y)) / ∂x

(in practice, they should at least be similar)

We can now recover the surface height at any point by integration along some path, e.g.

  f(x, y) = ∫_0^x f_x(s, y) ds + ∫_0^y f_y(x, t) dt + C

(for robustness, can take integrals over many different paths and average the results)
F&P 2nd ed., sec. 2.2.4
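A minimal sketch of this integration step over one family of paths (the gradient field below is synthetic; a real implementation would average many paths or solve a least-squares problem for robustness):

import numpy as np

def integrate_normals(fx, fy):
    # fx, fy: estimated partial derivatives f_x and f_y on a regular grid
    h, w = fx.shape
    f = np.zeros((h, w))
    f[0, :] = np.cumsum(fx[0, :])                        # integrate f_x along row 0
    f[1:, :] = f[0, :] + np.cumsum(fy[1:, :], axis=0)    # then f_y down each column
    return f

# Synthetic gradient field (made up for illustration)
ys, xs = np.mgrid[0:64, 0:64]
fx, fy = 0.01 * xs, 0.02 * ys
height = integrate_normals(fx, fy)
print(height.shape)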
Surface recovered by integration
F&P 2nd ed., sec. 2.2.4
Finding the direction of the light
source
I(x,y) = N(x,y) ·S(x,y) + A
Full 3D case:

  [ N_x(x_1, y_1)  N_y(x_1, y_1)  N_z(x_1, y_1)  1 ]  [ S_x ]     [ I(x_1, y_1) ]
  [ N_x(x_2, y_2)  N_y(x_2, y_2)  N_z(x_2, y_2)  1 ]  [ S_y ]  =  [ I(x_2, y_2) ]
  [      ...             ...            ...        ]  [ S_z ]     [     ...     ]
  [ N_x(x_n, y_n)  N_y(x_n, y_n)  N_z(x_n, y_n)  1 ]  [  A  ]     [ I(x_n, y_n) ]

For points on the occluding contour (where the surface normal lies in the image plane, so N_z = 0):

  [ N_x(x_1, y_1)  N_y(x_1, y_1)  1 ]  [ S_x ]     [ I(x_1, y_1) ]
  [ N_x(x_2, y_2)  N_y(x_2, y_2)  1 ]  [ S_y ]  =  [ I(x_2, y_2) ]
  [      ...             ...        ]  [  A  ]     [     ...     ]
  [ N_x(x_n, y_n)  N_y(x_n, y_n)  1 ]              [ I(x_n, y_n) ]
P. Nillius and J.-O. Eklundh, “Automatic estimation of the projected light source
direction,” CVPR 2001
Slide by Svetlana Lazebnik
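A small sketch of the least-squares estimate above (the normals and intensities are synthetic; estimate_light is a hypothetical helper name):

import numpy as np

def estimate_light(normals, intensities):
    # normals: (n x 3) array of N(x_i, y_i); intensities: (n,) array of I(x_i, y_i)
    M = np.hstack([normals, np.ones((normals.shape[0], 1))])   # columns [N_x N_y N_z 1]
    params, *_ = np.linalg.lstsq(M, intensities, rcond=None)
    return params[:3], params[3]                               # S, A

rng = np.random.default_rng(0)
N = rng.normal(size=(100, 3))
N /= np.linalg.norm(N, axis=1, keepdims=True)
S_true, A_true = np.array([0.3, 0.2, 0.9]), 0.1
I = N @ S_true + A_true                                        # I = N·S + A
print(estimate_light(N, I))                                    # ~ S_true, ~ A_true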
Finding the direction of the light
source
P. Nillius and J.-O. Eklundh, “Automatic estimation of the projected light source
direction,” CVPR 2001
Slide by Svetlana Lazebnik
Application: Detecting composite
photos
Real photo
Fake photo
M. K. Johnson and H. Farid, Exposing Digital Forgeries by Detecting Inconsistencies
in Lighting, ACM Multimedia and Security Workshop, 2005.
Slide by Svetlana Lazebnik
From light rays to pixel values
X = E · Δt                                (sensor exposure)

E = L · (π/4) · (d/f)² · cos⁴ α

Z = f(E · Δt)                             (pixel value)
• Camera response function: the mapping f from
irradiance to pixel values
• Useful if we want to estimate material properties
• Enables us to create high dynamic range images
• For more info: P. E. Debevec and J. Malik, Recovering High
Dynamic Range Radiance Maps from Photographs,
SIGGRAPH 97
Slide by Svetlana Lazebnik
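A toy sketch of the idea (the gamma-style response below is hypothetical, not a real camera's f): simulate Z = f(E·Δt), then invert f and divide by Δt to recover relative irradiance, which is the basic operation behind HDR radiance-map recovery.

import numpy as np

def response(x, gamma=2.2):
    # Hypothetical camera response f: exposure -> pixel value (gamma-style curve)
    return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)

def inverse_response(z, gamma=2.2):
    # f^{-1}: pixel value -> exposure X = E * dt (up to clipping)
    return z ** gamma

E, dt = 0.3, 0.5
Z = response(E * dt)                 # simulated pixel value
print(inverse_response(Z) / dt)      # ~0.3: the irradiance, recovered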
Limitations
• Orthographic camera model
• Simplistic reflectance and lighting model
• No shadows
• No interreflections
• No missing data
• Integration is tricky
Slide by Svetlana Lazebnik
More reading & thought problems
(reconstructing from a single image using priors)
Shape, Illumination, and Reflectance from Shading
Jonathan T. Barron, Jitendra Malik
Tech Report, 2013
(implementing this one is the extra credit)
Recovering High Dynamic Range Radiance Maps from Photographs.
Paul E. Debevec and Jitendra Malik.
SIGGRAPH 1997
(people can perceive reflectance)
Surface reflectance estimation and natural illumination statistics.
R.O. Dror, E.H. Adelson, and A.S. Willsky.
Workshop on Statistical and Computational Theories of Vision 2001