CSE 473/573
Computer Vision and Image
Processing (CVIP)
Ifeoma Nwogu
inwogu@buffalo.edu
Lecture 6 – Color
Schedule
• Last class
– Image formation – photometric properties
• Today
– Color
• Readings for today: Forsyth and Ponce Chp. 3
What is color?
• Color is a psychological
property of our visual
experiences when we look at
objects and lights,
not a physical property of
those objects or lights
(S. Palmer, Vision Science:
Photons to Phenomenology)
• Color is the result of
interaction between physical
light in the environment and
our visual system
Color
• Human encoding of color
• Physics of color
• Color representation
• White balancing
• Applications in computer vision
Overview of Color
• Human encoding of color
• Physics of color
• Color representation
• White balancing
• Applications in computer vision
The Eye
The human eye is a camera! But what’s the “film”?
• Lens - changes shape by using ciliary muscles (to focus on objects at
different distances)
• Pupil - the hole (aperture) whose size is controlled by the iris
• Iris - colored annulus with radial muscles
• Retina - photoreceptor cells (cones and rods) – “the film”
Slide by Steve Seitz
The Retina
Light sensitive receptors
• Cones
– responsible for color vision
– less sensitive
– operate in high light
• Rods
– responsible for gray-scale vision (intensity-based perception)
– highly sensitive
– operate at night
• Rods and cones are nonuniformly distributed on the retina
– Fovea: small region (1 or 2°) at the center of the visual field containing the highest density of cones – and no rods
Slide by Steve Seitz
Color matching
Color matching experiment 1
[Figure: a test light is matched by adjusting the amounts of three primaries p1, p2, p3]
The primary color amounts needed for a match
Source: W. Freeman
Color matching experiment 2
[Figure: three primaries p1, p2, p3 and a test light that cannot be matched by adding primaries alone]
We say a “negative” amount of p2 was needed to make the match, because we added it to the test color’s side.
The primary color amounts needed for a match
Source: W. Freeman
Trichromacy
• In color matching experiments, most people can
match any given light with three primaries
– Primaries must be independent
• For the same light and same primaries, most
people select the same weights
– Exception: color blindness
• Trichromatic color theory
– Three numbers seem to be sufficient for encoding
color
– Dates back to 18th century (Thomas Young)
Grassmann’s Laws
• Color matching appears to be linear
• If two test lights can be matched with the same set of weights, then they match each other:
– Suppose A = u1 P1 + u2 P2 + u3 P3 and B = u1 P1 + u2 P2 + u3 P3. Then A = B.
• If we mix two test lights, then mixing the matches will match the result:
– Suppose A = u1 P1 + u2 P2 + u3 P3 and B = v1 P1 + v2 P2 + v3 P3. Then A + B = (u1+v1) P1 + (u2+v2) P2 + (u3+v3) P3.
• If we scale the test light, then the matches get scaled by the same amount:
– Suppose A = u1 P1 + u2 P2 + u3 P3. Then kA = (ku1) P1 + (ku2) P2 + (ku3) P3.
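A small made-up numerical illustration of the linearity above: if test light A matches weights (1, 2, 3) on (P1, P2, P3) and test light B matches weights (0.5, 0.5, 1), then by additivity the mixture A + B matches (1.5, 2.5, 4), and by scaling 2A matches (2, 4, 6).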
Physiology of color vision
Three kinds of cones:
[Figure: relative absorbance (%) of the S, M, and L cones vs. wavelength (nm); the curves peak near 440, 530, and 560 nm]
• Ratio of L to M to S cones:
approx. 10:5:1
• Almost no S cones in the center
of the fovea
S, M and L cones are respectively sensitive to short-, medium-, and long-wavelength light
© Stephen E. Palmer, 2002
Interaction of light and surfaces
• Reflected color is the result of
interaction of light source
spectrum with surface
reflectance
Interaction of light and surfaces
What is the observed color of any surface under
monochromatic light?
Olafur Eliasson, Room for one color
Overview of Color
• Human encoding of color
• Physics of color
• Color representation
• White balancing
The physics of light
Any source of light can be completely described physically by its spectrum: the amount of energy emitted (per unit time) at each wavelength in the visible range (400–700 nm).
[Figure: relative spectral power vs. wavelength]
© Stephen E. Palmer, 2002
Electromagnetic spectrum
Human Luminance Sensitivity Function
Spectra of light sources
Some examples of the spectra of light sources
[Figure: relative power (# photons) vs. wavelength (400–700 nm) for four sources: A. Ruby Laser, B. Gallium Phosphide Crystal, C. Tungsten Lightbulb, D. Normal Daylight]
© Stephen E. Palmer, 2002
Spectra of light sources
Source: Popular Mechanics
Reflectance spectra of surfaces
Some examples of the reflectance spectra of surfaces
[Figure: % light reflected vs. wavelength (400–700 nm) for red, yellow, blue, and purple surfaces]
© Stephen E. Palmer, 2002
Spectra of some real-world
surfaces
Color
• Human encoding of color
• Physics of color
• Color representation
• White balancing
• Applications in computer vision
Linear color spaces
• Defined by a choice of three primaries
• The coordinates of a color are given by the
weights of the primaries used to match it
mixing two lights produces
colors that lie along a straight
line in color space
mixing three lights produces
colors that lie within the triangle
they define in color space
Linear color spaces
• How to compute the weights of the primaries
to match any spectral signal?
Given: a choice of three primaries and a target color signal
Find: the weights of the primaries needed to match the color signal
[Figure: the target signal is matched by a mixture of the primaries p1, p2, p3: A = u1 P1 + u2 P2 + u3 P3]
Color matching functions
• How to compute the weights of the primaries
to match any spectral signal?
• Let c(λ) be one of the matching functions, and
let t(λ) be the spectrum of the signal. Then the
weight of the corresponding primary needed to
match t is
w = ∫ c(λ) t(λ) dλ
[Figure: matching function c(λ) and signal to be matched, t(λ), plotted against wavelength λ]
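A minimal numerical sketch of this integral (the spectra below are made up; real data would come from measured curves): sample c(λ) and t(λ) at discrete wavelengths and approximate the integral with a sum.

    import numpy as np

    # Hypothetical 5 nm sampling of the visible range.
    lam = np.arange(400, 701, 5, dtype=float)

    # Made-up matching function c(lambda) and test spectrum t(lambda).
    c = np.exp(-0.5 * ((lam - 550) / 40.0) ** 2)
    t = np.exp(-0.5 * ((lam - 580) / 60.0) ** 2)

    # w = integral of c(lambda) * t(lambda) d(lambda), approximated as a sum.
    dlam = lam[1] - lam[0]
    w = np.sum(c * t) * dlam
    print(w)   # weight on the primary associated with c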
Computing color matches
•
How do we compute the weights
that will yield a perceptual match
for any test light using a given set
of primaries?
1. Select primaries
2. Estimate their color matching
functions: observer matches series
of monochromatic lights, one at
each wavelength
3. Multiply matching functions and test light

    C = [ c1(λ1) … c1(λN)
          c2(λ1) … c2(λN)
          c3(λ1) … c3(λN) ]
Computing color matches
Color matching functions for a particular
set of primaries
p1 = 645.2 nm
p2 = 525.3 nm
p3 = 444.4 nm
Rows of matrix C
Foundations of Vision, by Brian Wandell, Sinauer Assoc., 1995
Slide credit: W. Freeman
Computing color matches
A monochromatic light at wavelength λi matches the primary weights c1(λi), c2(λi), c3(λi).
Now we have matching functions for all monochromatic light sources, so we know how to match a unit of each wavelength.
An arbitrary new spectral signal is a linear combination of the monochromatic sources:

    t = [ t(λ1) … t(λN) ]ᵀ
Computing color matches
So, given any set of primaries and their associated matching
functions (C), we can compute weights (w) needed on each
primary to give a perceptual match to any test light t (spectral
signal).
Fig from B. Wandell, 1996
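A minimal numpy sketch of this step, assuming the matrix form w = C t, where the rows of C are the three matching functions sampled at N wavelengths (random placeholder data stands in for measured curves):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 61                    # e.g., 400-700 nm in 5 nm steps
    C = rng.random((3, N))    # rows: c1, c2, c3 sampled at the N wavelengths
    t = rng.random(N)         # spectrum of the test light at the same wavelengths

    w = C @ t                 # weights (u1, u2, u3) on the three primaries
    print(w)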
Computing color matches
• Why is computing the color
match for any color signal for a
given set of primaries useful?
– Want the colors in the world, on a
monitor, and in a print format to all
look the same.
– Want to paint a carton of Kodak film
with the Kodak yellow color.
– Want to match skin color of a
person in a photograph printed on
an ink jet printer to their true skin
color.
Adapted from W. Freeman
Image credit: pbs.org
Linear color spaces: RGB space
• Primaries are monochromatic lights (for monitors, they
correspond to the three types of phosphors)
• Subtractive matching required for some wavelengths
RGB matching functions
RGB primaries
Linear color spaces: RGB space
• Single wavelength primaries
• Good for devices (e.g., phosphors for
monitor), but not for perception
Linear color spaces: CIE XYZ
• Established by the Commission Internationale de l’Éclairage (CIE), 1931
• Usually projected for 2D visualization as:
(x,y) = (X/(X+Y+Z), Y/(X+Y+Z))
• The Y parameter corresponds to brightness or luminance
of a color
The volume of all visible colors in CIE XYZ is a cone with its vertex at the origin. Intersecting the cone with the plane X+Y+Z = 1 gives the 2D CIE xy space.
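A hedged sketch of going from RGB to XYZ and then to the 2D (x, y) projection above. The 3x3 matrix is the commonly quoted linear-sRGB-to-XYZ (D65) matrix, included here only as an illustration:

    import numpy as np

    # Approximate linear sRGB -> CIE XYZ matrix (D65 white point).
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])

    rgb = np.array([0.2, 0.5, 0.8])          # a linear (not gamma-encoded) RGB color
    X, Y, Z = M @ rgb
    x, y = X / (X + Y + Z), Y / (X + Y + Z)  # 2D chromaticity coordinates
    print(x, y, Y)                           # (x, y) plus luminance Y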
Linear color spaces: CIE XYZ
Matching functions
http://en.wikipedia.org/wiki/CIE_1931_color_space
Nonlinear color spaces: HSV
• Perceptually meaningful dimensions:
Hue, Saturation, Value (Intensity)
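For illustration, Python’s standard library provides an RGB-to-HSV conversion; a quick sketch (all values in [0, 1]):

    import colorsys

    r, g, b = 0.8, 0.3, 0.3                # a reddish color
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(h, s, v)                         # hue near 0, moderately saturated, bright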
Distances in color space
• Since it is hard to reproduce color exactly, it is
important to know whether a color difference would
be noticeable to a human viewer.
– therefore, compare the significance of small color
differences to several people
– Humans can determine just noticeable differences, which
when plotted form boundaries of indistinguishable colors.
– In CIE xy, the properties of these boundary ellipses depend on where in the space they occur
– Euclidean distance is therefore a poor indicator of the
significance of color differences.
• ellipses would be more circular in a more uniform space
• distances in coordinate space would better represent the
difference between 2 colors
Distances in color space
• Are distances between points in a color space
perceptually meaningful?
• Not necessarily: CIE XYZ is not a uniform color space, so the magnitude of differences in coordinates is a poor indicator of color “distance”.
Uniform color spaces
• Unfortunately, differences in x,y coordinates do not reflect
perceptual color differences
• CIE u’v’ is a projective transform of x,y to make the ellipses
more uniform
MacAdam ellipses: just-noticeable differences in color
Uniform color spaces
• Attempt to correct this limitation by remapping the color space so that just-noticeable differences are contained by circles, making distances more perceptually meaningful.
CIE XYZ
• Examples:
– CIE u’v’
– CIE Lab
CIE Lab is now the most widely used uniform color space
CIE u’v’
Color
• Human encoding of color
• Physics of color
• Color representation
• White balancing
• Applications in computer vision
White balance
• When looking at a picture on screen or print, our eyes are
adapted to the illuminant of the room, not to that of the scene
in the picture
• When the white balance is not correct, the picture will have an
unnatural color “cast”
incorrect white balance
correct white balance
http://www.cambridgeincolour.com/tutorials/white-balance.htm
White balance
• Related to the concept of color temperature
– a way of measuring the quality of a light source
– measured in kelvins (K)
• WB is based on the ratio of blue light to red light
– green light is ignored
• A light with a higher Kelvin value (higher color temperature) has more blue light than one with a lower value (lower color temperature)
Light Sources              Color Temperature in K
Clear Blue Sky             10,000 to 15,000
Overcast Sky               6,000 to 8,000
Noon Sun and Clear Sky     6,500
Sunlight Average           5,400 to 6,000
Electronic Flash           5,400 to 6,000
Household Lighting         2,500 to 3,000
200-watt Bulb              2,980
100-watt Bulb              2,900
75-watt Bulb               2,820
60-watt Bulb               2,800
40-watt Bulb               2,650
Candle Flame               1,200 to 1,500
White balance
• Film cameras:
– Different types of film or different filters for
different illumination conditions
• Digital cameras:
– Automatic white balance
– White balance settings corresponding to
several common illuminants
– Custom white balance using a reference
object
http://www.cambridgeincolour.com/tutorials/white-balance.htm
White balance
• Von Kries adaptation
– Multiply each channel by a gain factor
• Gray card
– Take a picture of a neutral object (white or gray)
– Deduce the weight of each channel
– If the object is recorded as (rw, gw, bw), use weights 1/rw, 1/gw, 1/bw (see the sketch below)
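A minimal sketch of von Kries scaling with a gray card, assuming a float RGB image in [0, 1] and a recorded reference color (rw, gw, bw):

    import numpy as np

    def white_balance_gray_card(img, rw, gw, bw):
        # Multiply each channel by a gain factor so the reference becomes neutral.
        gains = np.array([1.0 / rw, 1.0 / gw, 1.0 / bw])
        return np.clip(img * gains, 0.0, 1.0)

    # Example with a synthetic image and made-up reference values.
    img = np.random.rand(4, 4, 3)
    balanced = white_balance_gray_card(img, rw=0.9, gw=0.7, bw=0.5)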
White balance
• Without gray cards: we need to “guess” which pixels
correspond to white objects
• Gray world assumption
– The image average rave, gave, bave is gray
– Use weights 1/rave, 1/gave, 1/bave
• Brightest pixel assumption
– Highlights usually have the color of the light source
– Use weights inversely proportional to the values of the
brightest pixels
• Gamut mapping
– Gamut: convex hull of all pixel colors in an image
– Find the transformation that matches the gamut of the
image to the gamut of a “typical” image under white
light
• Use image statistics, learning techniques
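Minimal sketches of the gray world and brightest pixel heuristics listed above (again assuming a float RGB image; the gamut-mapping and learning-based approaches are not shown):

    import numpy as np

    def gray_world(img):
        # Scale each channel so the image average becomes gray.
        avg = img.reshape(-1, 3).mean(axis=0)           # (r_ave, g_ave, b_ave)
        return np.clip(img * (avg.mean() / avg), 0.0, 1.0)

    def brightest_pixel(img):
        # Scale channels by the color of the brightest pixel (assumed highlight).
        idx = img.sum(axis=2).argmax()
        bright = img.reshape(-1, 3)[idx]
        return np.clip(img / bright, 0.0, 1.0)

    img = np.random.rand(4, 4, 3)
    out1, out2 = gray_world(img), brightest_pixel(img)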
White balance by recognition
Key idea:
• for each of the semantic classes present in the image,
– compute the illuminant that transforms the pixels assigned to that class so that the average color of that class matches the average color of the same class in a database of “typical” images
– use learning techniques to represent the distribution of “typical” images
J. Van de Weijer, C. Schmid and J. Verbeek, Using High-Level Visual Information for
Color Constancy, ICCV 2007.
Mixed illumination
• When there are several types of illuminants in
the scene, different reference points will yield
different results
Reference: moon
Reference: stone
http://www.cambridgeincolour.com/tutorials/white-balance.htm
Spatially varying white balance
Input
Alpha map
Output
E. Hsu, T. Mertens, S. Paris, S. Avidan, and F. Durand, “Light Mixture Estimation for
Spatially Varying White Balance,” SIGGRAPH 2008
Given an input image (a), dominant material colors are extracted using a voting technique: pixels corresponding to white material colors (b) and to brown material colors (c). The local relative contributions of each light are estimated (d). The voting scheme only labels reliable pixels, so unreliable pixels (shown in blue) are inferred using an interpolation scheme (e). The mixture information can then be used to compute a visually pleasing white balance (f).
Assumptions from that paper
• There are two illuminant types present in the scene and their
colors are known beforehand.
• Scenes are dominated by only a small number of material
colors. In other words, the set of reflectance spectra is
sparse.
• The interaction of light can be described using RGB channels
only, instead of requiring full spectra.
• Surfaces are Lambertian and non-fluorescent, so image color
is the product of illumination and reflectance in each
channel.
• Color bleeding due to indirect illumination can be ignored.
Lesson: there are still many open problems in vision
that are only partially solved
Color
• Human encoding of color
• Physics of color
• Color representation
• White balancing
• Applications in computer vision
Uses of color in computer vision
Color histograms for image matching
Swain and Ballard, Color Indexing, IJCV 1991.
Uses of color in computer vision
Image segmentation and retrieval
C. Carson, S. Belongie, H. Greenspan, and J. Malik, Blobworld: Image segmentation
using Expectation-Maximization and its application to image querying, ICVIS 1999.
Uses of color in computer vision
Color-based image retrieval
• Given collection (database) of images:
– Extract and store one color histogram per image
• Given new query image:
– Extract its color histogram
– For each database image:
• Compute intersection between query histogram and
database histogram
– Sort intersection values (highest score = most similar)
– Rank database items relative to query based on this sorted
order
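A small sketch of this retrieval loop using histogram intersection as the similarity score (toy random histograms stand in for real image color histograms):

    import numpy as np

    def intersection(h1, h2):
        # Histogram intersection: sum of bin-wise minima (assumes normalized histograms).
        return np.minimum(h1, h2).sum()

    def rank_database(query_hist, db_hists):
        scores = [intersection(query_hist, h) for h in db_hists]
        return np.argsort(scores)[::-1]      # database indices, most similar first

    # Toy example: five random 8x8x8 color histograms, flattened and normalized.
    rng = np.random.default_rng(1)
    db = [rng.random(8 * 8 * 8) for _ in range(5)]
    db = [h / h.sum() for h in db]
    query = db[2].copy()                     # a query resembling database image 2
    print(rank_database(query, db))          # image 2 should rank first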
Color-based image retrieval
Example database
Color-based image retrieval
Example retrievals
Color-based image retrieval
Example retrievals
Uses of color in computer vision
Skin detection
M. Jones and J. Rehg, Statistical Color Models with Application to Skin Detection,
IJCV 2002.
Uses of color in computer vision
Building appearance models for tracking
D. Ramanan, D. Forsyth, and A. Zisserman. Tracking People by Learning their Appearance. PAMI
2007.
Next class
• Linear filters
• Readings for next lecture:
– Forsyth and Ponce Chp 4.1, 4.2; Szeliski 3.1-3.3
(optional)
• Readings for today:
– Forsyth and Ponce 3; Szeliski 2.3.2 (optional)
Questions