>> Matt Uyttendaele: I'm Matt Uyttendaele. And this morning I'm thrilled to have
Mark Fairchild here, who for the past 12 years has been director of the Munsell
Color Science Laboratory at the Rochester Institute of Technology.
I met Mark through his HDR survey work which he recently did. So we have a
common interest in HDR photography and he got to travel the country for two
semesters it sounds like and has a great collection of HDR images. So if
anybody is in the need of high res, high dynamic range images definitely check
out Mark's Website.
Sounds like he's still out collecting HDR images. He got off the plane yesterday
at 3:30 and headed straight to Mt. Rainier to try to catch the sunset there.
Unfortunately the --
>> Mark Fairchild: The portable kit.
>> Matt Uyttendaele: The colors didn't work out for last night. But I was
impressed with his drive to go out and capture Mt. Rainier last night.
Anyway, with that, please welcome Mark Fairchild.
(Applause)
>> Mark Fairchild: Thanks, Matt, and thanks for inviting me. I've only been here
an hour and it's been great already. So I'm happy to get to visit the area. I have
an aunt that lives in Seattle who has been to my house a number of times, and I
haven't been to her house since I was 11 years old. So she's busy this week, so
I can't even visit her. So I tried, so I get the brownie points.
And I've just been in Vancouver a couple months ago and I'm glad to get back
out to this neck of the woods and have the chance to see Mt. Rainier yesterday.
So the talk I'm going to give is a little bit of a collection of some of the research
we've been doing lately on image perception in the color science lab back at
RIT, and if any of you were at the SID conference in May -- nobody is saying yes,
perfect. Then I don't have to explain. I actually gave this as an invited talk there,
and it was a 20 minute talk. Well, it was supposed to be 15 minutes with five for
questions. I used all 20 and I added only about two slides, and I'm sure I'll fill the
hour, so you can imagine what I did to try to fit it into 20 minutes.
So I want to start with an image, which is always a good thing. And this isn't one
of mine. I'll say that right away. I'm going to dim the lights for now and then
there's going to be a couple places I'm going to actually turn them off for some
demos. So hopefully this is okay to keep everybody awake on a Monday
morning. It's Monday afternoon for me, so I have no excuse at all.
I think this came off of one of NASA's image-of-the-day Websites. Has anybody
seen this image before? Good. I like to show this to students and ask them: how did the
photographer -- this is a single exposure, this isn't a high
dynamic range image, so some of you know I work with high dynamic range
images, there's all sorts of tricks there. This is a single exposure basically
normal straight photograph. How did the photographer get the setting sun and
the lightning and the stars all in one exposure?
>>: It's not the sun.
>> Mark Fairchild: Exactly. I knew there would be clever people here, better
than my undergraduate students. They usually sit there and scratch their heads
for a while. And it's not the sun, it's the moon, it's an overexposed moon setting
and the lightning went off during a time exposure, probably a couple of minutes it
looks like from how far the stars moved.
So the point is that it's not always what you immediately assume that it is. And
that's what a lot of image perception or color perception is, you like to look at the
physics or the chemistry and think I understand color but it's more than meets the
eye. And this, you know, while it may look like a sunset right away it's actually a
moon set, nice overexposed moon.
So what are these high dynamic range images? Well, if we look at the world, we
have this huge range of luminance levels we might encounter, and this is in
candelas per square meter. You could pick your units, but from, what do I
have there, 0.0001 under starlight to over 10,000 under direct sunlight as far
as scenes on the earth. That's not necessarily looking straight at the sun, that's
things illuminated directly by the sun.
And you know, people quote different numbers, but there are 10, 12,
14, 15 orders of magnitude of luminance in the world that we encounter every
day. Nice sunny day like today, we easily encounter it: go for a walk at lunch time,
or sit in your office and look from underneath your desk to out the
window -- that will be a huge range, from down here somewhere to up here
somewhere, in one scene.
And most imaging systems aren't designed to either capture or display that, they
tend to display on the order of a couple hundred to one if it's a really good display.
You'll see these if you're shopping for TVs, you see these contrast ratios. A lot of
those numbers are nonsense, but you know, they're talking about thousands to
one now for these very high contrast displays, not millions to one.
And then, and then on the capture end, there's probably a little bit more range
that's captured in a good camera, but pretty much immediately if say it's a point
and shoot camera or video camera or something like that, it's encoded down to
the dynamic range of a couple hundred to one. You know, if it was 8 bits linear, it
would be 255 to one, if it was a perfectly linear image.
So not a huge range of capture or display typically. And with an HDR image we're trying
to capture all of that. Essentially on the capture end, we're trying to capture
what's out there in the scene faithfully, and then on the display end, since the
displays are typically limited, prints are even more limited by the way, prints
might be 50 to one, a good print, a really good print.
Take that HDR, high dynamic range, information and render it on the display,
and in my case in a perceptually meaningful way, so that it looks like what you
saw when you were out there in the scene, where you were able to adapt to different
regions of the scene. Let's make the image do that and display that perception.
So that's what a lot of my research is about. The question I'm going to start with
today, though, is if we had a display that could do it, and they're coming, I'll show
you a couple examples by the end of the talk, what would you do with it? And
one of the things you can do with it, is you can do some tricks with your color
perception to make the -- make the display perform seemingly better than it's
physically capable of. Perhaps to make the gamut bigger than is possible.
I'm going to bring the lights down a little bit here if I can take control again. It's
going to be a battle. Maybe not.
So measuring color gamuts. A color gamut is essentially the boundary in
lightness and chroma for a given display technology, a boundary of colors that
you can make. And lightness and chroma describe our perceptions. So they're
perceptual terms. They're talking about color appearance. They are not RGB
values, they are not, you know, physical luminance or power per unit wavelength
or anything like that, it's a description of our perceptions and of course scientists
like me try to describe those perceptions, but essentially, for every hue around
this circle we have a boundary in the lightness dimension which is coming out of
the screen here, and the chroma dimension, which in these cylindrical coordinates
goes away from the origin. And there's a boundary. It makes a volume. It's
very important it's a boundary -- a volume. Gamuts are volumes, they're not
areas, they're three dimensional things.
And for a given display there's this boundary, and we talked about color gamut.
And this happens to be an example of an old CRT display and an old dye
diffusion printer, that was an old Kodak printer with three dyes, cyan,
magenta and yellow. You can see there's some colors here that the display
could make that the printer couldn't, and there's some the other way, too. It's not
always one way. And that's an issue in color imaging that people try to
resolve with what's called gamut mapping: when I want to make this blue, I've
made it on my monitor, it's gorgeous, I want to make a print, there's a big change
in chroma in this case that I have to go through in order to print and how do I do
that? Well, I'm not going to talk about that stuff today. Michael can talk about
that if you want him to.
I'm going to talk about the perception and how we manipulate that and what
these two percepts are, lightness and chroma.
So lightness may seem somewhat obvious but its definition is the brightness of
an area judged relative to the brightness of a similarly illuminated area that
appears to be white or highly transmitting.
So first of all, it's a perception. Brightness is a perception. Lightness is a
perception that's relative brightness. So brightness is what just changed in this
room. Perfect demo. And lightness would be if I had a reflecting object, say my
shirt, often I have a color checker in my hand, something like my shirt compared
to my pants or the podium. When the lights go down in the room, the brightness
of both of these objects goes down, but the lightness stays about the same
because lightness is judged relative to something that looks white in similar
illumination. So lightness is our perception about the object.
The chairs are kind of dark gray. That's going to be true in this room in the
sunlight, in the dark, we're going to always eliminate that overall brightness and
judge the object, which tends to be what we do most often is judge object color
and eliminate the effects of the illumination.
So that's lightness, our perception of relative brightness, light and dark. And
chroma is our perception of relative colorfulness, how different the appearance of
the object is from neutral or gray. So my little scale there is going from gray to
red, getting higher and higher in chroma. So again it's one of these relative
perceptions.
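As a concrete way to pin those definitions down -- this is just an illustration, not something from the slides -- CIELAB formalizes exactly this relative judgment: lightness depends on the stimulus luminance Y only through its ratio to the luminance Y_n of whatever appears white, and chroma is distance from the neutral axis.

```latex
% CIELAB's formalization of "brightness judged relative to white"
% (the cube-root form, valid for Y/Y_n above about 0.008856):
L^* = 116 \left( \frac{Y}{Y_n} \right)^{1/3} - 16,
\qquad
C^*_{ab} = \sqrt{(a^*)^2 + (b^*)^2}
```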
And what I really want to point out here, without getting into all the definitions, is
the phrase "relative to an area that appears white" in both those definitions, which are written
by really clever color scientists and are standardized -- there's a dictionary of
color and lighting terms you can go to. White is important. What is perceived as
white is very important. And I say so right there. Importance of white.
So both lightness and chroma are relative to an area that appears white. If you
keep the stimulus constant but you change what appears white you can change
lightness and chroma, right, because you have the physical stimulus, you have
the stimulus that appears white and lightness and chroma are a judgment
comparing the two.
Well, if I keep the stimulus constant and change what appears white I'm going to
change lightness and chroma. There's two ways to change lightness and
chroma. One is to change the stimulus, make it brighter or more saturated, one
is to change the white.
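A minimal numerical sketch of that point, with CIELAB lightness standing in for the perception and made-up luminances: the patch never changes physically, only the luminance of what counts as white does.

```python
# A toy illustration (not from the talk): the same physical patch gets a
# different lightness when the adopted white changes.
def lightness(Y, Y_white):
    """CIELAB L* for a luminance Y judged against a white of luminance Y_white."""
    ratio = Y / Y_white
    f = ratio ** (1 / 3) if ratio > 0.008856 else (903.3 * ratio + 16) / 116
    return 116 * f - 16

Y_patch = 50.0                     # cd/m^2, held physically constant throughout
print(lightness(Y_patch, 50.0))    # patch is the brightest thing: L* = 100, looks white
print(lightness(Y_patch, 200.0))   # a brighter surround becomes white: L* ~ 57, gray
print(lightness(Y_patch, 800.0))   # an even brighter white: L* ~ 30, darker gray still
```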
So I'm going to play around with the white a little bit today. And now I'm going to
actually have to have the lights all the way off, and I can push off and see if it
sticks. They're coming. There's a little temporal adaptation. Perfect.
Thank you.
So this is a little spot of light, and if I had a higher dynamic range projector, you
wouldn't be able to see the shadow of my hand. This would be really dark, and
that would be the only stimulus in the room. And assume I didn't have exit lights
and displays and all that.
And if I brought you into this room, let you dark adapt, flipped on this light and it
was the only thing in the room, I think you'd have no trouble saying that that was
pretty much a white patch of light. In fact it looks pretty convincing, even with all
the flare and everything we have here, that that's a pretty good white. It's
reasonable. It will let me convince you of that anyway.
Well, so that's white. What if I change what's perceived as white by adding
something around it. So now I didn't change this middle patch at all but I put a
brighter patch. This is higher brightness than that one. Now I've created
lightness. We now have the perception of lightness because we have two things
to judge. We can do the relative. So this one is now darker, it has a lower
lightness than this one which is now perceived as white. And just to, you know,
kind of prove I'm not cheating, it didn't change physically, and there comes my
white surround. Well, as you might imagine, I'm not going to stop there. So
here's another one. Again, I'm not physically changing anything. That's going to
be consistent here. This one's now white, this one's gray, this one's darker gray.
I've changed what's perceived as white. I've induced darkness or lower lightness
into those two center patches and I could do this all day. Literally if I had a high
dynamic range display I could pretty much do it all day and mess with your state
of adaptation.
I don't have enough dynamic range in this particular display to make that center
look black, but I could if I did have a higher dynamic range display. Back in my
lab, I have two, actually three slide projectors superimposed so I can really turn
one down and you know it's all analog and have huge dynamic range and I can
make white look black. But I can get pretty close here.
So again, that patch, it's just too good, I have to go back. Whoops. I got caught
in a loop here. I'll keep going forward. So there's another one. And one more.
And that's it. I'll stop there. But it's a large range: when you're pretty much dark
adapted this looks white, and as I increase what looks white those are induced to look
gray.
So that's changing the perception of lightness. It also works in chroma. So
here's an orange patch. And the definition of brown, I do a whole lecture on
brown, is a dark, which means low lightness, low chroma, orange hue. This is a
patch of light that's an orange hue. To make it brown, I have to lower its
lightness and lower its chroma. Well, the best way to lower lightness and chroma
is to introduce something around it that's perceived as light. And I didn't quite get
to a dark brown here because I don't have as much dynamic range. But that
again is physically the same as that one.
And you can see it gets browner. And again, if I had more range, I could actually
make -- actually if I knew this spot would be so bright, I could have made it
brighter to start with. And if I make a gray, it's somewhere in between.
So again, changing what's perceived as white changes both lightness and
chroma. It's part of the definition. It's not just the definition, it is our perception,
it's the way the visual system works. So when we want to look at color gamuts,
we can have the lights back up for a few minutes now, when we look at color
gamuts we don't need -- we can't just look at the physics of a display and draw a
triangle and say that's the color gamut. Color is a perception. Actually those triangles are
chromaticity gamuts, because they are the gamuts of chromaticity coordinates
that you can make on the display.
For a color gamut you need to know how you're perceiving things, you need to know
what appears white. So you can do good things with that. You can also do bad
things. And maybe some of you have seen these data projectors, particularly
the DLP ones, I love DLP, so don't take that the wrong way, but there's DLP
projectors that have a fourth channel that's white. So you have red, green, blue
and white. Sometimes they have other channels for other reasons but
essentially they have four.
And the reason they add in the white is because the low end DLPs are sequential
displays, so they have a red, a green, a blue on in sequence and compared to
say a cheap LCD projector, which has three LCDs that are superimposed on
each other, it's hard to get the same brightness, because you're only ever
showing a third of the light.
So you put on a fourth channel that's white in order to get more brightness, so
they show, you know, essentially a quarter red, a quarter green, a quarter blue
and a quarter white and then they can put their ANSI lumens on the side of the
box as much higher.
Well, the problem is I made things look darker and less colorful by introducing a
brighter white. So if you add that fourth white channel you're making your display
look darker and less colorful, even though your ANSI lumens, you know, the
luminance of the display is higher. You've actually destroyed the color quality of
the display.
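To make the mechanism concrete with made-up numbers (these are not the measured projector values): the primaries stay physically the same in both modes, but the white they are judged against roughly doubles, so their lightness drops.

```python
# Toy numbers only -- not the measured projector data. Adding a white channel
# raises the white luminance but leaves the primaries where they were, so every
# color's luminance relative to white, and hence its lightness, goes down.
def lightness(Y, Y_white):
    return 116 * (Y / Y_white) ** (1 / 3) - 16   # CIELAB L*, for Y/Y_white > ~0.009

Y_red = 60.0                     # cd/m^2 from the red primary, same in both modes
Y_white_photo = 300.0            # photo mode: white = R + G + B
Y_white_presentation = 600.0     # presentation mode: white = R + G + B + W segment

print(lightness(Y_red, Y_white_photo))          # ~52: red looks moderately light
print(lightness(Y_red, Y_white_presentation))   # ~38: the same red now looks darker
```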
And what's kind of interesting about that is because in general, colorfulness and
brightness increase with more light. That's just perception, you know, when this
room is illuminated, everything in here looks brighter and more colorful because
there's more light. That's a perceptual effect. We're not linear. Otherwise
everything would always look the same. It would be really boring and we
wouldn't need high dynamic range images.
So in general, you think I get a brighter display, it's going to be more colorful. But
these displays are brighter and less colorful because of that fourth channel. So
Rod Heckaman is one of my grad students and we actually did some
psychophysics on this, where we measured these in a color appearance space and
these plots are showing lightness and chroma, something that correlates with
that perception, accounting for what actually looks white in the scene for two
modes in one particular display, photo mode, which works like a normal RGB
projector, red plus green plus blue equals white, there's no fourth white channel,
and presentation mode, which is apparently for when people are using PowerPoint
and have a white background and it looks great and overcomes all this extra light
in the room.
And what you see is here's the lightness chroma gamut. There's more lightness
up here -- well, they're normalized because you adapt to the whiteness. I was
thinking I have a brightness plot. I don't have that in this talk. They're
normalized to 100 here because the maximum always looks white. And here's
the presentation mode: because that's got added white, it made all the colors
darker. So here's the primary -- I think this is a red slice down here -- whereas in the
photo mode, where this is actually physically darker but looks the same whiteness,
you have all this extra color gamut.
And this is a red-green, yellow-blue plot, so this is kind of a chromaticity gamut, and
the photo mode is much larger than the presentation mode. And we actually did
visual experiments on that, and what we measure in the appearance model is
what people see. And I have a plot I'll show in a minute.
So this is just a little pet peeve first. This is what you see as gamuts of displays:
these little triangles in chromaticity diagrams. I call them chromaticity
gamuts because they don't tell you about color, which may shock a lot of you to
hear from me, they tell you where the primaries lie and additive mixtures of
primaries fall within the triangle because additive mixtures on this particular
diagram lie on a straight line, so if this red mixes -- sorry, this red mixes with this
blue it's on a straight line, that green you can make everything in the triangle.
You don't know anything about whether it's light or dark because there's no
luminance information here. You don't know anything about what it looks like
because you don't know what your visual system is adapted to. You need all that
extra information to get color.
So there's a lot of sort of specsmanship with these things and you might see a
display that has a huge triangle and you put it next to a little display with a little
triangle that's brighter, the one with the little triangle and more luminance is going
to look more colorful. And we've done experiments on that as well.
So these things are bad -- if you want to talk about appearance. I'll even
put an X through it, that's how bad it is. Sometimes we tear them up. The
printed ones are really bad because people think that the colors printed actually
match the coordinates and if you ever tried to print something accurately you
know that's not easy.
So for appearance gamuts we typically use a model called CIECAM02; that's a
standard model, which stands for the CIE color appearance model 2002. The
dates are in these models because people recognize when we might come up
with something better.
So in that model you can look at lightness, chroma, and hue as I've been
describing, and you can look at brightness, colorfulness, and hue, which keeps sort of the
absolute perception in there. You can do all that stuff. And it worked well in that
experiment that I mentioned.
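For anyone who wants to try this, one readily available implementation is the open-source colour-science Python package (my suggestion, not the lab's code; the exact function and argument names may differ between package versions). The key point is that the adopted white is an explicit input, so changing it changes the predicted lightness and chroma.

```python
# Sketch using the colour-science package (an assumption on my part; not the
# lab's own code, and names can vary across versions).
import colour

XYZ   = [19.01, 20.00, 21.78]    # stimulus tristimulus values, 0-100 scale
XYZ_w = [95.05, 100.00, 108.88]  # the adopted white; change this and J and C change
L_A   = 318.31                   # adapting field luminance, cd/m^2
Y_b   = 20.0                     # relative background luminance

spec = colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b)
print(spec.J, spec.C, spec.h)    # lightness, chroma, and hue angle correlates
```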
So here is perceived, we did the psychophysics, we actually filled a lecture hall
with people, had them scale brightness and colorfulness on little answer sheets
as we showed them displays. It was the most efficient experiment we had ever
done because we had a whole room of observers at once. And we have the ratio
plotted here, so this is a big summary of the data.
So the ratio of photographic to presentation mode, and this is colorfulness, we
did it on brightness and other dimensions as well. So generally the ratio's above
one, because photographic mode looks better, more colorful. It depends on
content. We had a portrait of a woman on a white background. She was all
down in that low part anyway, so it really didn't change much. So --
>>: Can you define photographic --
>> Mark Fairchild: That's the mode of the projector. The photographic mode
was R plus G plus B equals white. So it worked like a traditional additive display.
The presentation mode had the fourth white channel. So that was just a setting
on the projector.
>>: The name of the setting --
>> Mark Fairchild: Yeah, exactly, exactly. There's no technical name. The reason that
photo mode is there is because everybody would look at their photos with the four channels
and say they're horrible, and you would go in and see there's a photo mode, and now
they look better. Some of them now have an sRGB mode, where they still put in
a little bit of white but a lot less, so they tone them down, at least the one that we
have, I can't vouch for everybody.
So this is the range of visual data of the scaling. So the woman image was right
around one because that looked pretty much the same in both. And then the
dots are the predicted colorfulness ratio from CIECAM02, and the dots all fall
within the error bars of the visual data. And we didn't have to do anything else, we just
calculated it. So that's a neat result.
All right. We are cutting off the top here, but that's all right. We won't miss
anything important.
So if you can make the gamuts look smaller, can you make them look bigger?
And of course yeah, why not? So if you make the white brighter and leave the
primaries alone, the perceived gamut goes down. If you make the bright -- I'm
sorry, if you make the white dimmer, leave the primaries alone, the perceived
gamut actually gets bigger. And again, there's some papers we've published,
this has gotten a lot of attention from the display business, as you might imagine --
you don't have to do anything and you can make your gamuts look bigger. They
like that. So that's one of the reasons they had me out at SID to talk, and it's
been published in CR&A and in the SID journal.
So what we did first computationally is change what we call the diffuse white
point. If you're really into digital video or anything like that, let's say you have a
zero to 255 range, 255 is not used to encode white. I forget the number that's
used, 230 -- somebody may --
>>: 235.
>> Mark Fairchild: 235? Okay. Thanks. So 235 is usually called diffuse white,
like a piece of paper. And you leave some values above for highlights and light
sources and things. So that's a well known process in imaging; Photo CD did that
back when Kodak did that. Film did that too. With transparencies, if the film was properly
exposed, diffuse white wasn't put at the minimum density of the
transparency; they left some headroom for more dynamic range.
So that's what we're playing with is really where that's set. So if we push that
point down relative to the primary maximum, in other words don't change the
primaries of display, but reserve some white, what is the effect on the
appearance gamut? And that's what that paper's all about.
And here's some quick results. This is again in CKMO 2, lightness chroma
gamuts, and you're going to have to kind of trust me on the explanation, but
essentially the red contour -- so this is lightness and then that's a red-green
dimension, this is red-green versus yellow-blue. The red contour is essentially
the spectrum locus on a chromaticity diagram. It's more than that because it has
a lightness dimension, it's the MacAdam limits, if you're familiar with those.
So it's the range of all physically realizable colors. It's what most, you know,
display engineers would say that's the ultimate, right, if you can make all those
colors, you've got it made, you're making everything physically realizable.
I think on the slide, yeah, so our title which is, you know, it's -- titles are always
important, expanding display color gamut beyond the spectrum locus, that's
catchy, right, because a display guy reads that and says what kind of light source
did they invent? You can't do that. There's a physical limitation there.
But perceptually, you can. So here's that boundary in appearance space and
you're going to have to forgive our kind of strange nomenclature here. Imagine
our display's linear, and 8 bits means that white's at 255, and all the colors are
below that, diffuse white's at 255, there's nothing above. 9 bits means we have a
range of zero to 512, we've got an extra bit but we leave white at 255. So we've
doubled the range of the display but we don't use it for white, we just use it for
the primaries.
10 bits means doubling that again. So essentially at 10 bits white is at a quarter of the
capability of our display, yet the primaries can go the full way. So we're really
holding back on white, so it's a big change. But if we go to 11 bits, so white is not at a
quarter but an eighth of the maximum, we can actually make perceptions that are outside the
spectrum locus. And it is possible to have those perceptions.
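Put as arithmetic (this is just restating the nomenclature above, nothing new): diffuse white stays pinned at code 255 while the primaries get the full n-bit range, so

```latex
\frac{\text{primary maximum}}{\text{diffuse white}}
\;\approx\; \frac{2^{n} - 1}{255} \;\approx\; 2^{\,n-8},
\qquad
\text{e.g. } n = 11 \;\Rightarrow\; 8\times \text{ headroom (white at one eighth of the display maximum).}
```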
So you know, perceptions aren't limited by physics. Keep that in mind.
Sometimes there are perceptions that are predicted that aren't possible, too. A
really neat way to do it, I learned this from a student at Cornell when I was doing
a sabbatical there, she liked to lay out in the grass and lay on her back, close her
eyes and look at the sun with her eyes closed. So the sun's going through her
eyelids, which is all the blood vessels, you're exposed to this incredibly red
stimulus, and she would do it for like 20 or 30 minutes, and you're adapting to
red, so you're becoming very desensitized to red, and then she would quickly
open her eyes and look at the green grass and she would say it's the most
amazing green you'd ever seen. I guess she didn't need any drugs, right, you'd
just lay there for a while.
And if you do that, I'm going to show you some demos like that that are very
quick, it's really amazing and that's kind of perceiving things outside the spectrum
locus.
So 11 bits you can do it. You can do it. Now, I'm not going to take this projector
and do that because I'd have -- I'd be using -- I can't do the calculation in my
head, you know, 25 digital counts or whatever for my full range of the image and
leaving everything else alone. It wouldn't be a very good image, but if I had a
high dynamic range display, I could do it, with more bits and more range. That's
exactly what we want to do with high dynamic range display.
So I'm going to need the lights all the way off again. I'll try pushing here.
Because this demo won't work if there's any light. So here's the same gamuts
in brightness and colorfulness, essentially the same result. We assumed our
display had a hundred to one dynamic range, which is pretty realistic. So that's
why it doesn't go down to a perfect zero whereas the MacAdam limits do go to a
perfect zero. Same answer in brightness and colorfulness.
Now, you're going to have to forgive me for short cutting my demo. I have
another talk on this just on this that I take 20 minutes of the talk to adapt people
to the dark gradually so they don't know what's going on. I'm going to do it to you
much more quickly. Like in about three seconds.
So summarizing that result, 11 bits in our linear encoding. I had somebody send
me an email, why on earth would you encode video linearly. I was like no, no,
no, that's not it, we just use that as a shorthand of how many times you have to
multiply the white or divide it. If you have 11 bits linear, with 8 bits of range up to
the diffuse white and three bits of headroom above it, you can exceed the spectrum locus.
And then if you want to do nonlinear encoding you can save bits of course. So if
you have your diffuse white at 100 candelas per square meter, which is a good
display historically, it's kind of dim these days, your max would have to be 800.
8 times that.
So let me show some images. And these are images from my high dynamic
range survey that Matt mentioned. So I'll talk about that a little more in a couple
minutes.
This is a sunrise at Bar Harbor. I managed somehow to wake up out of my stupor
before the sun came up and saw that the clouds were pink, and we had rented a
house right in the village there, so I didn't have to stumble too far to get the
sunrise. And it was actually pretty cool with the ships there and there's a nice old
inn here.
And that's a somewhat visual rendering that I did manually. So that's -- I'd say
that's pretty much what it looked like. As good as displays are. Here's another
image from the survey. This is in Oklahoma after driving through the most
harrowing thunderstorm I've ever seen, and I like thunderstorms. This was last
spring when there was a bunch of tornados in Kansas killing people. Well,
there was a front in Oklahoma sitting there for days, these thunderstorms, and I was
driving east and there was no waiting it out, because it wasn't moving and I had
to drive faster than it was moving. So it was like driving through a hurricane.
I got through the one totally black storm with hail, and you know, barely could see
anything, I'm going like 20 miles an hour, the trucks don't slow down, they're still
blasting by you, and then it cleared up. I'm like, oh, I'm done. But that was like
the eye of the storm, and I had to do it all again.
And then I wanted to stop at the Route 66 museum, so the storm's behind me
over there and everything's wet and I didn't want to stay very long, but I shot an
image with the neon lights which is kind of cool. And this is hard to do. It was
really dark. So this was kind of like late twilight, the neon glowing. It didn't
render too well, but in a good rendering of this you can actually see the neon
tubes and everything, which is a hard thing to photograph.
And those are just images, shown to you normally, just rendered as
8 bit images. Now I'm going to show you what would happen if I use only a
quarter of the dynamic range of the display -- I'm not going all the way up for
white -- and let the primaries go bigger.
So there's the normal image, and here's our exceeding-the-spectrum-locus version. Well, not
quite, but approaching that image. So what I did to you is, over the last few slides I've
been dimming the display so that the white up there, and I purposefully left the
white, is only using a quarter of the digital counts. So there's some nonlinearities. Forgive me for that.
And then I picked a point, I think I picked the white of the ship here to be my
diffuse white. And I used that as the maximum rather than the full image and let
the primaries go. So you get a much more vivid experience. And much more
realistic. And I didn't do anything to the display. I didn't do any manipulation other
than reserving what I used for white. And here's my Route 66 museum. And
now look at those neon lights. They look kind of like neon lights now, don't they?
And they're on the spectrum locus physically. That neon red right out there. So
to me this is a real striking demonstration. This is a lot more what it looked like,
you know. Imagine really dark, there's some rain on the thing and that light's
glowing. You can see some of the lights, the street lights in the back were still
on. That's how dark it was after the storm. Even though it was about noon.
>>: Mark.
>> Mark Fairchild: Yes, sorry.
>>: It's the very simple minded explanation of what you're doing is taking the
color cube and squishing the sort of white point down so that there's --
>> Mark Fairchild: I'm squishing the white point down and leaving the red,
green, and blue points where they are.
>>: You're taking the things that are less chroma and lowering their lightness?
>> Mark Fairchild: Yeah, yeah. I'm taking everything and lowering its lightness
except things that would have a signal above that 255. Remember, this is the
high dynamic range image to start with. You can't just take a low dynamic range
image and do that. You have to have the information to start with.
>>: Right.
>> Mark Fairchild: So that's the key. And I have one more example. This was
up in the Adirondacks in autumn, and the sun was actually coming through this
tree towards my camera, which looks pretty nice there, but those leaves were
really glowing and that's what happens if you fill the gamut up.
So all I did was push the display's white point down. And you can imagine, when I did
this talk as a full talk on this topic, every slide was just a teeny bit darker, and
nobody noticed. And then these popped up and everybody was like whoa. So it
can be done.
And you'll see how bright it gets on the next slide. So there's the transition I did on
the white point. Look at the text up there. Whoops. To that. Now, we're back
up. We can have the lights back up now.
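A minimal sketch of the rendering trick shown in those slides, as I've described it (a simplification, not the actual demo code): choose a diffuse white in the HDR scene data, map it to only a fraction of the display range, and let highlights and saturated primaries use the headroom above it.

```python
import numpy as np

# Simplified sketch of the white-point reservation demo; not the actual demo code.
def render_with_reserved_white(hdr, diffuse_white, white_fraction=0.25, code_max=255):
    """hdr: scene-linear RGB array; diffuse_white: the scene value chosen as 'white';
    white_fraction: where diffuse white lands relative to the display maximum."""
    codes = hdr / diffuse_white * (white_fraction * code_max)
    return np.clip(codes, 0, code_max).astype(np.uint8)

hdr = np.random.rand(4, 4, 3) * 8.0            # stand-in HDR image; diffuse white ~ 1.0
ldr = render_with_reserved_white(hdr, diffuse_white=1.0)
# Diffuse white renders at about code 64 of 255; neon tubes, specular highlights,
# and pure primaries can go up to four times "white" before clipping.
```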
So the practicality of that is you need a high dynamic range display, you need
more bits, you need more dynamic range to play with in order to really implement
that. But they are becoming available. At the SID meeting pretty much everybody
had an LCD with dimmable backlights, locally dimming backlights. So for different
regions behind the LCD panel, if it's a dark region they don't turn the backlight on
as much; if it's a bright region, they turn it on more.
Dolby, which was formerly Brightside, which was formerly Sunnybrook up in
Vancouver, will have products coming out. That's been announced, for, you
know, high end, for digital cinema proofing applications. There's a TV coming
out. One of those was at SID if you're into like $30,000 televisions. Beautiful
wood cabinets, gorgeous TV. And it has these LED backlights that are locally
dimming. Samsung has a product out that does a little bit of it already.
And there's also globally dimming TVs, where the backlight goes down for dark
scenes and up for bright scenes and it saves a tremendous amount of energy.
That was a big thing at SID this year is how much energy you can save by
dimming the backlights. And you have to encode the images for that as well.
You can't just take the image and throw it up there, you have to say what's
diffuse white.
So we build these things, too. Kind of like Sunnybrook did in the original
prototypes, this is our version zero. We're quickly working on version 1 back
home right now this summer. But essentially what we do is take a DLP projector,
project it on to the back of an LCD display. And you'll notice these are not high
end components. These are 15 inch Apple displays that we can get on eBay for
a hundred bucks or something like that, because my students really like to break
them.
So you know, I don't know how many we went through to get that, but we peel
away the back, take out the backlight, project this on, put a Fresnel lens
and then a diffuser in there, do a little bit of optical tricks, and essentially we're
modulating the image twice, once for the projector, once for the LCD filter. We
get huge dynamic ranges.
At one point he had it up to 350,000 to 1. And 3500 candelas for the luminance.
Very bright, high dynamic range. Right now we're building one with a 30 inch
Apple cinema display as the front end and 6 DLP projectors as the back end,
each projecting one sixth of it. We were going to buy a big huge cinema
projector for like $50,000, and we realized well, that luminance is equivalent to 6
of these little ones we can get for $1,000 each. And saner minds prevailed and
we're going with the 6.
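The driving idea behind these stacked-modulator builds can be sketched very simply (this is a textbook-style split, not the lab's actual driver software): the displayed luminance is roughly the backlight image times the LCD transmittance, so a target image gets factored into a blurry low-resolution layer for the projector or LEDs and a compensating layer for the panel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Textbook-style dual-modulation split -- a sketch, not the lab's driver code.
def split_dual_modulation(target, blur_sigma=8.0):
    """target: desired linear luminance, normalized 0..1."""
    backlight = np.sqrt(target)                          # low-frequency layer
    backlight = gaussian_filter(backlight, blur_sigma)   # projector / LED point spread
    backlight = np.clip(backlight, 1e-3, 1.0)
    lcd = np.clip(target / backlight, 0.0, 1.0)          # front panel compensates
    return backlight, lcd

target = np.clip(np.random.rand(256, 256) ** 4, 0, 1)    # toy HDR-ish content
backlight, lcd = split_dual_modulation(target)
reconstructed = backlight * lcd                          # ~ target where achievable
```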
So we've studied perception. This is actually just quickly a picture of the HDR
display here. This is not the greatest photography. We were doing an open
house at RIT so I snapped this really quick. This is the HDR display with a
normal 15 inch LCD next to it. And it was a little auto exposure.
This is black on the LCD and the exposure is high because the room was dark.
So this is white and black, and this is white and black on the HDR display. So
you get an idea that you know, this would have looked black if you were sitting
there, it would have looked like a bad black like you see on, you know, however
many, five, six, or seven year-old LCDs, but it's a huge difference.
And we've been studying perception. We build scenes, render them, have
people choose which they like. I'm going to run out of time, so I'll go through this
a little quickly. Here's some of the scenes. These are all built in the lab. So I
was actually going over this one with Matt before the talk, and we did a good job
because he thought this was actually a window in our lab, and apparently when we
zoomed in, we think it's Sydney, Australia. I wish that was where the lab was
sitting, but if you could look out the wall of our lab, you'd see a parking lot and a brick wall.
So not quite as exciting.
So we built these different scenes. This is one I calibrated my camera with,
and then we're looking at different algorithms. I don't have time to go through the
various algorithms that are used for rendering these. iCAM is one that we
developed. iCAM06 is the one the student doing this work developed, a revised
version.
This is Photoshop global and Photoshop local, kind of manually tuned to
make good images. And this set of data is visual accuracy. So they saw the
scene, they adapted to it, kind of remembered it, go to the display, adapt for say
a minute or two and say which looks more like what you just saw a couple
minutes ago? So it's accuracy, not preference.
And he rendered these for accuracy. So he's saying, look, mine works better
than yours. But, you know, maybe it did. So accuracy is one thing. Preference
is another. If you ask people which is the prettier image, you might not get the
same answer. We've done those experiments as well.
And one of the things we did with this that we published last year was comparing
the real scenes with the display. Can we use the display to simulate the real
scenes, and we get similar scaling results. So that's a -- and other people have
done this sort of experiment, which is good because then we don't have to build
all these scenes in the lab, we can actually simulate the world on the display.
That's why we're building a bigger one. 15 inches isn't a very good simulation of
the world. 30 inches is better. We hope to have 40 with three of them at some
point to kind of surround you in high dynamic range.
So this is the way we do the tone mapping right now, iCAM06; it's all out on our web
page, if you want to download it and play with it. But it's a combination of what
we had done earlier with iCAM, the image color appearance model, where we
have some local adaptation and spatial processing, with Durand and
Dorsey's bilateral model. So we added a bilateral filtering step essentially to
iCAM and a couple other little things.
So essentially the image gets separated into a base layer and a detail layer and
then we have a blurry version of the image that we call our white, so in the
context of this talk that's what you're adapting to, you adapt locally to that white
image.
That's applied to the base layer which gives you local adaptation which really
compresses the range, then add the detail layer back in, and there's a little bit,
this is the part I don't like, this is a little bit ad hoc, adjusting the color.
There is some theory around that, the surround, there's definitely theory that the
surround of a display changes its perceived contrast and then we invert it and we
get back to an image that we can look at on a given display.
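A heavily simplified sketch of that pipeline, just to show the structure (this is not the published iCAM06 code, which is on the lab's web page; a Gaussian blur stands in for the bilateral filter to keep it short):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Heavily simplified pipeline sketch; not the published iCAM06 implementation.
def tone_map_sketch(luminance, sigma_base=16.0, sigma_white=32.0, detail_gain=1.0):
    log_l = np.log10(np.maximum(luminance, 1e-6))
    base = gaussian_filter(log_l, sigma_base)          # base layer (large-scale range)
    detail = log_l - base                              # detail layer (local contrast)
    white = gaussian_filter(log_l, sigma_white)        # blurry "local white" you adapt to
    adapted = base - white                             # local adaptation compresses range
    out_log = 0.5 * adapted + detail_gain * detail     # compress base, keep the detail
    out = 10.0 ** out_log
    return out / out.max()                             # normalize for an LDR display

hdr_luminance = np.random.rand(128, 128) * 1e4 + 1.0   # stand-in HDR luminance map
ldr = tone_map_sketch(hdr_luminance)
```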
So there's a couple of examples. If I could have the lights down a little bit,
actually for the next several slides. Again, these are some of my images. This is
the Bar Harbor one again. These are for average, dim, and dark surrounds.
So that's showing the surround effect. In a dark surround you have to enhance
the contrast because the darkness perceptually decreases contrast. When
you're in a dark room, dark things look lighter. So that's how movies are done:
they are rendered at higher gamma, higher contrast, to offset the fact that you're
sitting in the dark.
And this is a little shot at the Hancock Shaker Village in Massachusetts. There's
a barn out there that you can see through the window. There are no lights, or very few
lights, inside a Shaker building, a traditional one. So there is one 60 watt bulb kind
of dangling from the ceiling behind me. So this is essentially in the dark. And
you can get an idea of the rendering.
So let me talk about the survey for a couple minutes. These are my maps, my
map of my journeys. I think I mentioned them. So I live here, in case you don't
know where Rochester is. It's much more like the Midwest than it is like the city
that's over here. So it's a wonderful place to live. And the main trip of course
was taking off through the north. There are a few highlights I wanted to hit.
There's a statue of Paul Bunyan in Minnesota that I just had to have an image of,
for example, so you can kind of pick out some key places. Yosemite's on there.
I have a good friend who lives in LA. I had to go golfing with him.
If you're into golf, Bandon Dunes is right here. I had to go there. So some key
places. And essentially what I did for most of the scenes is I captured nine
exposures, each separated by one stop. So that title says 9 stop mosaic. So
this is Golden Gate Bridge about a half hour after sunset. And essentially
underexposed by four stops, what would be a typical exposure if you just sat
there and took one and overexposed by four stops. Now you can see the flowers
in the foreground and everything.
And those are then combined into one linear file. That says linear OpenEXR on
top. There's some quantization because it's really an 8 bit JPEG now, but that
would be sort of a linear rendering, a single exposure, if you will, where I tried to
preserve the light sources to a degree. I didn't blow out the highlight, so I lost the
shadows.
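The merge into that single linear file can be sketched like this (my own minimal version, not the survey's actual processing; camera response linearization and OpenEXR writing are omitted): each exposure is divided by its relative exposure time and averaged with a weight that distrusts clipped and very dark pixels.

```python
import numpy as np

# Minimal bracketed-exposure merge sketch; not the survey's actual workflow.
def merge_exposures(images, exposure_times):
    """images: list of linear float arrays scaled 0..1; exposure_times: seconds."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)    # trust mid-tones, not the clipped ends
        num += w * img / t                   # radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-6)

scene = np.random.rand(64, 64) * 100.0                       # stand-in scene radiance
times = [0.25 * 2 ** i for i in range(9)]                    # nine stops, 1/4 s to 64 s
stack = [np.clip(scene * t / 2000.0, 0, 1) for t in times]   # simulated captures
radiance = merge_exposures(stack, times)                     # relative scene-linear result
```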
And then if I do local rendering, using one of these visual models where you
adapt differently to different regions, I get an image that looks like this. So I kind
of say it's like turning on the lights. And that's what your visual system is doing.
When you look down here, you adapt to this region, you can see that. And this is
exactly the sort of thing that Matt's working on, which is why I'm standing here
today.
But yet you keep the colorfulness of the bridge, the lights aren't blown away, you
can see the city in the background, you know, the waves and everything are
blurred away because this was nine exposures after dark, so the longest
exposure if I remember right was maybe a minute. You know, this is really after
dark, you know, so it was maybe from like a 30th of a second up to a minute
for the nine exposures, or something like that.
Yes, absolutely.
>>: (Inaudible) change the exposure in your camera. Because a normal camera
has a three stop bracket mode.
>> Mark Fairchild: This is a really nice camera, so a Nikon D2X that has 9 stop
auto bracketing. That's one of the reasons I picked that camera, so I can hold
the button down once and get 9 stops. And that's really rare. That's the only
camera that I know of that has 9 exposures that can be separated by a stop, as far as
production cameras go.
>>: You use a remote shutter --
>> Mark Fairchild: Yeah, yeah, you can do it. I have another -- the little one I
have with me now doesn't even have auto bracketing. It's a new Nikon, and they
took that feature off for some reason. And I change it manually and use the
remote control. Yes?
>>: (Inaudible) it kind of looks like late twilight or early twilight.
>> Mark Fairchild: Well, after sunset, yeah.
>>: I mean, is the sky perceptually how you want it to be?
>> Mark Fairchild: Yeah.
>>: Okay.
>> Mark Fairchild: I mean, it's a half hour after sunset.
>>: Okay.
>> Mark Fairchild: So the sky still was very blue. Yeah. Yeah. I'm sorry. After
sunset is what I should have said. So there's a Website. If you just search for
the HDR survey, you'll find it, or ask me, or ask Matt. There's about 100 of these
images with visual scaling data, colorimetric data, all freely available for research.
That was the whole purpose. There's the thumbnails of all the images so you
can trace my trips. But there's things like neon lights in Las Vegas, Yosemite.
This is the lodge at Yosemite from the inside, so it's an internal/external scene. The
Waffle House. You got to eat.
Some of the lab scenes, we have some self portraits. That's one of my grad
students. There's me with my Luxo lamp. There's Paul Bunyan. So there's a
wide range of images. We tried to make a variety. We tried to make them pretty
so when people do these visual experiments they don't get bored.
I think I have just a couple more. So this is Acadia National Park up in Maine, a
place called Otter Point. And I put the sun right in the corner of the exposure so
that the flare from the sun would be in the other corner and not ruin the whole
image. You know, lens flare's an issue when you start doing this.
Here's the linear OpenEXR. If I don't blow away the sun too much, then the
rest of the image is black. Because the sun's really bright. And here's the
rendered one. And I think that's pretty faithful even on this display that when I
was sitting there on the edge of the cliff I could look at the green stuff and see it.
You know, the sun's beating down on it. It looked fairly colorful. The sky was
blue. It was really bright over there. I could look into these highlights. You can
imagine an ocean scene with the sun out there you can look at those, you can't
look at the sun, it's going to get blown away.
So then of course there's different ways to render them. This is when I did it
interactively in Adobe Photoshop CS2, and this is my student's iCAM06, which I
think is a little too saturated and too colorful, but yet in that accuracy experiment
it came out the best. So you know, I'm one observer.
>>: (Inaudible).
>> Mark Fairchild: We didn't on this one. Not on this experiment, no. Or not on
this image. Right. The algorithm. So this may be bad for this image. That's a
good point.
Yeah, I didn't sit there -- actually we're going back to Maine this summer if we
can get some funding for that. Need to move the lab to Bar Harbor for a week.
So here's again at the Shaker Village, so here's that lamp I was talking about.
This is looking into that kitchen scene. And you know, this one might be more
accurate for this particular image. The bricks are kind of too reddish there. But
there's more contrast here, too. And that comes from some of the bilateral
filtering, it enhances the edges.
>>: (Inaudible) you said that was manual?
>> Mark Fairchild: Yes. Well, I set the white and black point manually, which
you have to do to get a decent image, and I think I did a little tone correction at
the end. So a little gamma correction. So it was there. I can't remember if the
settings were the default, but one set of settings for the local and then you get an
image that's really compressed and you have to set the white and black, and I did
that manually.
So a couple more minutes. I'm going to change gears here. There's other
aspects of color perception besides these sort of overall appearance things.
And one of them is the fact that we all have different color responses in our visual
system. We call that observer metamerism. Here I have it labeled observer
variability.
So this comes from a fairly recent CIE recommendation on how to calculate cone
responses for people of different ages, on average, so an average 20 year old
versus an average 80 year old. As we age our lens becomes yellower and
yellower, so those functions all get pushed over towards the longer wavelengths.
And also for different field sizes. So there's a way to calculate this. This is
actually the model -- I don't really have time to go through it -- but they have the
absorptivity of the three cones, they have the transmittance of the macula, which
is a protective filter, the transmittance of the ocular media, the lens, the cornea, the
goo in between.
And all of that goes in -- those are all functions of age, of field size, of both. They
give you the average cone responses. So here again I guess that's just what I
showed you. That's how you get the functions. But what's interesting in a
practical sense is what effect does this have on displays? So again I did a little
computational experiment. So my original colors were the Macbeth ColorChecker
under D65. This is all computational, so this is just Nikon. So I took the
reflectances, I took the illuminant D65 spectral power distribution, I took one of
those standard observers for a given field size and a given age, and I computed a
match on a display, so I'm thinking of cinema here. So one of the displays had
broadband primaries, so it had a smaller color gamut -- broadband means a wide
range of wavelengths. I'll show you the spectrum in a minute.
The other one was narrow band, so think like a laser display. Red, green, and
blue monochromatic light. Calculate the match. Different field sizes, different
ages. So essentially we're having a bunch of different people match the original
scene to the display. But I'm doing it computationally.
Then I had the CIE 1931 standard observer evaluate that match. So I picked
one observer as kind of the judge and that observer, the normal two degree
observer said how different are these two to me? So now you see I changed
them. They matched before, they don't match now. You didn't catch that? This
one's pinker, this one's greener.
Because that standard observer is a different observer. And say okay for a 20
year old with a one degree field size that's a match but for me, that one's pink,
that one's green. And people are actually seeing this. The digital cinema folks
are looking at different display technologies. They'll do crazy things, like have
half a display with one set of primaries and the other half with another, and then
they'll get the colorimeters out and make them match and then they'll sit there
and argue about, well, it's pink, it's green. Because it is. To each of the
individuals. They're highly metameric matches.
And if you have monochromatic primaries, just three wavelengths, if there's any
difference between the observers at those three wavelengths, all bets are off on the
match.
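The structure of that computational experiment can be sketched as follows (toy stand-ins throughout: the color-matching functions and primary spectra below are made-up Gaussians, not the real CIE 2006 observer data or measured displays): solve a 3-by-3 system so the display mixture matches the target for observer A, then evaluate the same physical light with observer B.

```python
import numpy as np

# Toy sketch of the observer-metamerism experiment; all spectra here are made up.
wl = np.arange(380, 781, 5.0)                       # wavelength sampling, nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

def tristimulus(spd, cmfs):
    return spd @ cmfs                               # cmfs: (n_wavelengths, 3)

def display_match(target_spd, primaries, cmfs):
    """Solve for RGB weights so the display mixture matches target_spd for 'cmfs'."""
    M = primaries.T @ cmfs                          # 3x3 primary tristimulus matrix
    rgb = np.linalg.solve(M.T, tristimulus(target_spd, cmfs))
    return rgb, primaries @ rgb                     # weights and the emitted spectrum

# Two slightly different "observers" and two displays (stand-ins, NOT real data):
cmfs_A = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)], axis=1)
cmfs_B = np.stack([gauss(605, 40), gauss(552, 40), gauss(452, 30)], axis=1)
narrow = np.stack([gauss(630, 2),  gauss(532, 2),  gauss(465, 2)],  axis=1)
broad  = np.stack([gauss(610, 30), gauss(545, 35), gauss(460, 25)], axis=1)
target = gauss(580, 60) + 0.3                       # some broadband target color

for primaries in (narrow, broad):
    _, emitted = display_match(target, primaries, cmfs_A)   # match made for observer A
    mismatch = tristimulus(emitted, cmfs_B) - tristimulus(target, cmfs_B)
    print(np.abs(mismatch).sum())   # observer B's disagreement; it tends to be larger
                                    # for the narrow-band primaries, as in the talk
```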
So here's what the spectra looked like. Pick three wavelengths. Pick three good
ones and then three Gaussians. This is a log scale. So the three Gaussians
added together. This is when both displays are making white. So one of them
has no energy anywhere but these three wavelengths, the other one has energy
everywhere.
And here's these evil chromaticity gamuts where everybody in the display
business would say that laser one has a huge gamut, the other one has a small
gamut. But again, if I change the brightness and so on, all bets are off on that.
Just to give you an idea of where they plot.
And here are the average differences, average across those 24 patches on the
color checker. So this is a 32 year old observer with different field sizes in the
computation, so different observers. And this is CIELAB delta E; if you're familiar,
somewhere around one is just noticeable. That's very much a rule of
thumb. The two degree observer pretty much agrees with the two degree
observer from the new CIE proposal. That makes sense. It's fairly accurate. So
those are very low.
You get out to the 10 degree field size, you got differences on average of eight,
10, so on. Very big differences. For the narrow band display, not so big for the
broadband. The interobserver differences are bigger for a narrow band display.
That's something that people making these displays need to think about. They're
going to look less consistent to people because of pushing those gamuts out.
And they may not be making a bigger gamut anyway.
Here's the same thing for 10 degree observer at different ages. Again, way up
here. Essentially more than a factor of two larger differences for the narrow band
display. I don't know if this will show up, but these are four different observers
matching that spot in the middle. And you shouldn't be concerned so much with
how all of this matches but look at how different the four are. Because there's a
transformation for a different observer anyway. But these kinds of differences
are very realistic for different people looking at the same display.
In terms of a photograph, there's my little girl looking out the window. And you can
see the range there. Yes?
>>: When you talk about field of view, that's like how close you're sitting to the screen,
so --
>> Mark Fairchild: Yeah, well, in these experiments it's how close you're sitting to the
two little color patches that you're matching. So in a real display it's a more
complicated thing. But, yeah, it would be the angular subtense of the display; for
this stuff it's always two little color patches, ranging from one degree, which is
about my thumb, to ten, which is two hands side by side. All right. I'm going to
end with a couple of demos here that are really cool. So you'll forgive me for going
over, I hope. I came a long way, so I'm turning the lights off again. So let's look
at adaptation, because that's where we started, that's where we'll finish.
This is a classic demo that I know some of you have seen, because you've seen
it from me before. But this is due to Bob Hunt, who wrote the book The
Reproduction of Colour, and quite literally wrote the book on color
reproduction.
So this is a nice beautiful bowl of fruit, one of those standard images people use.
And I'm going to take a computational cyan filter. Here my cyan filter. So
imagine I actually had a cyan filter and I was projecting away and I held it up in
front of the screen. I'd get something like this. So kind of a light cyan filter. And
it would have to have cyan filter written on it. But I'm going to take that filter and
I'm going to put it just in front of the banana.
So here's a quiz on subtractive color mixing. Yellow banana. I put a cyan filter in
front of it, there's yellow light there, the cyan removes red, so the cyan filter with
the yellow banana is going to leave you with?
>>: Green.
>> Mark Fairchild: Green. Very good. So cyan filter over the banana and lo
and behold there's a green banana. It's a light cyan filter so it's a light green.
And now instead of just putting that cyan filter over the green banana I'm going to
put it over the entire image. So what color should the banana be?
>>: Yellow.
>> Mark Fairchild: You guys are good. It's green, come on. Haven't you been
listening to anything? It's yellow, yeah. Physically it's the same chromaticity, so
if I flip back and forth, if I got out my colorimeter, that banana is not changing but
it looks yellow, yeah, because your adaptation state has been pushed towards
the cyan. So it takes away that greenness. And it does it very quickly and very
convincingly. So that's chromatic adaptation. And I'm going to show it to you a
different way.
So there's my girls again. It's funny I gave this talk and some of you may know
Joyce Farrell. She was there. And she's like wow, Mark, I hadn't seen your girls
for so long, they're so old, and it's really funny because this picture is five years
old. This one looks like this one now. The other one's, you know, a teenager.
It's just funny. I'm like yeah, Joyce, they've grown up a lot.
But here they are visiting Mickey Mouse. Those two images are the same. What
I'm going to do is adapt to different things and have you look at these images and
then I will quit, I promise. So now I'm going to do chromatic adaptation just like I
did before, only with a different stimulus. What I want you to do for these demos is --
you don't need to fixate on a single point, but you need to keep your gaze in these
little yellow ellipses.
So I want you to look at the yellow; you can fixate on it if you want, but you don't need
to. You just need to stay in that region. And what's happening is, excuse
me, on the right side of your visual field you're adapting to the yellow illumination
or the yellow image. So you're becoming less sensitive to yellow. On the left
side you're adapting to the blue, you're becoming less sensitive to blue.
Adapting is just like getting tired of something, you know.
In Rochester, winter comes it's really cold but by spring the first day when it's
above 20 degrees, all the kids are out in their bikinis playing frisbee, because
they've adapted to the cold.
So now I'm going to go back to those two images that are the same, and I want
you to notice what color you see. So you see this one is very blueish and that
one is yellowish. That's an afterimage. Afterimages are adaptation because
you've adapted the retina.
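A common way to model that adaptation is a von Kries-style normalization: each channel gets divided by its response to whatever you have been adapting to. A toy sketch, with invented adapting colors:

    import numpy as np

    def von_kries(signal, adapting_white):
        # Scale each channel by the inverse of the adapting field's response.
        return signal / adapting_white

    pixel = np.array([0.7, 0.7, 0.7])          # same physical stimulus on both sides
    adapt_yellow = np.array([0.9, 0.9, 0.5])   # right side: yellowish adapting field
    adapt_blue   = np.array([0.5, 0.6, 0.9])   # left side: bluish adapting field

    print(von_kries(pixel, adapt_yellow))  # blue comes out relatively stronger -> looks bluish
    print(von_kries(pixel, adapt_blue))    # red/green come out stronger -> looks yellowish

The same physical pixel divides out differently under the two adapting fields, which is the afterimage in miniature.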
So that's cool. That's what I do all the time. Yes, Steve?
>>: This is a slight (inaudible) I see that color difference when I'm looking in the
middle.
>> Mark Fairchild: Yes.
>>: If I'm aiming my gaze elsewhere I don't really see the difference.
>> Mark Fairchild: Right. Right.
>>: Really is.
>> Mark Fairchild: Oh, yeah. Yeah. Because it moves with your retina. And that's what I -- in a normal lecture I would have some lights on the whiteboard and I'd say look over here, and you'd see the blue and the yellow regions, and that shows you that it moves with your retina, that it's the cones in your retina where that's happening. Some of the mechanisms are higher up, but this particular one happens there.
So that's really what I've been talking about the whole time, color adaptation. But
a lot of this spatial vision image quality depends on other dimensions, so other
things adapt. And there's this guy at University of Nevada Reno, Michael
Webster, who has done some really cool adaptation work and one of the things
he's done is on blur, and I saw him demonstrate this a couple years ago and
actually he did a really neat one. It was during the elections or just after the
elections between Bush and Kerry. So he had an image where he had morphed
George W. Bush and John Kerry together and he had this really weird image of
the two of them in between. So imagine morphing the two and there's this in
between image and he showed that to everybody and do you recognize this guy
and everybody's like it looks kind of familiar but no. Cause there were features of
Bush and features of Kerry. And then what he did, just like what I did with color
is he showed you a picture of George Bush and you fixated on that for 15, 20
seconds, then he shows you the morphed image and it looks just like John Kerry.
You adapt away all the Bush features and you see Kerry. It's really amazing. I
think it's on his website. It's really striking.
And then you do the opposite, you adapt to Kerry, you see Bush in the morphed
image. So you adapt to very high level features.
And then he showed blur which is really cool, and I'll show you that one. Same
picture. I'm going to blur one and sharpen the other. Nothing magic here. I want
you again to fixate, not fixate but keep your gaze in the little yellow ellipse, and
the same thing is going on here. This is what's known as spatial frequency
adaptation. It's exactly analogous to chromatic adaptation, but now the wavelengths are spatial frequencies instead of light wavelengths.
And on the right side there's no high frequency information, so you're becoming
more sensitive to high frequencies or less sensitive to low frequencies. On the
left side there's plenty of high frequencies, and there is actually less low
frequencies relatively. So you're becoming less sensitive to the high frequencies
there.
This is very well known; it goes back many years as far as, you know, looking at sine waves and things. People don't look at it in images so much. But here I'll
go back to the original images which were intermediate between these two and
see what you notice. So you see the one on the right looks sharper. And this one happens at a higher level, so it doesn't move with your retina like the chromatic afterimage quite so much. So you adapt to spatial frequency, too.
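Making that adapting pair is nothing exotic. A rough sketch of the kind of processing involved, assuming an RGB image file and arbitrary parameters (the filename and amounts are only for illustration):

    import numpy as np
    from scipy import ndimage
    from imageio.v2 import imread, imwrite

    img = imread('girls.png').astype(float)  # hypothetical RGB input image

    # One adaptor: kill the high spatial frequencies with a Gaussian blur.
    blurred = ndimage.gaussian_filter(img, sigma=(3, 3, 0))

    # Other adaptor: unsharp mask, i.e. the original plus a scaled high-frequency residual.
    low = ndimage.gaussian_filter(img, sigma=(3, 3, 0))
    sharpened = np.clip(img + 1.0 * (img - low), 0, 255)

    imwrite('adapt_blur.png', blurred.astype(np.uint8))
    imwrite('adapt_sharp.png', sharpened.astype(np.uint8))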
So when Michael showed this in his talk, this was at a symposium at the
University of Rochester, I go up to him after, like, so, Michael -- we were studying noise perception in images at that time, you know, looking -- noise is just color difference that's spatially distributed across the image. So it's an
interesting color difference. And I'm like does that happen in noise, too? And
he's like I have no idea. Never looked at it.
So my talk was the next day, so I ran home, popped open Photoshop and said
let's put noise in the images. So here's my girls again with some noise added.
And you may guess where this is going. Again, keep your gaze in the little yellow
ellipse. On the right there's lots of noise; it's white noise, uniform distribution in digital counts. And then a clean image. I started with one that was
somewhere in the middle. So actually that's the original. So now I'm trying to get
you to adapt to noise. But this is different from spatial frequency adaptation because the noise added has all the spatial frequencies, within reason. I mean, obviously there's a limit here, but it has all the spatial frequencies. So if it were just spatial frequency adaptation you'd expect the contrast to go down but nothing else.
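The noise itself is exactly what it sounds like: uniform noise in digital counts, independent at every pixel. A minimal sketch, with an arbitrary amplitude:

    import numpy as np

    def add_uniform_noise(img_uint8, amplitude=40):
        # White noise: uniform in digital counts, independent at every pixel.
        noise = np.random.uniform(-amplitude, amplitude, size=img_uint8.shape)
        return np.clip(img_uint8.astype(float) + noise, 0, 255).astype(np.uint8)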
And my question was does the noise actually go away? And now I'm going to go
to the original pair which is intermediate levels of noise. Do you see that? It
goes away really -- it's a short-lived effect. It's not as strong as some of these
other ones. But it looked less noisy over here. Where you adapt to the noise
you see less of it. And we actually did some psychophysics on this. We could
measure the effect, and we could model it with ICAM, which is our image model, by having the spatial contrast sensitivity functions adapt to what they saw previously. So it's
really a measurable scientific thing. I can send you details if you're interested.
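The modeling details aren't spelled out here, but as a loose illustration of the idea -- not the actual iCAM machinery -- you can imagine the gain on each spatial-frequency band dropping as the previously viewed image puts more energy into that band:

    import numpy as np

    def adapted_band_gains(adapting_band_energy, baseline_gains, k=1.0):
        # Toy adaptation rule: gain_i = baseline_i / (1 + k * energy_i).
        # adapting_band_energy: per-band energy of whatever was viewed previously.
        # baseline_gains: unadapted contrast sensitivity samples per band.
        return np.asarray(baseline_gains) / (1.0 + k * np.asarray(adapting_band_energy))

    baseline = np.array([1.0, 1.2, 0.8, 0.4])       # made-up CSF samples, low to high frequency
    noisy_adaptor = np.array([0.2, 0.3, 0.6, 0.9])  # noise pushes energy into the high bands
    print(adapted_band_gains(noisy_adaptor, baseline))  # high-frequency gains drop -> noise looks weaker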
But it's really cool for imaging scientists or imaging engineers for building
systems. Because imagine you're looking at a newspaper and the images in the
newspaper are halftones with a certain frequency and a certain orientation. If they're black and white they're 45-degree dots at 100 dots per inch or whatever they do. And every image you see has that same spatial pattern. So you get
tired of it. You don't see it so much. If you're looking at a TV with a certain pixel
pattern or in the old days scan lines you don't see those as much because
they're always there. So Michael had a paper called -- I forget exactly what it was called -- something like enhancing the novel stimulus, essentially that your visual system is tuned to get rid of the background and see what's new. So you think about
watching a TV. You don't want to see the artifacts. They're always there. You
want to see, you know, the actresses or whatever.
So your visual system is doing that, and it lets you get away with a lot when you're building these systems. That's why people don't see some of these artifacts until you teach them about them, and then it drives them crazy. I used to teach my students about interlace. And, oh, they hated me after that. It's like, go home
and watch TV now.
So I'm going to wrap up here and show you that even Yoda cares about color. So the impact of all this is that color appearance is important -- not just chromaticities and contrast, not just those physical things, but how things actually appear, what you actually see, is critically important. And how you use that is important, and so is the impact of the viewing conditions. That's what all of this is about. CIECAM02 -- I'm not saying that's the answer; that's a tool that helps address some of these things. The ICAM stuff we've been working on does some of that.
And I'll wrap up. I just want to mention Rod. He's my young grad student that's
been working with me a lot on this stuff. He's a Kodak retiree who stumbled into
my office one day and said I'm consulting now and it's getting kind of boring, can
I come get a Ph.D.? He helped invent the APS photo system you know those
little cartridges that were really cool until digital came along and blew them away
very quickly. You know, he helped invent that system working with Fuji and
Nikon and he knows some stuff, and he's a grad student which is awesome.
So he's been doing a lot of work with me and sailing on his boat and having a
great time. So thanks. I'm happy -- I went over my time, of course, so that was
my 20 minute talk. Happy to hang out and answer questions if we have time. I
don't know what the schedule is. Okay. Thanks.
(Applause)
>>: Can you increase the dynamic range of the appearance by reducing the
white to get to a point where the primaries become brighter than the white?
>> Mark Fairchild: Sure. Yes.
>>: And does it ever start to become the white?
>> Mark Fairchild: No. Because they're so saturated. There's probably a -- whoops -- there's probably a point in there where you can get in trouble because you're transitioning from those primaries down to the white. But the thing is, the only objects in the scene that are going to come up at those colors are things like the neon lights. Like your shirt is a nice saturated blue. It's not going to be up there. It's going to come out darker than our diffuse white because that's
where it is.
So that's the key about encoding. If I just took your blue shirt and put it up there
relative to white, it would look like you're wearing neon and that would -- and then
you would start to adapt to it. So the key is to only use it for the right objects.
And that's actually part of what Rod is working on to wrap up his dissertation is a
really cool algorithm that figures out where the things are that are objects and
where the things are that are light sources. Not spatially or anything, all based
on color. I mean, you can do it with some computer vision, I mean, you guys
probably could think of lots of -- you know, all of us could think of a different
approach to do that. But just based on the color, so it could be implemented in an ICC profile or something, find that and then do this sort of thing with the things
that are outside the boundary, push them up to make the neon lights glow but
don't mess with your skin tone, for example, because as you well know, you don't want the skin tones to glow orange.
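The actual algorithm isn't described here, so this is only a caricature of the general idea: flag pixels that look emissive relative to diffuse white, and only let those be pushed above it. Every threshold below is invented:

    import numpy as np

    def boost_emissive(img_linear, diffuse_white=1.0, chroma_thresh=0.6, gain=2.0):
        # Toy version of the idea: push likely light sources above diffuse white,
        # leave reflective surfaces (skin, shirts) alone. Thresholds are made up.
        luminance = img_linear.mean(axis=-1, keepdims=True)
        chroma = img_linear.max(axis=-1, keepdims=True) - img_linear.min(axis=-1, keepdims=True)

        # "Emissive-looking": bright AND very saturated relative to diffuse white.
        emissive = (luminance > 0.8 * diffuse_white) & (chroma > chroma_thresh)

        # Only the flagged pixels get scaled above diffuse white.
        return np.where(emissive, img_linear * gain, img_linear)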
We've been working with Sony on some of this. And Sony introduced a wide
gamut TV which had RGB LED backlights. Beautiful TV. We have a prototype in the lab to do the experiments on. It cost something like $10,000 or $12,000,
and pretty much all of them got returned because they didn't have a good
algorithm for taking the normal content and scaling into the wide gamut and
essentially what was happening is skin tones were glowing orange. And the
people spending 10, 12 grand on a TV care. And they got them back and then,
you know, ended up saying can you help us a little bit and do some research on
the perceptions going on. You know, it's not a secret. I'm not telling you any
secrets here. Everybody knows that happened.
So that sort of got us in there. And there's a couple -- actually there's a couple
talks on that topic. Rod's giving one and Stacy Casella (phonetic) who is another
of my grad students on that very topic at the color imaging conference which is in
Portland this year, so not too far away from you guys.
>>: Portland (inaudible).
>> Mark Fairchild: Well, it wasn't my decision. Yeah. Yeah. Well, people from
Rochester would rather go to the desert, too.
>>: Is there any applications for this (inaudible) I'm trying to think of how I might
do this because you don't have the light source usually (inaudible).
>> Mark Fairchild: Yeah. There could be. People have tried in the past to
manipulate the adaptation on a print sort of the same thing, and one of the ones I
remember, it goes back to Tektronix, and that division is now part of Xerox. It was probably one of the first color inkjet printers that was color managed, right, it was calibrated. It was before there was any ICC or any of this. And the default -- and back then we all had CRTs -- the white point of the CRT was 9300 kelvin, so
very blue. You probably remember these blue CRTs. Well, some of you do.
Some of you are too young to remember CRTs probably. I'm just feeling old. So
the calibrated print mode actually sprayed cyan ink on the paper to make the
paper the same color, chromaticity-wise, under D50 or whatever the standard illuminant was, probably D50. They shifted it to D93 and then printed, so it had this blue boundary.
But you take a print and you look at it with everything else. So you weren't
adapting to the print. If you viewed it all alone in a dark surround it would look
just like the display. But when you carry it around, it looked like a cyan print,
which was horrible. Because you can't do that with a print. You have the whole
environment.
But there are some tricks. I just learned this one from one of my students who
worked in Hollywood for a little while. Movie posters. Some of them are printed two-sided. If you're into collecting movie posters, you would probably know this. I
didn't know this until a couple months ago that you can go on to, like, movieposters.com, you can buy them. You can buy two-sided posters. And
what they do is they print a low contrast version on the back reversed and then
the normal print on the front, so under daylight conditions or when it's normally
illuminated you get to see a normal print, at night they turn on the lights on the
back and it's just like our high dynamic range display, they have the modulation
twice, two sets of ink.
And I measured one. I got one, actually it was Yoda by coincidence. I'm not that
big a Yoda fan. And the light saber and everything, he was holding his light
saber and the change in dynamic range was a factor of four. The two spots I measured, which weren't white and black -- I just measured two spots on it -- were 25 to one. They went to 100 to one because of having that extra layer. You can think of the two layers multiplying together.
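The arithmetic is just the two layers' modulations multiplying:

    front_contrast = 25.0  # measured between two (non-white, non-black) spots on the front print
    back_contrast = 4.0    # roughly what the low-contrast reversed layer on the back adds
    print(front_contrast * back_contrast)  # 100:1, the backlit dynamic range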
So there are some things you could do. And then if you were doing that, you
might be able to control a little bit more. Is there a hand up somewhere? No.
>>: So if I understand correctly I can (inaudible) has an algorithm for (inaudible)
for this (inaudible).
>> Mark Fairchild: Yes. Yes.
>>: How are you comparing the quality of that algorithm to (inaudible) graphics
(inaudible).
>> Mark Fairchild: Actually that's what --
>>: (Inaudible).
>> Mark Fairchild: Sorry?
>>: (Inaudible) realistic.
>> Mark Fairchild: That's what we're trying to do. Because we're actually trying
to model the visual system and some of the ones in, I'll just say the graphics
community, I know a lot of those people, so they're not strangers, are really just
aiming to make pretty pictures. Not that that's a bad thing. You know, a lot of the world -- you know, Kodak had a whole company built on pretty pictures and they weren't trying to make them accurate.
So we were going for realism and accuracy which a lot of people don't for
whatever reason with their HDR rendering. Sometimes they're trying for an
artistic effect, sometimes they're just trying to maximize what they can squeeze
into a small, small gamut. So we have a bunch of experiments we've done and
we're always criticized, well you forgot so and so's algorithm. Well, we can't do
them all. But we have a wide range of algorithms we've tested and generally for
accuracy it's right there at the top. And for preference it's close. Sometimes some other ones are preferred because they're a little higher in contrast or a little more chromatic or something and people just like that, but when we do the
accuracy experiments it comes out well.
>>: (Inaudible).
>> Mark Fairchild: The -- well, the reason we copied Durand and Dorsey a bit was because that one in the earlier experiments was pretty much always at the top. So the bilateral filter -- I'd have to dig up the ones that were used in the more recent experiments. But you know, we certainly haven't included everybody's, because there's a lot out there, and there's more -- you know, they come out faster than we can do the experiments. But there's a couple of papers we've done; one's just coming out, I forget which journal, Transactions on Image Processing or something like that. But if you send me an email, I can get you the details.
So it works pretty well. It's always hard to conclude that it's the best, right?
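For context, bilateral-filter tone mapping works roughly as sketched below -- a simplified version in the spirit of Durand and Dorsey's operator, not their exact implementation, and with invented parameters:

    import numpy as np
    import cv2

    def bilateral_tonemap(hdr_rgb, compression=0.4):
        # Compress the large-scale (base) layer of the log luminance, keep the detail layer.
        lum = hdr_rgb.mean(axis=-1) + 1e-6
        log_lum = np.log10(lum).astype(np.float32)

        # Edge-preserving split into base and detail layers.
        base = cv2.bilateralFilter(log_lum, d=9, sigmaColor=0.4, sigmaSpace=8)
        detail = log_lum - base

        # Compress only the base layer (anchored at its maximum), then add the detail back.
        new_log_lum = compression * (base - base.max()) + detail
        new_lum = 10.0 ** new_log_lum

        # Re-apply the color ratios and clip for display.
        out = hdr_rgb * (new_lum / lum)[..., None]
        return np.clip(out, 0, 1)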
You know, that's the traditional Siggraph model, right? You make your algorithm
and you compare it to one other and you look at it and clearly it's better and not
necessarily do psychophysics. When you're building algorithms it's hard to do psychophysics, too, and that's what we try to do: actually do visual assessments with lots of people. But even then you pick different images -- like the sunrise I
thought looked too saturated. That could be true for that algorithm with that
particular sunrise image. Maybe it does overdo that one and there's, you know,
some improvement that could be made. Yeah?
>>: You were saying in ICAM06 there was an area that you didn't like where you were (inaudible).
>> Mark Fairchild: Yeah, well Matt and I were just talking about that before.
When I'm being a purist I'd like it to just be a model that you never touch, right.
And it does whatever the visual system does and the output is the output. And
my student who did the 06 version wasn't quite as pure as me so he has a few
adjustments and I don't quite have a good handle on how much that's just, you
know, on like a colorfulness knob. So I have to look into that a little bit more.
Wasn't the main thing we were focusing on. It does well. And he -- if he was
here, he'd probably say it's perfectly realistic. It's what the visual system does.
And we'd argue about it. Which is okay.
>>: And does he do that on a per image basis or does he (inaudible).
>> Mark Fairchild: No, no, no, he had the settings once for a given display
because you invert for a given display. So he doesn't -- you could tweak those
all, a lot of parameters by image and get better results, absolutely, but he wasn't
doing that, no, it was one setting. It's just it wasn't like an auto, you know,
automatic vision system.
>> Matt Uyttendaele: All right. I think that's probably all the time we have.
>> Mark Fairchild: All right. Thanks.
(Applause)