>> Mary Czerwinski: Okay. Well, welcome everybody. It's my pleasure to introduce to you
Edwin "Ted" Selker. Ted Selker is what he goes by. He's here from CMU Silicon Valley Labs,
where he works most of his time; the rest of his time he spends freelancing and doing
really cool projects, always innovating and doing interesting stuff. Ted is very well known for his
adaptive user interface work. He was probably one of the first to start with adaptive help user
systems back in the '80s. He spent many years at IBM Research, about 14 years, and is very well
known for his work at IBM. Then he went to the MIT Media Lab, where he did interesting projects like
in-car navigation systems and context-aware systems.
I'm sure he'll tell you more about the work he's been up to more recently. But right now, Ted, why
don't you take it away.
>> Ted Selker: Thank you, Mary. I've known about this lab more than I've ever been here. I may
have been here once when you guys were sponsored by Media Lab, and I sent Andrea Lockhart,
one of my best students, here one summer to do some work. Now she's a Georgia Tech
professor.
So I've been thinking about the intersection between AI and user interface for all my career, and
these days what I realized at some point is that a lot of what mattered in my successes was the
impedance match that you try to create between the system and people's expectations.
And most recently I've kind of taken what I used to call context-aware computing and started
thinking about what the layer on top of that is. It's a grant proposal that I'm waiting on from NSF.
And so what it means for a system to be considerate is what we'll talk about today. So here's a
picture of a cigarette that I made when I was kind of taunting Philip Morris about the fact that
maybe they had something to offer besides the drug delivery in the bar, back when bars were
starting to stop letting them have cigarettes.
And we'll talk a little bit more about what that's about, why that's interesting. And yet the whole
idea of social connection probably is essential to every use of everything we have. It's kind of
what we live for is social connection.
I used to run -- oh, darn it. See, things can be very sure that they're important and it's not
always -- maybe it isn't up to them that they're so important. Maybe it's up to me.
So anyway, I ran a group I built at IBM back in the early '90s, called User System Ergonomics
Research. And we had physical, graphical and cognitive interface groups.
This here I will just take a moment to talk about, just because this actually went to product, and it
used all sorts of techniques that are quite different from Bob and the other ways that Microsoft
has tried to simplify things.
So the idea was to give people access to the full fancy interface. If you saw it for printer install, it
would look like printer install with this strange overlay. I did a bunch of visual work to show
that people could find things in an overlay like this as fast as if it were the only thing on the
screen, and they could find all these other things underneath as fast as if there was nothing on
top of them.
So it was kind of a neat thing: this see-through scrim allowed the adaptive agent to decide
how to present help at the novice, intermediate, professional or expert level, based on the
expertise it had seen you demonstrate, to help you through these yucky things
like printer install and all those things.
OS/2 doesn't exist anymore. That went away. But, anyway, I bring that up because you guys did
so much interesting work in that area, and I'd be interested in following up on that.
So I did all this stuff there. Then I went off to the MIT Media Lab, because my dad always wanted
me to be a professor. They asked me three times. I actually don't like Boston so much; I wanted
my kids to come from Palo Alto. And I came home after 10 years. I'm teaching at CMU Silicon Valley.
And, for example, right now I'm teaching -- it's so exciting -- an Android product
development class. We're four and a half weeks in. And out of 22 students I have like 14
apps that are demonstrable, and they use everything from GPS to maps to accelerometers.
It's just amazing how the program development tools of today bring in all of those fancy window
debuggers and everything we loved back in the machine days that I lived in, and allow people to
have a sensor and effector data platform that any roboticist would go crazy to have. That's that.
So that's just intro about me. At my group at MIT I made lots of stuff, and we'll talk about some
of it in terms of what it means for making the considerate world as we go through.
But the idea is that we use virtual sensors. A virtual sensor is a model of task, user and system
that allows a sensor to have a considerate phase instead of just a calibration phase.
And the goal is to respect human intention. So here's a bunch of the stuff I made at the Media
Lab. The power [inaudible] probably isn't so relevant to today's talk. But these were
typically made as platforms to allow research to happen. For example, this floor here is being
used in an art exhibit in New York.
This floor was made for commenting on social interactions between people. I made this cute
one-transistor-per-square-foot approach that we ran our lab on, with the idea that maximizing
implicit communication -- like you just walking around -- is going to reduce how much interaction
you're having with systems, because actually it's the people and your tasks that we care about.
And we want to take the tool out of the task, as [inaudible] would say.
So the idea with that floor is shown with these little guys making social commentary: if
there are some people standing next to each other, it will kind of make little butterflies. If there's a
bunch of people standing in one place and one over there, it will paint a podium on the floor
underneath them and shine a light on the floor when they give their talk. If there's no one in the
room with you, it will take you through my lab -- all these demos all over the place -- and present
them to you. And if anybody is in the room with you, it becomes demure, because it gets annoying
to have computers talking to you when you're trying to talk to other people.
So there's the sensor, right. And these are some of the things that it did. So in terms of trying to
understand what would be considerate, one of my most interesting projects was something with
public television. What we built -- what I built -- was a website to go along with The
Forgetting. The Forgetting is a one-hour special about Alzheimer's. My idea about making
this website, when I was helping them develop that public television program, was that I would help
people connect with each other: we'd get lots of photographs of their life on the website,
and we'd get them e-mail so they could keep in contact with people. But working with experts I found
that was the opposite of what you want. What you want is to make their life work and make them
calm. The typical scenario is that when people come to visit an Alzheimer's patient, both of them
end up stressed and on elevated meds for a few days, if not in therapy.
So making kind of eight different interactive scenarios for them where they could interact as much
as they were able to was really what we ended up with. And it worked great. So this is a puzzle.
And with this puzzle you can pick whatever photograph; I've got a dog and a cat playing with each
other. It's putting itself together. This piece is about to fall down there. When this whole thing is
full, it will make a nice bezel around here, and it's going to do ta-da and be very proud of putting
the puzzle together. But if you start fiddling with these -- with the cursor keys, with the mouse,
anything -- it will let you place them. So you can go faster.
Now, if you do that and show that you're succeeding, it will make it harder. So when the
11-year-old kid is over there playing with this puzzle, it gets really hard: pieces flipping upside
down, going to the wrong place, and he has to fight this thing. And that way everyone gets the fun
of watching this totally competent kid doing this puzzle.
Or you can just sit here and watch it do it. So it's kind of adjusting. It's adaptive. It's, in
other words, making adjustments to you. That's another example of taking into account the fact
that an Alzheimer's person is critically sophisticated.
They'll tell you if somebody's instrument is off key by a little smidgen -- trust me,
I've been there. But they are productively challenged; they can't even make a sentence, usually.
So this kaleidoscope is another example where it's making these pictures. And they're all kind of
self-similar until you hit the keyboard. And then it takes that as criticism and changes somewhat.
And then it kind of is trying to build a hypothesis of what it is about these patterns that you like.
And so you find these people feeling, and actually being, somewhat creative, when in fact it's kind
of taking art-historian feedback and making an art historian into an artist, which is something I've
talked about. I've been building this stupid kaleidoscope since '79, actually, but I've finally
found a place for it. And people like to play with it.
There's a bunch of other things. People with Alzheimer's like packing; I find myself giving them
flowers to pack. Packing suitcases makes them feel like they have to move, like something big is
changing in their life. It's much more androgynous to have them put tools around outside in a yard
than to have them play with tools in the basement, which is dark -- they can't stand dark -- or
kitchen things, which are not androgynous.
Anyway, people skipped lunch to play with this thing. You can go online to play with it. It's still
there. So --
>>: Arrange another time. Wouldn't tomorrow at 10:30 a.m. work for you?
>> Ted Selker: So this is an early project I did when I first got to the Media Lab. You see that
floor again. And it was an IUI paper. So basically the threshold is kind of an important social
demarcation. Are you in? Are you out? Are you allowed to be part of this conversation or not?
So we said, what can we do about that? When I'm in my office alone, with nothing on
my schedule, and Win Burleson, now a professor in Arizona, knocks on the door, it pops
something up on the screen and I say let him in. If there's someone in the room, it asks him, do you
want to disturb Ted, when he knocks on the door. If he says yes, it will try asking me again. If I say,
eh, it will put up a calendar. The calendar, of course, has a model of the organization, too. So it
gives him different options when I'm speaking with the students than when my wife comes, when it
shows all my schedule. And when Nicholas Negroponte comes, it says come on in. Doesn't even ask.
He built the damn building. Very simple expert system, kind of a rules system approach.
But what's neat about it is that it codifies a bunch of the things that we really do use to
demarcate that very important social moment.
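The knock-at-the-door policy Selker describes is a small rule system. Here is a minimal sketch of that kind of expert system; the rule names, conditions, and responses are illustrative assumptions, not the original implementation:

```python
# A minimal sketch of the door-knock rule system: who knocks, whether the
# occupant is busy, and whether a meeting is on the calendar decide what
# the threshold does. All rules here are hypothetical.

def door_response(visitor, occupant_busy, meeting_scheduled):
    """Decide what the threshold system does when someone knocks."""
    if visitor == "director":            # e.g. Negroponte: never even asks
        return "come on in"
    if not occupant_busy and not meeting_scheduled:
        return "ask occupant"            # pop something up on the screen
    if meeting_scheduled:
        return "show calendar"           # offer a time instead of entry
    return "ask visitor to confirm"      # someone is in the room: double-check
```

The point of the sketch is just how little machinery such a rule system needs to demarcate that social moment.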
>>: This dog, he's going to bark.
>>: Amazing.
>> Ted Selker: So that's Diane Sawyer kind of playing with me and my dog. My dog has an
infrared sensor on top of it. This is IR -- it's about identity: I, R. It also uses infrared.
And the circuit board has an eye with an eyebrow and an R on the back. The point is it has one
photodiode looking at your eye. With that one photodiode it can distinguish blinking nervously,
staring intently, gazing around without much direction, closing your eyes, winking, eyes closed.
So that was kind of my goal: to show that you could see all of these social remarks with a $1
Microchip PIC, and even with bright lights on it works, it seems. That's why I have that MIT cap
on, by the way.
But you stare at this dog, he starts barking. You stare at a person at a party, they'll get your
business card. You stare at a demo, it will play. We had two videos set up; with the one
photodiode and the IR beacon, we could tell which one you were interested in and change the video
on the other one. Saab used it to notice how people in cockpits were focusing on their tasks.
So I was kind of happy about that. And here I am again, on another version of Good Morning
America, actually in bed on Broadway. It was a very scary thing, because they've got 100,000 watts
of light and I was using structured lighting this time, something called Blue Eyes. I don't know if
you've ever seen that structured eye tracking system that I built; my group at [inaudible]
actually ran that project at IBM.
But, anyway, the whole point is that maybe where we're staring might not be as important as how
we are doing affectively. What's nice about this bed is I can lay anybody in this bed in my lab;
it's the longest running demo at the Media Lab.
You'd lay anybody down in this bed, and the nice thing is it held their head stable, which is nice
for eye tracking. And then if they woke up, the alarm turns off. If they stare at their e-mail, it
pops up on this thing. If they blink nervously, it might go away.
If they look at the TV, it turns on. And if they blink nervously, it will change the station. So
we're just using this little tiny language of things we already think we do; it's very easy to teach
people the gestures they know. And I think a lot of user interface problems happen when we impose
languages that are foreign, as opposed to ones that augment what a person already believes these
things are kind of for.
And that's what the ceiling looked like. It's those hills. There's books and e-mails scattered
around on the hills there. Now, this all started when we got the --
>>: Even using cameras to track your eyes. The cursor moves to what you're looking at and the
computer puts that information on the screen.
>> Ted Selker: That's a younger Ted, sorry. But that was an ABC News segment about this
thing called Suitor. Paul Maglio and I did this. When we got Blue Eyes working originally, I said,
look, people can't stand to be told to look at something. In fact, if they look at things very long
it's kind of uncomfortable. All these eye tracking systems that expect you to stare at something to
select it and move it over there ignore the point that our eyes are actually mostly a guard dog,
right?
They're mostly saying hey there's no tigers coming right now. And anything you look at you can
close your eyes and see it for another second anyway.
So what we did is we put this thing down -- it was back when people were starting to put banners
across things. We had it streaming stuff: a little bit of heuristic would choose help items from
the command you were typing in your command editor in C++, or some news.
If something attracted your attention, we could tell the difference between staring for a third of a
second and just reading the headline. And what's neat about a third of a second is that the
fastest I've ever seen anybody do pointing, with any of the devices I've designed or any other
people's, is 0.9 seconds. So what that means is that we can let people make selections -- and pop
up the article about the robot world inspector -- faster than you can make the selection with
direct manipulation.
So, sorry, Ben. Ben Shneiderman always says that direct manipulation always wins. And there's
a whole lot of stuff about eye tracking that's kind of interesting.
Back in the '60s already -- it's not that long ago -- there was this guy doing eye tracking
studies with a famous painting, Coming Home to Dinner. And he asked people various questions.
What's neat is he could reliably get completely different eye tracking patterns based on
what he asked of them. So that's kind of exciting: you can tell what people are
thinking pretty reliably, even if you can't get people to point at a word and select it with
eye tracking.
I had a master's student, Mike Lee, make this system -- I guess it doesn't have a video
running -- there it is -- where basically, as you moved your eyes around, it grouped all of
the MIT Media Lab sponsors. You could tell if a person was interested in Intel and HP and
Microchip and Philips at the same time, and it kind of grouped those sponsors together, just by
watching your eyes. There's Blue Eyes.
And that was pretty successful. But the most successful thing about that project was that if you
watched the eye tracking vector of where a person was looking, you could actually get a much
more accurate -- I think it was five times more accurate -- idea of where the cursor was, by
watching the ballistic motion over and then back. You see where those two vectors meet, and that
vertex is much more precise than if you have the guy just stand there and stare at the place while
you watch the tremor and all the other awful things the eye does to keep from being stationary.
So that was probably, I think, one. But we tried lots of other modalities, too, for seeing if we
could watch people's intention. Andrea Lockhart, now a roboticist, did this great piece of work
with Florian Mueller, now in Australia. What we did is we took the microphone from a camcorder
and pointed it at the person.
We also put one of these galvanic skin response detectors on the wrist strap of the camcorder.
And I have an attitude about this. I can point to this: this is the wrist response, a noisy
channel. Who knows what happened here -- maybe there was a bad electrical connection, maybe the
sun went away, I don't know.
But if we looked at what our support vector machine had done with the training data, we found that
we could very reliably find these three points where Andrea was reacting to the stuff she was
videotaping.
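The original work trained a support vector machine on GSR and microphone features. As a stdlib-only stand-in for that classifier, here is a sketch that flags "reaction points" where the noisy GSR trace rises well above its trailing baseline; the window and threshold are assumed, not from the talk:

```python
# Finding "reaction points" in a noisy galvanic-skin-response trace.
# This threshold detector is a hypothetical stand-in for the SVM used in
# the actual work; it just shows the shape of the pipeline.
from statistics import mean, stdev

def reaction_points(gsr, window=5, k=2.0):
    """Indices where a sample exceeds the trailing mean by k standard deviations."""
    hits = []
    for i in range(window, len(gsr)):
        base = gsr[i - window:i]          # trailing baseline window
        m, s = mean(base), stdev(base)
        if s > 0 and gsr[i] > m + k * s:  # sudden arousal spike
            hits.append(i)
    return hits
```

On an hour of footage, the few indices this returns are exactly the candidate moments for automatic editing or annotation.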
So this is an hour in Harvard Square. At one point she was watching some people play drums at
the entrance to the red line, which everyone likes to watch, and she was enjoying that. And
giggled a little bit. Another time somebody was taking a video of her taking a picture of them.
The third time there was something probably legitimate she was videoing.
But the point is this kind of gives us the data we need for automatic editing or annotation
metadata, using the sensors that already exist. I think it's just amazing how for a long time we've
said, oh, we need more sensors, more sensors.
Well, yes, we needed more sensors, and this is great. You know the Motorola phone that's coming
out just added a gyroscope to a sensor suite that already included an accelerometer, a compass,
and GPS. It would be lovely if they told us why before they did it. The great news is now there's
this huge marketplace and people are thinking about what to do with these sensors. But, anyway,
we'll keep on.
The point is that there have been good sensors in lots of things that we, as scenario designers,
could use and get somewhere with. Now, this one is really one of my favorites. Andrea's
master's thesis was an e-mail system that she actually came up here and tried to encourage you
guys to think about that summer.
And what happened is we took an e-mail system and made what I call the ransom note interface
on top of it. This ransom note interface put these weird bars on messages, changed the fonts,
and put colors on them. Pretty ugly. It doesn't take any more real estate per message, but
comparing this to a normal e-mail system taking the same amount of real estate per message, we
got some differences.
And these colorations -- you'd imagine that red might be scary. Actually, it means that
it's something of big importance. So we maybe even got the valence of our ransom note wrong.
Maybe green would be good and red would be bad, but no, we were too stupid for that.
All we did was look at a bunch of training data. We'd thrown a bunch of parties where people
would send social e-mail as part of getting to know the people they'd meet at the next party. We
wanted social, not business, e-mails.
Again, a little bit of machine learning. And we said, gosh, these ones are urgent, these are
people you like, these are people you want to ignore, and we gave some visual indication of that.
Okay. So when we used that same training data to look at this e-mail, the bad news is that we got
60 to 75 percent coverage. It was terrible. Barely -- if you can only tell within 60 to 75 percent
that something's true, is that useful? Well, it turns out it was: a significant difference between
people having the e-mails annotated with what was important and what was urgent, compared to
the other ones.
Not in how many e-mails they read, but in which ones they responded to. And so we were very
proud of that. It's a very exciting thought that even noisy machine learning can improve people's
performance. And not only that: in my experience, with almost all the experiments I've done, when
you have an improvement in performance, you have an improvement in perception. If you don't,
then you have to run another experiment.
So I was really excited about this. Still am. And I believe that maybe we can make a better
interface to show off that stuff. But it's also exciting to me that something other than
quarantining e-mails into folders might be useful. Because quarantining e-mails into folders, as
we've done for 40 years, is actually annoying.
>>: Who put the cloud on the message, it was an [inaudible].
>> Ted Selker: Yes. So it was following our -- we had like six different attributes based on
training data from these e-mails that we had gotten from a bunch of people just interacting.
So, yeah, I think there's a lot that can be done in improving communication. I think this
affective response of the system -- when it's not meant to just do things for you but to annotate
them -- is something I've been pushing for my whole career. I say there are two kinds of agents:
assistive agents and advisory agents. Advisory agents teach you to fish; assistive agents give you
a fish. And there are a lot of other reasons I'm trying to promote why advisory agents are good.
If they're wrong, you can laugh at them, as opposed to being destroyed by them.
Here's another communications example. This was best paper award at IUI by Hugo Liu. I don't
know if any of you know of him. But he's an amazing star. Unfortunately he's trying to make
companies rather than be academic. If you have a chance, work with him.
And we had this common sense knowledge base, Open Mind. Does anybody know about Open
Mind? It's got a million utterances people made. There's a lot of problems with Open Mind in my
view. But one thing that's got to be true is that if we look through it and see what kinds of things
people say with the word anger or fear or surprise or happiness in them, that gives a lot of
different alternatives. So what we did is we used that to color a response about the e-mail that I
was sending.
So it says, I haven't talked to you for a while. When you type that -- I'm not going to run the
video -- it looks kind of sad. I was wondering how you've been: kind of expectant. I had a
pretty crazy weekend: that becomes the surprise. I went sky diving last Saturday outside
New York. What's interesting is we used to have text up here -- surprise, fear -- and no one
responded. No one noticed it. As soon as we put these shabbily drawn, sorry, face images up
there for different emotions: huge impact. It changes what you're typing.
Does it surprise you that if you're flaming away and there's somebody reacting to you, you might
think about it, reflect, and improve your e-mail? Anyway, no one does that.
So the question of how you respond to things socially was kind of at the center when Chrysler
and I built this concept car together. When I got my hands on it, the first thing I did was start
looking at all these -- we had 209 sensor systems in there.
I just threw them all away, and all I looked at was what's called the CAN-2 interface bus:
braking, steering, gas, speed, turning radius, and actually what's in the cup holder, because we
have all these sensors around.
I just had these two knobs. This one was called affirmation and this one was called criticism. The
goal is to move those up and down. Well, in the end we did all sorts of experiments where, instead
of having people actually move them, we moved them for them.
We found out various things. I think this is the only slide on this, so I'll tell you the story. The
story is that if you wait a quarter of a second to two seconds, when nothing else is happening,
people listen much better. Not what I was expecting. I was expecting immediate feedback to be
important, right? That was Tally Sherone's thesis.
The second thing that we learned, which was even more exciting, was something I believed
beforehand: that a variable schedule of reinforcement would be better than a predictable one.
And we learned even more. We learned that if you say very many things that are negative, you
are completely going to make people make more errors in their driving.
That's the most important result. If you make positive comments, rarely, it will help. So: rarely
positive, very, very infrequently negative. Don't respond every time they do something.
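The findings above amount to a small feedback policy: delay the comment a quarter second to two seconds, praise rarely, criticize very rarely, and stay silent most of the time. A sketch, where the probabilities and phrases are illustrative and only the qualitative rules come from the talk:

```python
# A sketch of the driving-feedback policy the experiments suggest.
# Probabilities and messages are assumptions; the delay range, the rarity
# of positives, and the extreme rarity of negatives are from the talk.
import random

def feedback(event, rng=random):
    """Return (delay_seconds, message), or None to say nothing (the usual case)."""
    if event == "good" and rng.random() < 0.1:       # rare positive reinforcement
        return (rng.uniform(0.25, 2.0), "nice merge")
    if event == "bad" and rng.random() < 0.02:       # very rare negative comment
        return (rng.uniform(0.25, 2.0), "blinkers, please")
    return None                                      # most events: stay quiet
```

The randomized trigger also gives the variable schedule of reinforcement the thesis found to work better than a predictable one.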
And it's the funniest thing -- one of the worst things about being a user interface person who
believes in ethnography is that we go out, we build this thing, we take people in it, and we watch
how they feel.
Guess what: everybody who gets in it, when they pull away from the curb, says, oh, that's so
cool. I'm sitting there -- I built it, it's a beautiful car, blah, blah, blah. But it turns out that
if it says "blinkers please" when you move away from the curb, that's an immediate negative. And
in fact, although you might say [laughing] -- that's what the press said, and everybody else --
that's not going to be a productive relationship. And I think that's what's missing in a lot of
human factors experiments: guess what, we pay attention to each other.
I think a lot of our work can be a little too focused on -- yes, please?
>>: So is this what has informed your perspective of the assistive versus advisory?
>> Ted Selker: Always. I mean, I'm always testing those things. And you know, it all starts
with -- as you start building anything that's an adaptive interface, people immediately freak out,
and they've been freaking out since the beginning of the '80s, saying my gosh, my .cshrc is not
what I thought it was. Right?
So many bad experiences with brittle systems that change underneath us. And so what I've
done is dozens of experiments showing ways of making systems that modify themselves in ways
that don't disrupt and destroy your productivity.
And that's the exciting news, is that what I set out to do at MIT and I feel like I achieved with
those 50 different examples, was show across domains, in natural and even in dangerous
settings like kitchens and cars, you can make productive improvement to people's performance
using AI systems that actually modify their treatment of you and do it in a way that doesn't disrupt
and distract you. And so that's kind of the bottom line.
And, yeah, there's probably some -- you know, there's just some things you can do. But it's so
much lower hanging fruit to do things that are advisory. Also, there's a lot of assistive things that
work. ABS brakes work. Well, maybe using a car without ABS brakes after you've used one with
ABS isn't so good anymore, it turns out, right?
But the direction that the automotive industry has gone is very assistive. It's going to park for
you. It's going to find the car in front of you. And yet I kind of tend to say, yeah, but there's other
things you can do. They're cheaper. They're simpler, and they're better for learning, in some
cases. If I make blanket statements people will find counter examples.
But I think the most important thing is to, what we're looking for, is things that don't make people
feel disrupted.
>>: So to follow that up --
>> Ted Selker: I'm sorry. Let me stop that.
>>: So in the example of the turn signal thing, when the system knows that you don't normally
turn the blinker on when you turn left, rather than saying you didn't turn your blinker on, would
you say it should turn your blinker on for you?
>> Ted Selker: No, that would be assistive. What I want to do is wait until you use the blinker
and say thank you for blinking. That's what I want. If I have to say "blinkers please," I won't do
it in a situation where it's a four-lane road, you're turning right from the left-hand lane, and the
light's orange. That's not when I'm going to tell you about your blinker. I don't want to add to
your cognitive load. You're already in a dangerous situation; don't make it worse.
>>: What would be the problem, what would be wrong with doing it for you? Are you noticing
that you've made these particular motions to --
>> Ted Selker: To make sure you do it right every single time, and the person doesn't ever need
to know about the blinkers -- that's fine, there's nothing wrong with it.
The question is about making robust systems. These are collaborations. Everything we do with
computers is a collaboration. It's a mixed initiative event, no matter what it is. No matter what
you think it is, it's mixed initiative.
People will do -- I don't know if anybody's old enough to remember when they started putting the
things, the door is open, the door is open, in cars. Do you remember those things?
>>: The door's ajar.
>> Ted Selker: See, I actually didn't ever -- I wasn't there. But it's amazing how awful it is,
regardless of what those policymakers believed they were doing to help us, right?
Okay. This is --
>>: Press buttons to interact.
>> Ted Selker: This was a lot of work. This is 14 displays on a Pepsi machine, all coordinated
with Max/MSP. It was a mess. Anyway, the deal is this was motivated by -- well, Pepsi was a
sponsor. But one of my students went to a mall and there was a map on the front of a Pepsi
machine. Maybe it wasn't a Pepsi machine. Oh, please, stop.
[music]
>> Ted Selker: There were a bunch of people standing around looking at this map. And my MIT
kid has a lot of self-importance. He walks up and buys a soda. All of a sudden everyone starts
buying sodas. Were they buying sodas because he had shown them how? Were they buying
sodas because he encouraged them? Or did they all want to buy a soda but not want to disrupt
everybody looking at the map, or what? We don't know. But my goal was to get all of that
gigantic labeling off the sides of these soda machines and make them into a Starbucks.
So as you walk up to this, there's an eye tracker and a face tracker -- this guy's thesis --
looking at you. If there's one person, it puts up a game for one and she introduces you -- you
can play games. And you don't actually have to tell people they can buy soda, but we tell them
they can, because people came up to this machine even when it wasn't plugged in and put money
into it. It's amazing; everybody is very much trained to put money into slots. I don't
understand it.
Anyway, it did all sorts of things. It had news on when you were walking by, as an attractor. It
would play puzzles that were collaborative when there were two people in front of it -- two-person
games when there were two. A lot of fun. And why did I say Starbucks? I believe there are a lot
of reasons people put the soda machine next to the bathroom in the back of the stairwell: they're
so garish. Now, this isn't any less garish, but it at least would maybe attract people to hang
around, and there's a lot of things you could do. Maybe at airports Pepsi could buy all those big
screens that say when the airplane's taking off and use that real estate as well. All right.
Enough of that.
We spent a lot of time. Ernesto Arroyo, a Ph.D. student, did his thesis on disruption and interruption.
Disruption is when an interrupting -- question?
>>: On your Pepsi machine thing, what was your goal?
>> Ted Selker: My goal was to demonstrate that engaging people socially would be better than
just using standard advertising methods of branding and to get them interested in being there.
There's some value --
>>: To sell.
>> Ted Selker: Yes, Pepsi. But I think there's other things you can do with that. Once you have
all that infrastructure there. And this stuff was supposed to go to Beijing. You have WIFI and
there's a lot of things you can do once you throw some technology into a machine like that. Yes?
>>: The water cooler metaphor -- how people go around the water cooler and socialize -- they don't
see it around any machine. Did your prototype vending machine enable some similar --
>> Ted Selker: That's the whole idea. We tried a couple different versions. One of them was we
literally took a great big screen and slapped it into a different vending machine that just had space
for it and made a bunch of tables and chairs around it and made it like try to tell who was around
and stuff like that and do some NetStumbler kind of crap and really embarrass everyone.
You've got to try these weird ideas.
>>: Did you perform any longitudinal -- long studies?
>> Ted Selker: We only spent two years getting that thing to run. It was -- that was a lot of pain
making that thing run. It was way too ambitious technically for us to get to anything longitudinally.
You saw a couple hundred thousand dollars right there. If somebody had a couple million, sure,
we'd turn -- I tried to get Pepsi interested. They were so flummoxed. It was really fascinating how
disruptive seeing something like that was to their management because they want to be
innovative. That's the job of the Media Lab. They want to be innovative. When they see it, they
don't know how to start. They don't know, gosh, are we going to start making these things, how
are we going to interact with somebody? Who would even know how to make content for this
thing? Introductions, product development.
>>: When did the project happen? -ish?
>> Ted Selker: It ended 2008 when I left. When did we start it? I started talking about it in 2001.
So I didn't build anything right then. I try not to build these things, because look at what a mess,
look at what hard work that is. So I kind of talked about it. And it wasn't quite good enough for
them.
I talked about it some more. That wasn't quite good enough yet. They send me this machine,
and I start finding students and these things are progressive. But yeah.
And that particular one, I'm sorry to say, there's no publication that I can point to. There should
be, but I'm not sure that we -- no, I don't think there's a publication. It's been --
Okay. We played around very early on, late '90s, doing mouse trails to see where people's eyes
were moving by what they were doing with their mouse, and a lot of the time when people's mouses
are moving you can learn a lot. You know that now. But back then it didn't seem as clear.
One of the early little things we did we had this video of a baseball game that would come up on
your screen, and if you were active doing stuff, it would stay small. If the crowd went wild, it
would get bigger.
If you weren't doing something, it would get big. So this idea of kind of modulating what was
coming to you based on your activity level. And Sean Sullivan, who works here now, and I don't
think anyone here knows Sean Sullivan. You do? He's totally brilliant.
And you should be making all sorts of use of this guy. Anyway, one of the first -- he did lots of
stuff. He made the Blackbird system that's underneath the CarCoach. He made this, which was
something where, for any window that popped up on Macs, Unix machines or Windows, it would let
you have this McFarlane negotiation thing where you decided whether you wanted to have it pop up,
or have some mediation, or have it scheduled to come up later, or negotiated. It annoys you just to
find out if it should annoy you.
That was all a fine idea of how to deal with interruptions. But we wanted more. So Ernesto
Arroyo, who is now in Spain, made this thing where he took some of the low level stuff, like you
guys do, mouse and keyboard, and then some high level stuff: Are you reading? Are you
thinking? Are you interacting? And we did some Open Mind kind of analysis of the text you were
typing to figure out what topic you were in. So we had this kind of
interruption model with high level and low level things, and decided how and when to disrupt a
person.
So there's the interruptions coming in, whether you're going to stop them from doing what they're
doing. And the experiment that we ran was we made -- by the way, two systems were made. Sean
Sullivan's unfinished master's thesis made a neat one -- a great experiment at Wellesley College.
You should ask him about it. But Ernesto made this complicated thing. You're supposed to order
things: get e-mail messages about what to order, look online to find out about them, and
do some calculations on the calculator. Someone was IMing you, and it was quite a stressful
job we gave people. They had to take breaks and everything.
But what we did is we just decided, based on that model, the interruption model, to delay IMs up
to two minutes. So we would regroup them so they were about the same topic. We would delay
them or not, and -- if they were on the topic you were typing, doing something on, we
would present them.
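The delay-and-regroup policy described above can be sketched roughly like this. This is a toy Python model, not Arroyo's actual system: the class name, the message format, and the matching-by-exact-topic rule are all my illustrative assumptions; only the two-minute cap and the regrouping idea come from the talk.

```python
from collections import defaultdict

MAX_DELAY = 120  # seconds: the talk mentions delaying IMs up to two minutes


class InterruptionMediator:
    """Toy sketch of topic-aware IM mediation (names are illustrative)."""

    def __init__(self):
        self.pending = []  # list of (arrival_time, topic, message)

    def receive(self, topic, message, now, current_topic):
        # Deliver at once if the IM matches what the user is working on.
        if topic == current_topic:
            return [(topic, message)]
        self.pending.append((now, topic, message))
        return self.flush(now, current_topic)

    def flush(self, now, current_topic):
        """Release IMs whose topic now matches the user's activity, or whose
        delay hit the cap, regrouped by topic so related ones arrive together."""
        due = defaultdict(list)
        keep = []
        for arrived, topic, msg in self.pending:
            if topic == current_topic or now - arrived >= MAX_DELAY:
                due[topic].append(msg)
            else:
                keep.append((arrived, topic, msg))
        self.pending = keep
        return [(t, m) for t, msgs in due.items() for m in msgs]
```

An off-topic IM sits in the buffer until either the user's topic changes to match it or two minutes pass, at which point it is released along with any other IMs on the same topic.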
And we had two different cases. One case was, hey, get as many orders as you can: 30 percent
performance improvement. And the other was make fewer errors: 25 percent fewer errors.
And this IM -- some of the IMs were social, some of them were business related. And so that was
very, very exciting. And the companion experiment that Sean did was he made a different
system with the same idea for a model in it, because he always builds his own systems, or did
then. Now he probably works with corporate good stuff.
He took it over to Wellesley, and he turned their system -- their filter system -- on and off remotely.
They much preferred -- his thesis wasn't completely published yet, which you guys can encourage
him to do.
But my understanding they greatly preferred this thing being turned on, which is amazing for
people that are social butterflies as college students might be. We did some stuff with cell
phones I'm not going to waste our time on.
I kind of find myself saying something about pointing devices when I talk about impedance
matches. One of the things that I'm known for is the pointing device in IBM's notebook. I did a lot
of work on the ThinkPad, actually. That's a whole, long, long story.
But the interesting story is this pointing device, where I had read -- when Stu Card's and
Tom Moran's book came out, I read it turns out there's like 1.7 seconds to go over to the mouse and
back to the keyboard. I thought wouldn't it be nice to have the pointing device where your hands
were, so you didn't have that. A few pages later I actually found that the knee bar was actually
faster at pointing than the mouse for the first 10 or 15 minutes, in this old English-Engelbart paper.
And what surprised me about that -- by the way, recently I confronted English about this and he
told me that it wasn't true; he's wrong. I mean, people didn't realize to look at the data that
way, to say, well, since the knee has no mapping in your mind, how could it be a good pointer? The
only reason it was any good is you don't know how to use a mouse or a knee for pointing, and it's
dominated by the back and forth. I spent a lot of time trying to make a pointing device in the
keyboard. I started in '84 or '83.
By the time I was done we had this experimental paradigm where you ran a little racetrack on the
screen to see how fast you could make selections. And one day I had made a transfer function
as part of the cycle -- in two minutes you make the transfer function, you try it out. The transfer
function determines, by how hard you press, how fast it goes. Why am I going on so much?
What we found out was there's a very painful transfer function -- it hurt your finger to use it -- that
was 25 percent faster than anybody had ever reported a joystick being able to make a selection.
So that was amazing. And it turns out it wasn't because of the greater dynamic range from
pressing harder. It was because it hurt to go faster than your eyes could track. There's
eight or nine different cognitive modeling kinds of things that ended up in that pointing device.
Every single one was something we had an idea about and we were wrong, and through our
experiments we found how to make a cognitive model around that topic, that idea, that made a
difference.
For example, one of the things I found out was you only have four or five bits of control in a finger.
Force control. Repeatable control. And by knowing that and by learning that we were able to
make a 15 percent performance improvement for slow selections, which I'll talk about too.
I'm not going to talk about the joystick.
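The transfer-function idea above can be made concrete with a sketch: speed grows with applied force but plateaus so the cursor never outruns what the eye can track. Every constant here is illustrative (this is not IBM's actual TrackPoint function); only the general shape -- force in, speed out, capped by eye-tracking speed -- is what the talk describes.

```python
def cursor_speed(force, max_tracking_speed=200.0):
    """Hypothetical force-to-speed transfer function in the spirit of the
    TrackPoint story: a gentle region for precise selection, acceleration
    as force grows, then a plateau near the speed the eye can track.
    Units and coefficients are invented for illustration."""
    if force <= 0:
        return 0.0
    speed = 2.0 * force + 0.5 * force ** 2  # slow start, then acceleration
    return min(speed, max_tracking_speed)    # never outrun the eye
```

The talk also notes a finger has only four or five bits of repeatable force control, so in practice the input side of such a function only needs to distinguish on the order of 16 to 32 force levels.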
>>: What's on the chart there?
>> Ted Selker: There are ten. There's trackball kind of stuff as the slowest. That's what it is. If
you want to know the amount of time, a mouse selection for type-and-select
is -- by the way, if it was just selection, the graph looks different, because I'm getting the extra
.9 seconds of the transfer time going over to the mouse.
>>: Nine-tenths of a second?
>> Ted Selker: No, it's -- think of it as ten. So basically the number is going to be about
1.2 seconds.
>>: Okay.
>> Ted Selker: If you want to get more detail I can give it to you, but probably not right now. I'm
worried that it's getting to be five minutes. So I'm going to be more rushed as I talk about
things, I'm sorry to say.
There was a lot of work to try to make the integrated sensor net environment for the kitchen. The
Internet Home Alliance was one thing where they had the PDA where you could turn on and off
the stove from your car and burn somebody that happened to be sitting on the stove.
I kind of played around with autonomous things. And the simple first one I made was the talking trivet.
The talking trivet has this model of cooking in it. The idea is if it feels like it's on a really, really hot
thing, maybe it might say "fire," because it's on the burner.
If you put it on something that's above 212 degrees and less than 454, maybe it says
"ready to take out?" with a question mark. Because that's probably -- at that point the
outgassing has started, all the water's outgassed and the bread's going to start getting brown. Two
sensors, one on front, one on back. If you put your hand into an oven that's 425 degrees, and you
have something cold, like a pan in your hand, what should it say?
What it says is "I'll remind you to take that out in 15 minutes." Now, it's a roast. Why would you
ever say that about a roast? Because if you put a roast into a 425 degree oven, it's going to
blacken on the outside in about 25 minutes, and it won't be hot on the inside. It has to be 167
degrees to be cooked, pretty much no matter what it is.
So by doing that, it makes you think, oh, I better turn it down to 275. So it's this idea of: how do
you respond in a way that makes a person think the right thing to solve their
problem? And it's kind of fun to realize that a microchip with no memory can be smart
enough to have a model of cooking that works within the context of an oven. The whole point is
that the context is constraining what we expect of a person.
By doing that we can make things that have fidelity. You put a cold pot on the counter and it says
"needs rewarming" -- because why else would you put a cold pot on the thing? All right. Enough
of that. I'm not going to talk about progressive relationships right now.
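The trivet's cooking model amounts to a handful of threshold rules over its two temperature sensors. A minimal sketch: the temperature thresholds (212, 454, 425, 167 degrees) come from the talk, but the rule ordering, the sensor naming, and the exact message strings are my guesses at how the device might combine them.

```python
def trivet_message(pan_f, ambient_f):
    """Rule-based sketch of the talking trivet (temperatures in Fahrenheit).
    pan_f is what the trivet is touching; ambient_f is the air around it.
    Thresholds are from the talk; the structure is illustrative."""
    if pan_f > 454:
        return "Fire!"                      # far hotter than normal cooking
    if ambient_f >= 425 and pan_f <= 80:
        # A cold pan going into a 425 degree oven: a roast will char outside
        # long before its inside reaches the ~167 F it needs to be cooked.
        return "I'll remind you to take that out in 15 minutes."
    if 212 < pan_f <= 454:
        return "Ready to take out?"         # water outgassed, browning begins
    if pan_f <= 80:
        return "Needs rewarming?"           # a cold pot set down on the counter
    return None
```

The point the talk makes survives even in this tiny form: a few comparisons, chosen around the context of an oven, are enough to make a person think the right next thought.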
But I'll say that in the kitchen we did lots and lots of stuff. But one of the ones -- and I expect to
see this one turn into a video; there it is -- is when there's something like
vegetables, the camera sees that, makes the water cold, and colors the water blue. When there's
a pot it will color it red and make it hot, since you're going to wash the pot or boil some
water.
If it's hands, it will turn it purple and be warm. You might not think that's important, but 195
degree water in commercial kitchens does burn people.
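The faucet's behavior is a direct mapping from what the camera recognizes to a water color and temperature. The vegetables/pot/hands-to-blue/red/purple mapping is stated in the talk; the specific temperatures and the fallback case here are illustrative assumptions.

```python
def faucet_setting(detected):
    """Camera-driven faucet sketch: map a recognized object class to a
    (dye color, water temperature in Fahrenheit). The color mapping is from
    the talk; the temperatures are my illustrative guesses."""
    settings = {
        "vegetables": ("blue", 50),     # cold rinse, tinted blue
        "pot":        ("red", 140),     # hot, for washing or filling to boil
        "hands":      ("purple", 100),  # warm -- 195 F water does burn people
    }
    return settings.get(detected, ("clear", 70))  # unknown object: plain water
```

The color doubles as feedback: you can see at a glance that the system recognized what you put under the tap, before you touch the water.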
Another thing we did with it, which was even more fun: Kaiser came to us and said, hey, you
know, 50% of the people that are supposed to wash their hands in the hospital don't, and we
actually pay people to watch people wash their hands. They wanted us to put RFIDs on the
people and have Big Brother tell them if they washed their hands or not. I did that. I didn't like it.
What I did that I liked: we put an electrical solenoid on the door. If you washed your hands for
20 seconds, the solenoid would pop open and the door to the examining room would close, as if
you had an examining room. If you want to fight the door closure, you can.
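The latch logic is about as simple as persuasive design gets: release only after a sufficiently long continuous wash. A sketch -- the 20-second requirement is from the talk, while the event format (start/stop times from the sink's sensor) is my invention:

```python
def door_released(wash_events, required_seconds=20):
    """Return True if any single continuous wash lasted long enough to
    trigger the solenoid. wash_events is a list of (start, stop) times in
    seconds, a hypothetical format for the sink sensor's output."""
    return any(stop - start >= required_seconds for start, stop in wash_events)
```

Several short rinses don't add up; only one continuous 20-second wash trips the latch, which is exactly the behavior being encouraged.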
But, in fact, it's just encouraging good behavior -- persuasive, blah, blah, blah. There's lots of cool
things about that. This also went up to the right ergonomic height of the person, using another
camera, and that weird sink is silicone, so I can throw weird goblets up and see how it's going.
Let's see how it's going.
>>: Salt please.
>> Ted Selker: There I'm trying to make crepes, using this sensor pack in a spoon to do it,
and it's a much less fancy story than any of the ones I was telling you.
But what's interesting about it is that sensors in the kitchen have just a terrible history. My mom
had a thermostatic burner and no one would ever touch that. You always want the one that you
can control.
Why is that, and what do you do about it? And what's fun about this: it just had a zinc and
aluminum interface as a pH sensor here, that will tell you if it's got vinegar in it. It will tell you if
you've put in baking powder instead of baking soda. There's a temperature sensor, just a
resistor.
That will tell you very important things. If you take a look at making candy, making chocolate --
how many of us feel we can do that? Pretty much zero. You can? You'll make chocolate? Okay.
And you know what I'm talking about: every part of the vessel has to not get above 120-something
degrees, or it changes to one of the other crystal structures.
The idea is 13 percent of householders think they can cook right now, and this could take you
through and train you, teach you, instead of being -- anyway. This is another thing that Sean
built. This was a way of making context-aware applications by dragging and dropping things and
attaching a support vector machine or rule-based system to the training data that you had
associated with that sensor, and all that good stuff.
I'm not going to talk about that. Maybe I'll end, since it's 10:30, with the idea that I consider the
world's going to be one where everything acts -- acts as though it's in a social environment. And
maybe I'll just stop it.
But the point is that we are always in a social setting. And this idea that maybe, historically,
the reason we have some of the things we have is because
they're brand new and we can show them to somebody else.
The iPhone actually took that to a level where that is the way they made their product. The product is
designed not to have a consistent interface. It has one of the worst telephone interfaces --
telephone address book interfaces -- I know of. Yet when you turn it on, you can show somebody a
picture instantly. You can show a map instantly. It's about showing somebody something.
This cigarette idea is about that kind of idea. If you haven't smoked it for a while, it kind of
vibrates in your pocket and reminds you to take a break. If you come upon somebody that you met
before, it recognizes that cigarette and sings a little hello song. If it's somebody you haven't, you
touch their cigarette and it kind of gives a light, kind of introduces itself. It defines you inside and
outside a social circumstance. Lets you be a little bit of a self-defining person. Helps set the pace
for timing activities. It's a comforting, calming device. We use everything for this.
I see only one coffee cup -- two coffee cups -- in the room, but that is the Starbucks experience,
right? You go, it's kind of a ritual, and you feel calmer, and it's part of -- usually it doesn't say
Microsoft on the front, but around here there's kind of a strange culture. And I think that
everything we have we use for creating and establishing and furthering our sense of self and our
sense of projecting who we are.
That's all I want to leave you with as kind of my story. [applause] And there's lots of other stuff I
can talk about, but I won't.
>> Mary Czerwinski: Any more questions?
>>: I have a question, since you talked about the iPhone. You mentioned there's an enormous
amount of sensors being packed in this, in the new iPhone [inaudible] and everything. But yet
you've consistently shown the [inaudible] it's a phenomenal device. I see half of the people in the
room have glasses; would it be that hard to put [inaudible] on the glasses? And yet there's no
commercial product. Would you advocate that?
>> Ted Selker: I would love for you to give me a little bit of funding and I'll do it.
>>: How to do it.
>> Ted Selker: How to do it?
>>: How to convince people that make products that it's worthwhile?
>> Ted Selker: I think business cases are an art. And it's a matter of demonstrating -- I mean,
one of the worst problems that I had at IBM -- I was an IBM fellow so that means I was running
around the whole company thinking about the whole strategy and all that fun stuff. And what you
learn -- probably here you learn the same thing -- is that you pitch ideas that are going to be
$100 million in a year or else you don't pitch them.
What does it mean to pitch that? It means that there's two teams at IBM, anyway, called the
marketing team and the planning team. And those teams' responsibilities are sizing product
introductions. So when you start thinking about what it would mean to introduce a product, what
they do is they look for benchmarks.
Those benchmarks don't exist for what you're talking about. So what you have to do is figure out
how to get confidence that they exist. That's why we go into research labs and we build these
prototypes. That's why we take those prototypes and we show them on national TV or whatever
we do, big conferences, everything we can do to get confidence that you have something usable
and useful. If you're Steve Jobs, you have a large group that no one has; you get to make bets. He
bet on the Newton and failed. And he bet on the Apple III and he failed. He bet on [indiscernible]
and failed.
That's not what we talk about. We talk about where he actually got more experience and used
the Newtons as a backdrop to get to amazing, amazing stuff to happen.
And legend and all that good stuff. So with respect to that, all I want to do, if you were to give me
$100,000 to make something for a pair of glasses right now I'd just put a clock on it. That's what
I'd do.
I mean, you start with something that you know that, gosh, who can argue with that, right? Why
would it be bad to have a clock that you could push a button and turn off, but that just always
knows what time it is? That's the kind of -- it's the old-fashioned IBM conservative guy; there's two
sides, right? There's the aspirational and there's the completely concrete and productive.
That IR, the eye sensor -- I would totally incidentally put that in that camera-based thing
with the clock, because it might be that I'd know when to turn it off, pretty much.
And that might be a feature that you can turn off, too. I'm just designing the product here in front
of you, which is what I love to do. So I don't know if that's answering your question. I could go on
all day about this. I've got talks about business development, technology development.
In fact, that's a lot of what I do as a consultant is I work with executives at companies to think
about their strategy for incubation, innovation.
>> Mary Czerwinski: Anything else? All right. Thanks. [applause]