>> Brian LaMacchia: All right. So thank you all for coming out today on such an ah,
should I say, such an interesting weather week, and I appreciate your accommodating the
shifting schedule so that we could get Jean in this morning to lecture, and so I am pleased
to welcome Professor Jean Camp of Indiana University here to MSR to speak as
part of the series. Jean and I have known each other…
>> L. Jean Camp: Let's go with a frightening number.
>> Brian LaMacchia: Yeah, a long time, certainly before I gave the infamous talk on
trusted computing at MIT to Stallman and crowd. And she graduated in 1996 from the
Engineering and Public Policy program at Carnegie Mellon and then was faculty at the Kennedy
School of Government at Harvard for a year, went to Indiana and then in 2010 took the
year off to be, she was selected to be a fellow of the IEEE USA Congressional Fellows
program and she actually spent a year on Capitol Hill doing public policy work in the
office of Representative Etheridge, who is from North Carolina, where you are from,
correct? So she got quite a different perspective than we normally do on the way policy
gets made. And so without any further ado, please welcome Jean Camp.
>> L. Jean Camp: Thank you for coming out. This is my second snowpocalypse, because I
was in DC when they got a foot and a half of snow and all of the flights out were
canceled after four o'clock, and I was on the 3:59. So this is obviously not a crypto talk;
this is a talk about making computing trustworthy: how do you
design systems that engage individuals to make the best risk decisions for themselves? I
work a lot with mental models of risks, and I want to say the mental model of the
designer really matters. You can think about maybe a brain surgeon, not a stupid user, a
brain surgeon who is going to go into surgery and you don't want them to mess with the
security. You don't want them to use any cycles on the security. Think of them as
unique individuals, not as a group that has statistical characteristics, but as individuals.
We can't solve the network problem without solving the individual user problem.
So I am going to start with a discussion of security decisions as risk communication and
talk a little bit about risk perception. I'm explicitly talking about expressed preference.
Sometimes in the industry we work with revealed preference. Revealed preference
means people do exactly what they want to do and they are fully informed, so you just
observe their behaviors and then you know what they want to do. Expressed preference
means sometimes people are not fully informed, and so you have to ask them what it is
that they actually wanted to do. And then I am going to go through two very high level
examples of systems we built and do some elemental examples about risk warnings that
we have used in the industry. And I want to say one more thing. This is about
participants who engage in risk decisions. You've got to start thinking about
communicating to individuals who are in their own unique context, which you can only
partially detect, but you can partially detect it. A human being brings to the risk decision their
own willingness, their own risk posture and their own context, and the mental models of
designers matter. And I'm sure we've all heard that there are only two industries that talk
about the people who use their products as users. Every other industry is focused on customers and
participants.
The fundamental assumption that we make--because classical economics tells us so,
agent-based models work perfectly with it, it is what computers do, and we like
computers--is that people engage in the calculus of risk. But people do not. How many of us have seen
one of the million formalizations of trust where you say, I trust you 7.5 percent?
Individuals do not do that. One of the early experiments we did is we compared two
failures. We had self-similar groups of 40 people. They were testing, as consumers, a life
management website which would remind you of people's birthdays, and then when your
nephew turns 14 it would say here are the top presents for 14-year-olds right now; that
is what they were putatively doing. Then half the group saw a pop-up announcing a change in
privacy policy, and half of the group saw a pop-up exposing the complete data of
a made-up person, John Q. Wilson. They responded much more strongly
to the privacy policy change, even though the exposure showed that the site couldn't keep your information private
even if it wanted to.
So people respond to benevolence. With the data exposure you can say, oh, that was just incompetence; we
didn't know what we were doing. The two are insanely different
from a risk calculus point of view, but from a risk perception point of view and from
traditional risk studies the response makes sense. So one of the things I'm going to talk about is the
dimensions which individuals systematically use to evaluate risk. I want to talk about
risk perspectives from other domains.
These are all physical risks, so they are going to be different in the virtual world because
there is no fear of physical harm. Unless you are genuinely afraid that your computer is
going to burst into flames, you are not worried about physical harm. And we think a lot
about mental models because that is how individuals understand things. I don't know
how many times I have heard somebody in CS talk about some medical problem they
have and they say well, it's like the processor is doing this. You bring the model that you
want. If we can solve the model for the individual, for the self-similar individual, and use
the fact that people are not random but repeat the same behaviors over and over, as
well as trying to solve the problem for the network, we will have complementary
solutions.
Not shockingly enough, despite decades of highly consistent security training that just
teaches individuals exactly what to do in a computer security context, we are still having
problems. How can that be? And one solution is usable security. We want security to be
usable and I have got to say that I am actually chairing a conference on usable security,
so this is mildly hypocritical. Usability is for when you want to do something. I want to
draw the circle. I want to get this piece of software installed. No one wants to do risk
mitigation. How many people say I am going for drive? Oh my God, I get to wear my
seatbelt, yay. So people want to subvert or minimize or ignore security. But you don't
want people to be suspicious of your machine. What we want you to do is really never
enter that credit card number. And security warnings are irritating too. I adore the "is a jar" warning.
How many of you guys remember that one? One of the things you do with usability is
you make that connection between action and consequence. You move the mouse this way; you draw
a line there. You accept this; the software will be installed. But the problem is, what is
the consequence? There may be no consequence at all, because risk is inherently
probabilistic. They may steal your credit card number, but they stole so many that they may
end up not using yours until after it has expired. You may enter your Social Security
number and you may transpose two digits, and then it is basically useless. And that
action-risk-consequence information, which you see people trying to
deliver frequently in warnings, is completely overwhelming.
The other thing is to make security opaque. Security should be built in not built on.
Security should always be there. To the extent that it is disabling, people try to disable it.
So if you are at an airport and you are using Chrome; see I am picking on Chrome here.
This is what you get. You can go back or you can say help me understand. But if you
say help me understand you get yada yada yada. Contact your organization’s help staff
for assistance in adding a new root certificate to your computer. There are two problems
with that. One, I have heard that people actually have computers in their houses; I know,
it's obviously this new thing that they haven't been… But the second thing is a lot of
people just don't have administrative access. This pops up in the airport, what are you
going to do? Not use your wireless? Maybe you are in a different time zone and there is
nobody to answer the phone. This stops action. People can't get things done, so
they hate security. And so my whole idea is that you want to use translucent security. It
is not transparent; it's not opaque.
What it is is context dependent. If you focus on the human, on everything that your
phone or your computer knows about you, you can guess a lot about context. And also, if
you bring in information that we know in the real world but just
don't use online, then you can help people to make an informed decision with a single
narrative. They understand the context and they understand how willing they are to take
risk. So I said something about the nine dimensions of risk and I am going to go through them.
Off-line risks are inherently physical. Car wreck, horrible disease, terrible injuries,
people are not going to be really scared online. Suppose you can make some scary
horrible interface, would you anyway? So there is a classic nine dimensional risk
perception model. The objection to this is in later work they moved to 15 dimensions.
This is Fischhoff and Slovic which some of you have heard of. I know two people have
heard of it.
And how do we use off-line security risk to design security online? So I am going to
introduce the dimensions and then talk about two different experiments that we did. Is a
risk voluntary or involuntary? Do you choose to be exposed to the risk of lung cancer
from smoking or are you just stuck in the city? Voluntary risks are more acceptable.
How immediate is the result of the harm? I run this picture because I don't know why
they are not dead. I mean he is like standing, jaywalking and like where is my shoe. I
am missing my shoe. Whereas with some risks, you have a task that you want to complete, and
any negative result of choosing against risk mitigation will only be visible
later.
So how much do you understand the risk? Are these cool? Suppose genetically modified
crops, how do I deal with that risk? Am I worried about it? Is there some genetic--I
mean obviously no genetic modification is going to happen to you, but do you know that?
What is the knowledge of being exposed? Whereas, how do you avoid this risk; you
don't stick your hand in it. Things that we don't understand are scarier. And the other is
the knowledge of the risk to science. Compare, for example, science's
understanding of alcohol versus science's understanding of pharmaceutical interactions:
the more unknown it is, the more frightening it is. People are not afraid of mixing
gin and vodka, but they are afraid of taking, you know, an antidepressant and a mood
stabilizer. So this next one is not voluntariness; this is controllability. Voluntariness is can I avoid
being exposed to the risk? Controllability is if I am exposed to the risk, can I mitigate it?
So if I am exposed--this is actually--this was the best car wreck picture I could find. You
can't control this. You expose yourself to risk in airline travel, but you can't control that
risk. You expose yourself to automobile risk, you control it. And the more controllable
it is, the more you can make people feel in control, the more they take risk mitigation
actions. Nobody wants to turn off their--okay, it is not really a risk mitigation action, but
pretend that turning off your cell phone in flight was risk mitigating. Nobody wants to
do it because you can't control it anyway.
How new is it? Of course, because of the trace elements in the massive amounts of coal
that are burned in a coal facility, there are actually higher levels of radiation adjacent to a coal
facility. And I can say beautiful Catawba nuclear facility, where I worked out of college;
this is scarier, even though you are exposed to more proven hazardous risk if you live near a
coal plant. And how common is it? Is it something we deal with every day? There was
a study done at a conference of epidemiologists, and only half of them washed their hands.
Now, these are epidemiologists. Because they are just like, flu,
whatever; it is everywhere; I can't do anything. As opposed to this, this is really rare, but
it is scary. You dread one. You dread being exposed to the risk. The other risk, yeah, all
the time. So I had to put that up there.
Chronic and catastrophic: I am not going to dignify that with an explanation, but so many
more people have died in car wrecks. But it is everywhere. It is similar to
common versus dread, except the outcome is not dreadful. So there are other risks. How
severe is the risk going to be? This is another problem that you get with risk
communication in computer security. It is not skydiving, right? You are not going to be--if there is an equipment failure there, it is severe. It is more like this. How many people
don't chop vegetables because they are afraid that their hands are going to get cut off?
So one of the things that we've done is we have taken a set of virtual risks. Now there are
legitimate criticisms of this and we are redoing it with a lot more threats and also some
made-up threats, because we don't know to what extent people just answer, so we added
vampires and 218 fraud and all of these other made-up things. It was a convenience
sample, which means they were self-similar and non-representative, but we did another
study with a representative set of retirees and we had very similar outcomes. The thing
about retirees is they have a lot of money. They are very good targets for fraud. If you
want to steal money, do you want to steal money from an undergraduate with some lovely
college debt, or do you want to steal money from someone who just retired and has $3
million in the bank and has less technological expertise, and is psychologically more
susceptible to fraud because of the normal cognitive changes of aging? So we
have done two of these. So are computing risks scary or not scary? Well, they are not
immediate. They are kind of chronic. They are not dreadful and they are perceived as
being understood by experts. And then on the scary side we have the new and not
understood by the individual. The thing that dominates is severity: the worse you think
the threat is, the more you respond. Voluntariness and controllability matter too,
but people had very different assumptions about whether a given risk was
uncontrollable or not.
What we did is a cluster analysis, which looks at how people's answers were self-similar,
how they were grouped, and instead of severity we broke it into four categories based on
distance, and we found temporal impact hugely dominated. There was nothing else that
came close, and so we want to make risk appear immediate, which means you've got to
make your warnings not only context dependent but timely. Not at the airport when you
are trying to connect to, what is that horrible airport provider, Boingo or something? And
we are never going to make it scary. Severity is just not an issue. This is not scary.
Maybe to Richard Stallman, but to the rest of us, we are hardwired not to be scared of
that. We are hardwired to be scared of some things; a computer is not one of them. We
are never going to scare people.
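The cluster analysis she describes a few lines up can be pictured roughly like this: respondents rate online risks on the classic perception dimensions, the ratings are grouped by similarity, and you then ask which dimension most separates the groups. A minimal sketch in Python, with a made-up ratings matrix and k-means standing in for whatever clustering the study actually used (both are assumptions, not details from the talk):

```python
# Made-up ratings matrix: rows are respondents, columns are mean ratings (1-7)
# on the classic risk-perception dimensions. k-means is an illustrative stand-in.
import numpy as np
from sklearn.cluster import KMeans

DIMENSIONS = ["voluntary", "immediate", "known_to_exposed", "known_to_science",
              "controllable", "new", "common", "chronic", "severe"]

ratings = np.array([
    [2, 1, 3, 5, 4, 6, 5, 5, 2],
    [2, 1, 3, 5, 4, 6, 5, 5, 2],
    [3, 6, 2, 4, 2, 6, 3, 2, 4],
    [3, 7, 2, 4, 2, 6, 3, 2, 4],
])

# Group the respondents by the similarity of their answers.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratings)

# Which dimension differs most between the two groups?
gap = np.abs(ratings[clusters == 0].mean(axis=0) - ratings[clusters == 1].mean(axis=0))
print("most separating dimension:", DIMENSIONS[int(np.argmax(gap))])
```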
I warned you at the beginning. So computing won't be scary, so mitigation has to be
really easy. You have to walk people through this and sometimes mitigation is really
easy. Don't enter your password, yay. But patching your machine is a little bit harder.
Risk information can also be unpleasant when you are doing something in your car and it
goes beep, beep, beep and you can't make it stop. You don't want your computer to have
that experience. So we are arguing and have designed four systems that are timely and
targeted, personalized and also actionable. Actionable is something that computer
scientists forget. People do try, right? So this is Rick Wash; the rest of this is my stuff.
Rick Wash did this amazing study where he went and asked people their mental models
of security and then asked them what they do. And he found that if it's eavesdropping,
this is actually from a ubiquitous computing study, if you are worried about information
leaking, you have to put your screen in the right place. And after that you go bury a live
chicken at midnight facing east, right because it's about the same thing.
If you think they just go after the big fish, you don't have to do anything, because you're
not a big fish. It's like, who is targeting me? I'm not worried about the CIA killing me
right now; I am just not that important. That is the mental model. Under no mental
model did anyone patch. There was no folk model that said I need to patch my machine,
zero. And if people thought that it was like the flu, ubiquitous, they would update their
antivirus sometimes. And the things that we have communicated as a community are: be
careful what website you visit, and don't open attachments, even if it says I love you.
And we have some nice standalone warnings. Edelman did a study of this and found out
that this is actually a warning sign: if this is on a website, it is more likely to contain
malicious code and bad privacy policies, and when I say malicious I mean spyware, not
malware. I got this off of a website that I really like because they offer 20% off list price
of Thomas The Tank Engine. Those of you with children appreciate the importance of
this. But it is totally meaningless, it is a cool lock, and then we provide things. Oh, who
has got a pencil? Quick, compare the fingerprints. How is this supposed--and they are
often quite ill-timed, right? So how do we do effective support of risk mitigation? I have
three videos on my website that you can go to, and I won't force you to watch them.
Basically, in the phishing video we have this guy showing up pretending to be an IRS
agent, and they ask you for all of your information, and then at the end of it there are two
endings. In one, the guy gives them the information, and in two, he calls his bank.
In the end the message is: if you think a website is malicious, call your
bank. And we used this for older people, and they were all able to describe the risk and
they knew what to do. It made it controllable. If they are exposed to the risk, they knew
what to do. So they found it informative and highly non-technical. You will laugh at
the video and think that it is insane about not signing up as an admin user because we are
like, well, you can live in like this apartment or this apartment, and the thing about it is at
the end of it people take away their mental models and they say oh, this is more risky. I
can do more things, but it is more risky.
So it is actionable because it tells them what to do, so one of the things that we have
learned in communicating is that if you want to communicate to younger people you have
to say you will experience this risk. This will happen to you. You will die in a car
accident if your friend drives drunk. Whereas, with older people you have to say, it won't
happen to you. You can avoid this. This is how you avoid it. Context and audience both
matter. And it's got to be limited. It's got to be timely so we all know stay away from
this barrel. Now compare that to something like this. This website has identified itself
correctly. But it can be run by a third party. That pops up every time you go to an
unencrypted page. If we did physical risk communication like that it would be like
barrel, barrel, barrel. You would drive through the construction zone and your car would be going
crazy. Which third-party cookies? What do you have to enable--most people don't even
know how to enable or disable scripts. Click here if you want to temporarily enable
cookies. But you shouldn't even say cookies, because they're like chocolate chip,
snickerdoodle. You've got to think of them as really busy brain surgeons, so I have three
models of participants. I think of elderly people who have some normal cognitive decline
but are very smart. I think of brain surgeons who are very busy, and I think of college
students, who, as we all know, are what they are.
Clear and actionable: stay away. This is a flame, hmm. I was so pleased when
this popped up; cool, I am still doing a screen grab. It wiped out everything--it took over
control of my whole screen and said okay, you can install our software or you can install
our software. Most people are not going to know how to stop a process. But we can see
that. So here we have the IEEE, which is a great organization of which you all should be
members, incidentally. This is VeriSign. Pretty much, you accept it or you
don't buy anything on the web. Or this is on four websites. It is the Elbonian Secret
Police. You can accept it or not. These are all things that this pop-up is telling
you, but it is not telling anyone in any actionable way; the computer knows what the risk
is, but we are not going to tell anyone else because they are just users. They are not
participating with us. We are not talking to them. This website is a day old. It is more
likely than others to be dangerous. Don't go, or don't enter information and don't
download anything. It is only a day old. How many fake antiviruses? There was this
huge antivirus installation on the Hill when I was there because there was this banner that
popped up on the Washington Post. Do you guys remember this? And the first thing you
do if you work in DC is you open the Washington Post to see what is there and it said
your computer is infected. Download this now. This was incredibly successful. It also
had a certificate that was one day old. So how do we tell people what to do? You can
leave. You can stay, good luck. I mean, but the thing is when this thing pops up, you
know why this page was blocked. And here is some great risk language. Some pages
intentionally distribute harmful software, but many are compromised without the
knowledge or permission of their owners. Now how helpful is that? It might not be their
fault.
>>: That is probably their lawyers.
>> L. Jean Camp: Yes, well. I think that is a very good point, because almost
all risk communication has not been engaged in assisting individuals, in participating
with individuals to mitigate risk. It has been engaged in shifting liability to the user. But
we can do better than that. We don't have to think of them as, you know, liability-eating
users.
>>: I 100% agree that that is the goal and that is what a lot of stuff does but how can it
be otherwise? Because in a lot of [inaudible] situations we can give--if we know--there
are three cases. Something is really bad with no wiggle room, something is clearly good,
or it is in between. If it is bad we just pull the plug, pull the connection. If we
are really so sure that we are willing to go all the way to the Supreme Court to say this is
bad and I blocked it for good reason. There is no chance that I will be found wrong. If it
is good I just leave you alone. Everything we have to message to a user is really a shade
of gray. We don't know…
>> L. Jean Camp: But we know what shade of gray it is. You know that it might be
phishing. You can say that…
>>: A lot of the time we are wrong.
>> L. Jean Camp: Or, you can say don't accept downloads.
>>: Right.
>> L. Jean Camp: Or…
>>: And then Google creates [inaudible].com, which is not Google.com, when they create
a new browser, and the first day it is up Microsoft tells everyone you shouldn't download
anything from Chrome.com. You don't think our lawyers should be concerned that some
companies might…
>> L. Jean Camp: They did not create it the day before they started distributing it. They
put a considerable amount of design thought and their certificate was not a day-old.
>>: But false positives happen.
>> L. Jean Camp: They do and that is why you can't do what Chrome did which is good
luck connecting to the airport wireless. You've got to tell people here is what risk we
think you are taking and here is what you can do to mitigate it. And, oh, by the way we
told you to disable plug-ins. Your plug-ins are still on. You are accepting risk. So go for
it.
>>: Recently with our SmartScreen application reputation, you are [inaudible] reputation,
but then in IE9? I think it is actually a huge success story for warnings in that it used to be
a battle that you get more [inaudible] for almost any site [inaudible] serve or any
download where you would say any download might be harmful…
>> L. Jean Camp: Yeah, that's the barrel.
>>: The happier, newer story is that we actually built reputation on downloads,
by users downloading all across the internet. And we were able to classify over 95% of
these things as [inaudible] good, happy bucket…
>> L. Jean Camp: And the bad bucket.
>>: And the unhappy bucket, the stuff that we had never seen before and we would have
to ask the user what do we do. So the great thing here is we've taken 95% of the
decisions away from users. We can just do the right thing. There is always going to be
that stuff in the middle where it's brand-new and it might be legit or might not be legit…
>> L. Jean Camp: Let me hear David's question, and then I'm going to talk about how to do that
5% when I talk about the two systems.
>>: I just want to comment that the early [inaudible] McAfee, they actually did expose
this.
>> L. Jean Camp: But they exposed it in a difficult to comprehend way.
>>: Oh, yeah. I…
>> L. Jean Camp: Like, who cares what phishing is? First of all, it is a stupid word. I
mean, it is just: we are going to tell you all about this risk and we are going to name it
phishing. Well, they thought about naming cancer throat wobbler mangrove, but they
decided that it wasn't optimal. So I want to talk to you about that 5%, which is the hard
5% because it is a race. You are racing to figure out if it is good or bad and as long as
they can get ahead of you, there is a nice window where they can commit crimes. So
what is it? Seven days, Clayton and Moore said seven days on average before major
phishing sites are taken down. Malware sites are a little slower because a phishing site
has a target. Bank of America wants this taken down; it gets taken down. Whereas
malware, you know, that's just everybody and some truly horrific sites may be up for
months.
Okay. This is a nice, I don't--I like Google, you know, I'm not like anti-Google. Here we
go again. Malware is malicious software that may harm your computer or otherwise
operate without your consent. So every second-grader in every middle-class school in
America has a computer and you know that you can look at the computer and observe
their behaviors on the client without like necessarily reporting to the mothership, and
figure out who that is. Or you can just say: look, don't visit; or, how can I visit safely; or, I am
risk seeking, I like to drive drunk over to my girlfriend's house and combine
pharmaceuticals and then have unsafe sex. You can do it. Go for it, but know what you are
doing.
This is nice because they have a real domain name. Most people can't tell these apart. It
is obvious to you; most people can't tell them apart at all. Especially since there are
mildly obnoxious people that have started using that like lock icon. Whereas, here can
you even see these pictures, because I am having trouble? Which one of these is the
more established merchant? At which one of them are you at risk for overpaying
significantly, and it which one are you at risk at getting bogus goods? You can see the
risk. Again, which ones are you going to put your automatic deposit in? And so we want
to give them one story. So I want to talk about two examples and the first one addresses
yours. In the United States we know where the banks are. We know what their URLs are
and we know which ones are banks. And we know because we can ask the
FDIC, which has an available database of bank URLs that is ever decreasing in size
because of bank concentration, and the National Association of Credit
Unions. If you are going to a U.S. bank, we know it is a bank. Why can't we tell people?
>>: But your text on the side here implies in addition that I know what the user is about
to do.
>> L. Jean Camp: Yes. No. But that doesn't continue.
>>: Which means that I can either tell the future or read minds, right?
>> L. Jean Camp: No, what you are doing is waiting for them to enter their
password, and you keep an oblivious copy of their passwords for their banks, because
we know what the banks are. Then, when they enter the banking password somewhere else, you can say:
before we send this password, I just want you to know that this is not a bank. And maybe
you don't care. Maybe you have one password and use it for everything on the internet
because it is your internet password and there are definitely people who do that, because
they say, this is my password to the internet. This is how I make the internet work. You
probably work in a more [inaudible] environment than I do. I am going right out there on
that limb, but I have definitely heard a lot of people say I couldn't do that--freshmen--the
internet wasn't working, or they are at the other desk, 24 years old at Capitol Hill and
they would say, the internet is down. Jean can you fix this? So you have an intranet
password. And if you want to do that, you can do that. You can take the stupid risk,
because it is yours. It is your choice to make, or you can say this is a bank. You know,
you might have other stuff going on; maybe you want to kill some background processes
that, you know, you got at the other website that was unmentionable. Would you like to
do this? These are empowering messages that you, that the computer knows using realworld external information what you want to do and your own history information and
that--so one of the things that we built. Oh, do I have an unacceptable URL? [inaudible]
doesn't know. Oh, you can't see my happy face. We actually put a lot of work into this
because we have a lot of international students, and one of our best ones you hired, so good
call on that. We were trying to find colors that signify good and bad and we still ended
up using the colors. But it turns out that smiling is universally good and puking is
universally bad. So there are some universal attributes. And we said well, this is a bank.
This is not a bank. We tested this on undergrads and some of them kept using it at the
end of the test, but of course it was graduate student code and it crashed hugely shortly
afterwards.
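The mechanism being described can be pictured roughly as follows: the client learns from an external list (FDIC, credit unions) which domains are banks, keeps only an "oblivious" salted hash of the password typed there, and interrupts before that same password is sent to anything that is not a bank. A minimal Python sketch, assuming a hard-coded bank list and a hook that sees form submissions, neither of which is specified in the talk:

```python
# A minimal sketch (not the deployed system) of the "oblivious copy" idea:
# remember a salted hash of passwords typed at known bank sites, and warn before
# the same password goes to a site that is not on the bank list.
import hashlib
import os
from typing import Optional

KNOWN_BANK_DOMAINS = {"bankofamerica.com", "chase.com"}  # illustrative, assumed list

_salt = os.urandom(16)          # per-install salt, so stored digests are not portable
_bank_password_digests = set()  # oblivious copies: digests only, never plaintext


def _digest(password: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), _salt, 100_000)


def observe_login(domain: str, password: str) -> Optional[str]:
    """Call just before a password field is submitted; return a warning or None."""
    d = _digest(password)
    if domain in KNOWN_BANK_DOMAINS:
        _bank_password_digests.add(d)   # learn the bank password, obliviously
        return None
    if d in _bank_password_digests:
        # Same secret, non-bank destination: let the person decide, but tell them.
        return (f"Before we send this password: {domain} is not a bank. "
                "If you meant to reach your bank, stop and call your bank.")
    return None


observe_login("bankofamerica.com", "hunter2")                     # learned silently
print(observe_login("bank0famerica-login.example", "hunter2"))    # triggers the warning
```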
So this just requires variables that are unique. My set of friends that I don't tell anyone
and our shared history, so if you just have a group of 10 people, what we did is we wired
up a dorm and we did an entire semester's worth of click streams from the dorm and you
guys could really do this, because you have lots of click streams and you don't have an
IRB. Oh, I just got all excited. And we found that with just 10 people in your group, just
10 self-similar people, over 99% of your clicks were previously covered: they went to sites
someone had visited before that had been in existence for more than a week. If you take that
seven-day threshold and expand it a little bit to 10 days, you still get 99% of your clicks. Which means if you only
have warnings on 1% of clicks and risk mitigation on 1% of clicks, you can put up with that. Here is your
uncertainty, and if you do it just by yourself, just your own personal self, you will end up
with about 95%. We also did it in a way that was very privacy maximizing and there is
an IBM Systems Journal article about that part.
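A minimal sketch of that history-based check: a click is familiar only if someone in your small group has visited the host before and the site has existed longer than roughly a week. The group history store and the site-age lookup below are assumptions for illustration; the real system's details differ (see the IBM Systems Journal article mentioned above).

```python
# A minimal sketch of the group-history familiarity check: warn only on the
# ~1% of clicks to hosts nobody in the group has seen, or that are brand new.
from datetime import datetime, timedelta
from typing import Optional

AGE_THRESHOLD = timedelta(days=7)

group_history = {}   # host -> first time anyone in the group visited it
site_created = {}    # host -> assumed creation date, from an external lookup


def record_click(host: str, now: datetime) -> Optional[str]:
    """Record a click; return a warning string for unfamiliar hosts, else None."""
    seen_before = host in group_history
    old_enough = host in site_created and (now - site_created[host]) > AGE_THRESHOLD
    group_history.setdefault(host, now)
    if seen_before and old_enough:
        return None
    return (f"Nobody in your group has been to {host} before, or it is brand new. "
            "Don't enter information or download anything unless you are sure.")
```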
You have never been here before. Think about that. I mean that is phishing. You have
never been here before. And that is all you need to know. And nobody else has been
here before, and we use that a lot in risk communication. Oh, these are so crappy and I
wanted to make good pictures. You can't see it, but this is Mogadishu. It is awesome;
nobody is here right now. Nobody is in any of these places. You have never been there.
History is powerful. You can use it to align with people's mental models and one of the
cool things about working with elders, they don't lie on forms. None of them reported
lying on forms. They think of it as--and we have worked with more than 1000 of them, with
different surveys--none of them have reported lying on forms. So you know how old
they are when they fill out their registration card, and nobody says they are 70. Every
13-year-old says that they are 18; we have found no incidents of elders doing that.
>>: I'm curious about the backend of this. So you say that you are able to detect, this is
not a bank, but looks like a bank to the user…
>> L. Jean Camp: No. We look at is this a bank, yes or no. Are you entering your
password into something that is not a bank? Hey, think about it.
>>: How do you know somebody is entering their password?
>> L. Jean Camp: Because you know what their bank password is because you know
when they went to a bank.
>>: So once they have entered a string that is…
>> L. Jean Camp: In a bank, because we know who the banks are, because we ask. We took external
physical context information and integrated that into our knowledge set. Otherwise we
just know if you or your friends have visited it before.
>>: So the first time I visit a site that you identify as a bank, you effectively save a hash
or something off a text input of the login screen or something?
>> L. Jean Camp: Yes, just as you are connected.
>>: And then if I go someplace and I enter that same text…
>> L. Jean Camp: Something that matches, yes.
>>: Then you say, well, wait is he entering something at a site that was not…
>> L. Jean Camp: Yeah.
>>: I see.
>> L. Jean Camp: And then we say…
>>: I don't see how this is going to work because then the site can just use JavaScript to
send each character out, so by the time you type your whole bank password, it has been
detected and the site has already got it.
>> L. Jean Camp: So how do we want to--yeah, but we can hack around that too. We
can keep it locally. We can…
>>: No.
>> L. Jean Camp: Well, could we do that…
>>: There are some special engineering things that you could do…
>> L. Jean Camp: But the question is what is your overhead?
>>: [inaudible] during this login you need to enter a star between each letter.
>> L. Jean Camp: That is going to be harder to get people to do. That is just physically
difficult. I mean, I am not--the difference between this solution and the solutions you're
looking for in this group is the reason I wanted to come here; that, and because I
knew there was going to be a blizzard and I really like natural disasters. I have been in
every natural disaster except a tsunami, and I have a theory that there are places that
should pay me not to go there.
>>: How was the motel in Seaside by the way? [laughter]. I'm sorry.
>> L. Jean Camp: Yeah, it was beautiful.
>>: Sorry.
>> L. Jean Camp: Yeah, the one across the street.
>>: [inaudible] air travel [laughter].
>> L. Jean Camp: They closed the airport right after we landed, so we were like last.
>>: So Jean another example, if I'm trying to social engineer…
>> L. Jean Camp: Let me answer…
>>: Keyboard. We just upgraded our security features. You will now be typing your
password using your on-screen keyboard so those key loggers can't capture your
password.
>> L. Jean Camp: So you can come up with a lot of examples, and I basically have three
answers for that. One, you are looking for something that is perfectly cryptographically
secure, right? We are changing the race conditions. They have to come up with one new
social engineering mechanism and then we have to detect it and then it is detected across
all of the sites. Right now it is a game that you cannot win no matter how simple it is
because they have a time period when they are uncertain. And this way you know that it
is not the bank. You know that is not where you work. And yes, they can do that, but the
more social engineering they have to do, the more they have to make it unfamiliar, and the
more people are going to be hesitant, because it works because it is familiar. And if you
can just get to people, like we got to the elders: we have
communicated two things as a community, one, don't open attachments, or usually it's don't
download attachments from people that you don't know, and two, don't visit weird
websites. But we all do that. So if we can communicate, the only thing you have to
communicate to make this work is: if you are not certain, call your bank. All of the things
that you are talking about are things that increase newness. People don't like change, and
here is what you do about it. Yes, there will always be people who will be subject to
social engineering. But we don't have to stop it entirely, right? All we have
to do is make it too expensive to be really worthwhile. If all we did was take all of the
amateurs out of online crime that would be a huge step forward. I mean just think about
the net without amateur criminals on it. It would be a different game. I'm not saying--but
we are changing the rules, so we are totally going to win this game. We are going to win.
>>: [inaudible] socially harmful. I might argue they are socially beneficial. They raise
everyone's threat awareness; they tell us that things are coming before the pros
necessarily adopt a new technique [inaudible]…
>> L. Jean Camp: Well, I think that I am disagreeing with you and I have never read a
paper, if you want to write that paper and convince me, you can, but I have never read it
and I think you are wrong, so we can just agree to disagree about that.
>>: Oh, that's fine. I was just about to preface this question by saying we are all in
violent agreement with you, but thanks for [inaudible] [laughter] but I think…
>>: [inaudible] [multiple speakers].
>>: Flip over a few chairs.
>>: I think we are all in agreement. I would really like to talk about this, because this is what
I think most of us in this room talk to the best people in Microsoft about, in terms of improving
user interface warnings.
>> L. Jean Camp: Do you really have somebody here whose background is in usability
or the psychology of risk?
>>: Everybody here [inaudible].
>> L. Jean Camp: Good.
>>: My background is not in psychology and risk [inaudible] but anyway okay. The
hard thing I think that you are proposing though is that backend stuff of detection
[inaudible].
>> L. Jean Camp: Oh yeah, it's hard. And I am trying to say, here is an example of how
we looked at a person and said: you are not a user. You are a
participant with your own unique behaviors, your own unique community, and we can use
that. We don't really use that now. I mean, you've got to solve the
network problem; I'm not saying don't solve the problem of security on the network. I am
saying, first of all, stand on the shoulders of giants and not on their flipping toenails.
There are 40 years of risk communication research, and pretty much every risk warning I've ever
seen violates it.
>>: We are in favor of risk communication. We actually worked hard to do this here at
Microsoft. But what is often hard is that there is all this knowledge and context that
the systems need to have in order to present a good communication to the user, and part of
that is, like you're saying, this is the user's bank, or this is not the user's bank but they
think it is.
>> L. Jean Camp: And some of it is easy and some of it is hard and you're not taking
advantage of the easy part.
>>: I don't agree with that.
>>: So I think something that might be helpful is to think about what is the [inaudible]
list of things that you need for context? You have a few of them up here. That is sort of
[inaudible]. But another thing might be take a note from your phone and walk into a
physical branch of the bank and if you had that list, then we could go through the
company and knock it down to like well, Windows Phone needs to have this. Windows 8
needs to have this.
>> L. Jean Camp: But we do know that. What I find frustrating and my point of
disagreement with you is that we can never know that for the whole network and if we
did know that for the whole network, people would subvert it because it would be hugely
privacy violating, but you can know that on the client. If you think of a personal area
network as consisting of, and this is not work that I am presenting here because it is very
inchoate; it is my phone. It is my laptop. It is my computer at work. And then if I start
deviating from my norm, it is detectable. But if you are just looking at the network and
saying we are trying--I did not mean to be dismissive. If you are solving the tremendous
horrific challenge of trying to identify these things on the network and you are doing this
great job and you have 95% of them solved. But the problem that, you know, I have this
incentive to protect my network. I want to protect my network. Help me.
>>: So I am just saying that having a clear list and saying these are all [inaudible], and
put the privacy thing in a box for a second, which is very important.
>> L. Jean Camp: We don't put privacy in a box.
>>: But we don't even know [inaudible] the list because [inaudible].
>> L. Jean Camp: I think that is--I am here for I would say three reasons. One is to say
we--that is an active area of research. That is my active area of research. I am trying to
figure out not only how you tell people what their risks are and how you engage with
them and this is an example of my, one of my first efforts in that. They said okay, what
do you know, and then I am working with--do you guys know Steve Myers and
[inaudible]? And what that project is, is that your phone asks you for different amounts of
authentication based on your behavior. So if I pulled out my phone now and tried to call my
sister, whom I call every day, it would be like, oh, you are calling your sister; I don't care
where you are. But now I know that it is probably really you because… Or if somebody
picked up my phone here and tried to call the Caribbean, then it would say, you really
have to authenticate. You can't just make this incredibly expensive call from Seattle
where you never are.
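A minimal sketch of that behavior-based authentication idea: routine actions need no extra authentication, while out-of-pattern or expensive ones demand more. The scoring rules, thresholds, and location handling below are made up for illustration; they are not the actual project's design.

```python
# A minimal sketch of adaptive authentication driven by the phone's own history.
from collections import Counter

call_counts = Counter()                    # how often each number has been called
usual_locations = {"Bloomington, IN"}      # places this phone is normally used


def required_auth(number: str, location: str, expensive: bool) -> str:
    """Return 'none', 'pin', or 'strong' depending on how unusual the action is."""
    if call_counts[number] >= 3 and not expensive:
        return "none"                      # calling your sister: no friction,
                                           # regardless of where you are
    score = 1 if call_counts[number] < 3 else 0
    if location not in usual_locations:
        score += 1                         # phone is somewhere it never is
    if expensive:
        score += 1                         # international or premium call
    return "pin" if score <= 1 else "strong"


call_counts["sister"] = 200
print(required_auth("sister", "Bloomington, IN", expensive=False))      # -> none
print(required_auth("+1-345-555-0100", "Seattle, WA", expensive=True))  # -> strong
```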
>> L. Jean Camp: I mean, the credit card companies, they are the masters of this. Visa, I had Visa call
me on the phone one time when I was in the store and they said, we need to confirm that
this is you because you are shopping on a Tuesday. And I didn't know that I didn't shop
on Tuesdays, but they did. And when I went to the eye doctor and tried to buy expensive
glasses, they were like, whoa. And there is even more information on a
machine. That's what I meant about solving the problem of
the individual in addition to the heterogeneous problem of the network. And I am trying to
say: we know so much about risk and heuristics and biases, and almost none
of it gets into products.
I mean, Chrome came out, what, 10 minutes ago? Almost every one of their warnings is
just a beautiful illustration of how to violate every single thing we know about risk
communication. But computing isn't unique in that, right? There is this classic article
about the stages of risk communication. The first one is just tell people the risk. The
second one is make sure we get the numbers right. And it goes all the way down to
the seventh one, which is what I am trying to say is not easy, but we should be doing it: engage
the individuals as full participants in the risk communication. If you want to see a truly
epic fail in risk communication, you should see some of the briefings that people gave
communities about nuclear power plants. They brought the experts in and they explained
to them everything that can go wrong and why it wouldn't go wrong. And people left
going oh, everything can go wrong, right? I mean this risk communication they did in
fighting for nuclear plants is part of why this really exploded in the ‘70s because it was
such an epic fail. Whereas, the same communities would be like oh, coal plant. Great.
We need power. Right? They are clean. They are not like nuclear energy which is
dangerous.
So there is the big risk analysis conference. There is a huge amount on this about how to
do it and we mostly are not using it. So then, like I said, we talked about this; those were
previous ones, these are recent. We used SiteAdvisor to say this is a bad site, and
we looked at why it had been reported and we said: this is a bad site. Do you really want
to go? Yes. Okay. Well, do this as your risk mitigation option. Go. You know. You go
there.
And then we tested it with elders and it was kind of funny, because they were like:
toolbar. Kill the toolbar. It is so tiny I can't read it. The undergraduates liked the toolbar,
so this is about context, and this is what I talked about. Elders: we have never had an
elder indicate they would ever lie on a form on the internet. Yeah, it is a different…
>>: [inaudible] on that is that it would be a good [inaudible].
>> L. Jean Camp: Yeah, we have not engaged in that component of risk mitigation. And
even boomers, boomers are mostly, they consider it--you know when I talked about
competence versus benevolence? They don't want to, they--and, you know, people
consider computers humans. I'm sure you have all seen that, that if you have to evaluate
a computer tutor on the same computer that the program was run versus a different
computer, people will give them higher evaluations on the computer that it was run on,
because they don't want to hurt the computer's feelings. That would be mean.
>>: So with the elders, do you know that they're not lying or is it just that they are
reporting that they are not lying.
>> L. Jean Camp: We have both, we--okay, so they report that they are not lying, but
consider the way that they report they are not lying. There is this thing called mini university that
brings older people in, and this is why I partnered with a gerontologist; if I have one
message about how to go forward, it is: find somebody who has been doing this for 20 years
and partner with them. So one of the things I said is, well, you know, you lie on forms on
the internet. And this elder woman stood up and said “I am not a liar. I cannot believe
you would impugn my integrity.” And I thought I am really glad that they don't have
metal forks by the end of this discussion, because I would not be here before you. And
not only do they say they don't lie, in the focus groups that we've had, so what we started
out with is we had a series of focus groups, about 8 to 10 people; we ended up with 80
focus groups so that was very open. Then we did a preliminary survey. We did a 1000
people survey, but in every focus group we had this very consistent finding that they
don't lie, and I think that I am more ready to reject the hypothesis that they are lying,
because they would all have to consistently lie about not lying and they would have to
have lied in the initial class that I gave with them in [inaudible] and all of the focus group
guys would've had to have lied.
>>: Yeah. Just make them…
>> L. Jean Camp: No. It is a good point.
>>: [inaudible] covering attitudes about lying on forms rather than
actual lying, so there may be a difference between those.
>> L. Jean Camp: I think it would be extremely hard for you to make a case for the
hypothesis that they lie given our three years of experience working with them. I mean, it
is possible, but really I would say it is highly unlikely. So anyway, they hated the toolbar,
so we built this cool little cue that I adore. It has little arrows so that you can evaluate it
yourself, and then it has two extreme points: it glows bright green if you are at a
known bank, and it gets really red and flashes if you are about to go to a malware
site. I will say that we did a very small group examination of this. We had five people
because they are hard to build and so on. One of them tried to keep it. It was like, can I
buy this? No, you can't buy this. It is not going to work next week when we stop having
the graduate students sitting on the server. That shows you the up and down.
>>: [inaudible] familiar [inaudible].
>> L. Jean Camp: Have you seen the bunnies? Yeah, there are ambient bunnies, and there
is the seal. The seal is fuzzy and you can use it for--yeah, we have a lot of…
>>: I mean you can buy…
>> L. Jean Camp: We have a lot of cool things in our lab. You know, we don't have an
active surface. Now that would be really useful. Hmm. And the other thing, and this we
are just starting this; I can send you the paper on this about risk at work. When you look
at the insider threat and you look at common insider threat training, a lot of it goes like
this. You see that guy? He is probably, he might be our enemy. I want you to watch
him. You see that guy? I mean a lot of it is really spy on the other employees. All of it
is organizationally flawed, and it treats the predominant insider threat as the malicious
insider threat. Whereas, how did Operation Buckshot Yankee happen? I mean, were those
malicious insiders? No, they were guys in the Pentagon parking lot who said, oh look,
somebody dropped their USB drive; I will just take it in and plug it into my computer and see
whose it is. These were not people who had not had training. These are people who had
so much training that they are willing to go out and face death. I mean, I am not big on
facing discomfort, and they still did this. It was a huge issue and it was a massively
successful attack. Why? Because they didn't think about what risk they were taking.
They just thought about their task. And so we have this mechanism where we want to
inform employees: what is the organization's risk posture?
We looked at a way of aligning incentives and identifying changes in risk behavior. Now
this is where we used what I talked about looking at how people suddenly change. So if
you spend six months or a year as a risk averse employee, and then all of a sudden you
are just accessing everything, something has changed. Either you have changed or your
machine’s ownership has changed. You can't really solve that on the network, but you
can solve it on the machine and encourage users and enable them to get the job done. So
we are doing this with this risk budget, and the thing is, this is not a security metrics
proposal. We are only trying to do order-of-magnitude risk. We have the super-low risks,
about which we pretty much don't communicate anything, or only communicate probabilistically to try
to get them to change their behavior. The concern is the large risks, because right now you just get
access. WikiLeaks, the guy who brought down Barings Bank, the people at Johns
Hopkins that stole the information of 10,000 people in three months; all of them would've
been detected if you just had a little counter in there that checked whether this is normal.
also have this peer comparison, right, because sometimes it becomes normal. And if it
becomes normal for everybody at once, okay, it is the Friday after Thanksgiving and all
of a sudden all of your call operators are looking at 5000 records instead of 12. You don't
stop an entire class of employee because the odds of the whole class of employees
suddenly becoming malicious--did you see The Onion thing about the snowman march on
Washington to prevent global warming? It is about that likely. So we just
say, are you really doing something different, orders of magnitude different? And that
is where you get the real insider damage. Are you kind of systematically being risk
seeking, or are you being really risk seeking? The individual knows their job; the
person can generally tell, usually tell, whether or not they are e-mailing something against
policy to the CEO. I mean, there are obviously social engineering attacks, but they are
very hard to do effectively at a massively parallel level, so you change the economics of
the attack.
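A minimal sketch of the order-of-magnitude comparison described here: compare today's risky activity against the person's own recent baseline and against their peers, and only escalate when both are roughly an order of magnitude out of line. The numbers and the break-the-glass labels are illustrative assumptions, not the actual proposal.

```python
# A minimal sketch: escalate only when today's activity is ~10x the person's own
# baseline AND ~10x their peers' activity for the same day.
from statistics import median

ORDER_OF_MAGNITUDE = 10.0


def flag(today, own_baseline, peer_today):
    """Return 'ok' or 'break-glass' for one employee-day of, say, records accessed."""
    own = median(own_baseline) or 1.0
    peers = median(peer_today) or 1.0
    if today / own < ORDER_OF_MAGNITUDE:
        return "ok"            # within normal variation for this person
    if today / peers < ORDER_OF_MAGNITUDE:
        return "ok"            # everyone shifted at once, e.g. the Friday after
                               # Thanksgiving; don't stop the whole class
    return "break-glass"       # allowed, but stopped and/or audited


baseline = [10, 12, 14, 11, 13]                      # a call operator's normal day
print(flag(5000, baseline, [4800, 5200, 5100]))      # ok: the whole class shifted
print(flag(5000, baseline, [12, 11, 13]))            # break-glass: the insider case
```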
Or if they are, you know, e-mailing this totally cool file to their friend because it is
hysterically funny. And the point is you can break the glass, but if you break the glass it
gets audited. And if you are really at risk of [inaudible], you can say you can break the
glass but we will stop your action and then audit it to make sure that it is okay. So you
know when somebody has done something; it is detectable. So if you call my bank,
[inaudible] Bank, instead of saying please hold, da doo da, they will say: we
have been the victim of a widespread phishing attack. Please stay on the line so that we can
determine if you have been phished. We will pay for this… But you can detect at the
company level.
And the organization can set these really broad risk goals and then change them and at
least have some idea about the level of risk that they are experiencing. We do very
simple budgets. We have them expire; in one of the early works on this with risk tokens,
they never expire, so you can work someplace for six months and you are a risk token
billionaire. The punishments are really organizational; look, Microsoft is not
going to send you to Afghanistan if you screw this up. Oh, and incidentally, if you look
at continuing violations of the USB policy in the military--I was the military legislative
assistant for the district that had Fort Bragg, so I learned a lot; I even learned which songs
belong to which branch. They are in Afghanistan, right? What are you
going to do to them? You sent them to Afghanistan. I mean, yeah, they are going to
violate your network policy. You have to come up with another way to deal with it. That
is an area where this would be wildly inappropriate. You are being shot. Would you like
to accept this risk budget? So that is an extreme context, but you need to put it in
different contexts. Angela Sasse says you do not want to make your currency traders into
your auditing clerks. So you have to be careful about it. I know I am running out of
time. I probably ran out of time 20 minutes ago. Sorry.
We did a really simple example of this: one version said this is considered risky, and one
said this costs you, I don't know, 18 points, which was like 20 cents, or 18 cents, in our
experiment. And we just took self-similar subjects and did the experiment twice. Yes, it
was statistically significant, and yes, I am using a chart to show statistical significance, so
shoot me; I know that that is wrong. This is how many risk-seeking actions were taken,
even though the information was only slightly quantified, and we had 20 subjects in each,
but it was still quite significant statistically. These are the per-subject risk budgets.
Look you can easily detect who is risk seeking and who is risk averse. It was almost like
we flipped it, with this very simple change. One of the questions is how much time does
it take? The regulatory friction was about 4.3 in this and we made them look at every
decision, so you don't want to implement something that is used by your employees and it
takes 20% of their time. They will disable it.
So security behaviors embed trust and risk. That is not a new finding. Translucent
security is not usable security. It is not default security and I will just go through the next
few. So this is what we get in the physical domain. This is what we get in computer
science. We can do better. And that is the equivalent of smoking kills. People try; they
need help. But sometimes this--I mean think about it. This is such a user cartoon. Oh
my God, the users; they are so stupid. I don't think I need a question slide. [laughter].
>> Brian LaMacchia: Let's thank the speaker.
[applause].
>>: A lot of the earlier part of the talk was about…
>> L. Jean Camp: Epic fails?
>>: Yes. Can you, is there anything we can be optimistic about, examples of things from
the industry that would be good as far as risk communications?
>> L. Jean Camp: Examples of really good risk communications?
>>: [inaudible] who has done well? That would be a good start.
>> L. Jean Camp: Can I get back to you on that?
>>: Yeah, sure.
>> L. Jean Camp: I can tell you that almost every industry has gone through this. The
pharmaceuticals, power, cars, every health domain, like the early warnings, everybody
knows, well, I don't know. We know. Don't drink if you are pregnant. So they started
out, they did exactly the same thing. This is fetal alcohol syndrome; it works like this.
Drink only moderate amounts, right? But the people who really drink, you know, they
are like, I cut down to a pint a day; I never drink a fifth every day anymore. So then they
got to the really simple: don't drink when you are pregnant, even though, I mean, it was an
overstatement, right? Have a glass of wine. What the hell, you're pregnant. You deserve
it. But it works because the people who are extreme get the message and I know I have
been harsh to the industry on this, but in general, every other industry has done this, so it
is not like computing is unique.
>>: So ever the skeptic, wouldn't that argument apply, tell you that the just say no
campaign should have been a smashing success as drug policy? It is exactly the message
you just gave. The problem…
>> L. Jean Camp: I will say to you that it has been successful in a lot of ways. What was
not successful was this is your brain on drugs; this is your brain on drugs with a side of
bacon. Because, and what it has done is massive drug shifting. We have at IU which
was one year voted best party school, you know what? I won't tell you any of the jokes
that we tell about the graduates; that would be… Especially since we have such great
graduates. You should totally make them all interns. They have really changed their
drug use patterns. In the '80s you got a lot of ecstasy and a lot of coke and there was a
different array of substance abuse. Now it is almost all drinking. Drinking is
ubiquitous, and so is pre-drinking, or pre-gaming; they call it pre-drinking
at IU because who needs a game? And binging has massively increased, so that is kind
of more than bad risk communication. That is unintended consequences. Because we
have very, very few drug busts on campus. And they will now, you know, go to the
rooms of people who are smoking pot, and instead of just the
RA saying dude, that stinks, they will take action against them. So we have records of
these things, and they have changed their substance abuse, but they have changed their
substance abuse in ways that are probably more physically damaging. So is that a
success? I don't know. I guess it depends on your point of view. Thank you for coming
during the Seattle snowpocalypse.
>>: I have a question. [inaudible]?
>> L. Jean Camp: Well, no, the designs for the underlying system too. Not just what the
risk communication is, but I wanted to communicate that there are client side changes
that you can make. It is not just happy pictures.
>>: I wondered [inaudible]?
>> L. Jean Camp: The trust tool, we built that. Camilo Viecco built that, and Alex Tsow, so
those guys; we have that built. All we have for the insider threat is the design and the
DARPA proposal.
>> Brian LaMacchia: We will see you later.
>> L. Jean Camp: Thanks for coming.