>> Lili Cheng: Thanks, everybody, for coming. This is Jeff Hancock from
Cornell University, and he is the co-chair of the I-School. And what's really
cool, one of the things you guys might know about Cornell is they're just
starting a brand new campus in Manhattan, and so he'll be there. But just love
the title of the talk, because it's a little devious. So Jeff focuses on
social media and one of his focuses is looking at deception, interpersonal
communication and the psychological effects of online interaction. So please
welcome him and enjoy the talk.
>> Jeff Hancock: Thanks, Lili. It's really, really a pleasure to be here.
It's my first time at Microsoft campus on this side of the continent. I've
spent some time at the Cambridge one and also in the U.K. So this is my first
time here. Kind of, I believe, it's the mothership kind of situation, right?
So very excited.
Let me give you a little more background. Lili is totally right about where
I'm at and what I'm doing. And I'm happy to talk about the New York tech
campus and Cornell's idea about what that's going to be. It's a $2 billion
campus located on Roosevelt Island. So it's basically right beside Manhattan.
It's going to be all entrepreneurial oriented. So it's a completely applied
thing with the idea of creating sort of co-located corporate and
entrepreneurial space.
So a bit more about me is I'm a psychologist by training so all my degrees are
in cognitive science. But I live in these other departments so information
science and communication. Hopefully, although I've been in the states for
almost ten years now, you can still hear a little bit of my Canadian accent.
So I grew up in a place called Kelowna in the Okanagan, which is just north of
here a little bit.
And I was looking over the FUSE Labs projects when Lili and I connected. I was
sort of blown away. You know, oftentimes in academia, a lab will have a pretty
serious focus. And what I realized with the FUSE Labs is they sort of have a
similar approach, which is do really cool stuff they are interested in.
So I'm going to do a kind of talk I've never given before today, which is
rather than doing one big, long, deep dive on a program of research that we
have, I'm going to do three medium-sized dives.
And they're going to be on these three things. And so again, unlike most of my
talks, there won't be one over-arching theoretical perspective. There will be
theories associated with each of these phenomena, but instead I want to talk
about a sort of a meta issue, which is, I think, about the distinction between
people's beliefs around social media and actually the dynamics as we sort of
can measure them.
I think there's a huge mismatch between those oftentimes, and a lot of it comes
down to fear.
So let's get started, and it's Friday afternoon, and if you read my abstract, I
did give some fair warning. I'm going to make you guys do some stuff. And
you're going to be the first to ever do this particular task. So I'm going to
start off with an experiment and you have to start by choosing a partner. No
singletons allowed, come on. We're talking about social media. No lonely
crowds out there. Choose a partner. If you're far away, get closer together.
And then you have to decide which of you has the darker colored shirt. So of
the two partners, who has the darker colored shirt. That's not the complete
task. There you go, and now the person with the lighter colored shirt, I'd
like you to close your eyes. Close your eyes very tightly. Do not open them
until I explicitly say so. Remember, Lili told you, I do study deception. I will
find you if you open your eyes.
Okay. So those of you that are wearing the darker shirts, those of you with
your eyes still open, there are some instructions for you, as you can see,
those instructions. So anybody still need to look at those? Everybody's good?
Okay.
Dark colored shirts, thank you. Please open your eyes.
>>: Light colored shirts?
>> Jeff Hancock: Sorry, light colored. Very good. I like the people -- that's
very good. I'd like to say that was a test, but you're just good.
Well, let's go ahead and tell your story, please. Tell your partner your
story.
[multiple people speaking].
>>: Sorry. I feel bad for you.
>> Jeff Hancock: That's okay. Yeah, actually, that's a good idea. Chime in.
Okay. Wrap up your story and let's pause here. Perfect. Thank you, guys.
Now, let's just see a show of hands for -- we have a different test. So who
was telling the stories again? Dark shirts. Okay. So light-colored shirts,
it's your turn now. I'm really curious. You guys had to close your eyes. You
sort of had like a meditative little experience there. Raise your hand, those
of you that are feeling really good today. Any of you feeling really good
today? Okay. So we've got one, two, three, four, five on this side. And one,
two on this side.
All right. Perfectly done. Now, in this highly, highly scientific study,
we've just demonstrated a small something. Here's what the instructions were
for the room when you guys had your eyes closed. So the dark colored shirts
had to do something sort of sad on this side, and only two people raised their
hands. They were this very sad. On this side, five people, you know what? I
feel really great today.
Okay, now, how can that happen? Let me just say, obviously, we do things a
little more carefully than this in the lab. But what happened, in general?
What's happening there?
>>: Contagion.
>> Jeff Hancock: Contagion, right. How does it happen? How did it happen with
your thing?
>>: It seems like empathy, where it's like this person is kind of unloading to
you, and -->>: You still look so sad.
>> Jeff Hancock: I almost feel like we should do some debriefing on this side.
Like hey, everybody, jump up and down. You're going to be all right. No,
exactly, you put yourself into their position, and it can make you feel really
upset. Very good.
Any other mechanisms that it could operate on? So a lot of work has shown that
as soon as you enter a room, without talking to anybody, you can start to feel
some contagion. In fact, most theorists on contagion have argued that there
are non-verbal mechanisms from facial expressions to posture to even olfactory
senses. So those of you, these people over here were telling things as if they
were happy today, and by doing that, they changed the way their body moved and
oriented itself and they may have even been producing some different smells.
Well, one of the biggest questions around emotion and text is that text removes
all of that. So text is this really, really non-verbally poor channel and when
we have somebody to say like this or like that, the fact that there's a screen
in between us gets rid of all of that.
And this assumption that emotion is communicated primarily nonverbally is a
very powerful, very old assumption and the modern legacy probably comes from
Paul Ekman, a very famous, high-impact social psychologist. He was interested
in emotion and facial expression.
And so we have been studying and thinking about how the face, the body, and the
nonverbal cues express emotion. So what happens when we go into text? Well,
when we survey people, what we hear is that most people think that you can't
communicate emotion in text very well, if at all. And that this is the number
one reason why there are so many arguments in e-mail and why, you know, when
lovers need to get together physically, et cetera, et cetera. Text has this
paucity; you can't express emotion very well in it.
This is a very powerful assumption, as I said. When we sample people
throughout the nation, these are the responses that we get. But there's a number
of researchers in the CMC, computer-mediated communication world, sort of
predating the social media dynasty, who argue that we adapt to the verbal channel.
So you were here, you were talking to each other. You not only had the words
that you could use to set yourself into an empathetic space, you also had your
bodies. And what people like Joe Walther have argued is that we can use our
words, our timing and our punctuation to express emotion in text.
And so I'm just going to talk about one really, really, really simple study.
And then study two is actually an interesting study. I think it's the
first one of its kind to show emotion contagion in text. I'll tell you a
little bit about our -- the study that we're doing at Facebook right now that
sort of shows it out in the wild.
So we're going to start with a very simple first study. The very first simple
study is can we tell if somebody is happy or sad in text, okay? Very simple.
It turns out no one's actually done that before. You'd think they would. We
do a really easy paradigm where you have an actor. The person is supposed to
act happy or act sad. Normally, they're very trained, very good at doing this.
But instead of doing it face-to-face, they had to do it in text. None of them
had ever done that before. But we just simply asked them how they did it
afterwards.
The partner came into the lab and thought it's just a get to know you
experiment. We're looking at how people communicate online. At the end, we
asked how was your partner feeling using a questionnaire.
And here's really, I mean, it's interesting only in the size of the effect. So
we asked people afterwards, you know, how happy were they and what was the
relationship quality you think you would have if you were to hang out with them
again. So the people that were acting sad, they were perceived by their
partner as really sad. But look at these effect sizes. If we were to express
that as an effect size, that's greater than a standard deviation. That's about
1.5 standard deviations. It's not even close. It's not even close at all.
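The effect-size computation referred to here is Cohen's d: the difference between the two group means divided by their pooled standard deviation. A minimal sketch in Python -- the means, SDs, and group sizes below are invented for illustration, not the study's actual numbers:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups (Cohen's d)."""
    # Pool the two variances, weighting each by its degrees of freedom.
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical perceived-happiness ratings: happy actors vs. sad actors.
d = cohens_d(mean1=5.8, sd1=1.0, n1=20, mean2=4.3, sd2=1.0, n2=20)
print(round(d, 2))  # 1.5: the group means sit 1.5 pooled SDs apart
```

A d of 1.5 is very large by the usual conventions, where 0.8 already counts as a large effect, which is why the speaker calls it "not even close."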
People had no problem telling when their partner was acting happy or sad.
Acting happy or sad.
And the people, when we asked them, how did you act happy? How did you act
sad. Happy people used a ton of punctuation, and we'll see this is important.
We'll see this is almost like the prosody of online communication. They agreed
a lot more. Whatever the person said, they just agreed with them. They
responded quickly, which is consistent with depression. So depressed people
actually tend to communicate at much lower rates, consistent with the word
count thing here.
So these ones here, we asked them what they did, and here's what they actually
did when we analyzed their language. So they spoke more. They used more -- the
sad people used more emotion terms, especially negative emotion terms, kind of
obviously, and the sad people just disagreed. Happy people reported agreeing
more. And it's true, the sad people actually disagreed more.
So if we ask somebody to, hey, act happy or act sad, they can do it in text,
and it's actually zero problem.
Now, the fact that we had to do this study because it's never been done before
and the fact that it flies in the face of what we actually do is really
shocking to me and when I think about what my most important possessions are,
probably like the love letters my wife and I wrote back when we were dating
would rank amongst them. And they were these incredibly intense things.
And my wife and I are not alone in that. We can go back from Shakespeare to
Elizabeth Barrett Browning. Letters, words can just drip with emotion. And so
this idea that emotion can't be expressed -- sorry, that text can't express
emotion is just one of these problematic beliefs that people have. And I think
it actually has nothing to do, necessarily, with increased disagreements or
difficulties with communicating, say, in e-mail.
There, I think, and we can talk more about this at the end, it's much more
synchrony. So when I send an e-mail, I have zero feedback with how the
person's interpreting it. It has very little to do with whether they
understood my emotions.
I'm also happy to talk about emoticons. I think they're one of the most poorly
named things ever. They have nothing to do, in my view, with emotions. But
they are important. They just have nothing to do with emotions.
Okay. And again, these are things -- and if we want to interrupt and ask
questions, that's totally fine by me too.
So the next question is, can we induce emotion? Now, in the actor case, it's
problematic to see if we can induce it, because that person is acting happy or
acting sad. He may not actually be feeling that at all. And his partner, she
might be picking up something that's his actual thing.
So we need to now induce emotion and then see if the people that are
interacting with that person can actually, if it actually changes their
emotion. So now we're not doing a detection task. We're going to see if we
can watch emotion spread from one person to another.
So following all the emotional contagion literature, and there's a fair bit of
it now in social psychology, we had three-person groups. Turns out that a
group is much more conducive to contagion than just a dyad, though we've shown
it in dyads as well. We have a person that we're going to call the
experiencer, and we do really bad things to them.
So first of all, we get them to watch this movie. Has anyone ever seen it?
"My Bodyguard." Yeah. It's an older one. A lot of times people think it's
the Whitney Houston one. No, no, exactly the opposite, basically. It's really
horrible. It's really a lot about bullying and things. We have to have
tissues in the room because a lot of people cry.
So they think this is study one, and their understanding is this is the study
they've come into the lab to do, and the lab right next door to mine is run by
Mike Shapiro who studies entertainment psychology. So they're told they're
doing the study with them.
At the finish of it, it's very fast, takes five, ten minutes. They're told if
they want, they can do another study with the social media lab, and that's a
multitasking study. Okay? And so after they've done this study where they
watch a thing and they answer some things about their emotions, they come over
to our lab and they do another study in which they're going to be talking with
some people about what are the most important and difficult things you face
when you first come to college. That's task number one.
Task number two is they're filling out anagrams. So the experiencers, they're
filling out anagrams that go to angry and malice and these, you know, very mean
anagrams. And they're listening to some of my favorite heavy metal, which is
actually really terrible stuff if you're not into heavy metal.
So they're multitasking. They're having to do these anagram tasks. They're
having to do this, and they're listening to really, really awful music for most
Cornell undergrads. The reason we need to do that is when we first did these
studies, we would get them to do something like the "My Bodyguard" and then
they would talk to somebody. By the time they were done talking to them, they
would feel fine. And it turns out this is an effect called the talking
therapy, which is quite common.
Now, the control person, there's two of them. They are partners of the
experiencer. They watch what is possibly one of the most boring movies ever,
and they fill out very sort of neutral anagrams, and they listen to, I think it
was Wynton Marsalis. Who I have nothing against. But it's pretty, you know,
easy-going stuff.
And so this is the two conditions. They can be in the negative affect
condition, in which there's an experiencer and two partners, or there's a
control condition, in which all three partners are having this experience.
Okay? Is everybody good with that design? Okay.
So there's 22 groups of three, two conditions, tips for surviving freshman
year, to be clear. We have a film clip they do. That's study one. They're
asked if they want to do some more. They do the multitasking study, which is an
activity, and then they fill out the final questionnaires.
So first of all, were our experiencers feeling bad? Yes. So using the PANAS,
which is a pretty common measure of emotion that looks at positive affect and
negative affect, you can see that people in the negative affect condition
overall were experiencing a lot of negative emotion. And particularly, they
were feeling really intense negative emotion. These people were quite upset
during the whole thing, which is what we want.
Now, on the end of it, we use another measure, another emotional measure so
they didn't get too savvy to the fact they were looking at emotion both times.
We used the circumplex, which we can also take a look at it. It's a little
different from the PANAS, but it can give us dimensions of emotion that are
similar.
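As background on the measure: the PANAS is a 20-item checklist, ten positive-affect and ten negative-affect adjectives each rated 1 to 5, and each subscale is scored as a simple sum. A minimal scoring sketch, with the item ratings invented for illustration:

```python
# PANAS scoring: each subscale is the sum of ten 1-5 ratings.
positive_items = {"interested": 4, "excited": 3, "strong": 4,
                  "enthusiastic": 3, "proud": 4, "alert": 3, "inspired": 3,
                  "determined": 4, "attentive": 4, "active": 3}
negative_items = {"distressed": 2, "upset": 1, "guilty": 1, "scared": 1,
                  "hostile": 1, "irritable": 2, "ashamed": 1, "nervous": 2,
                  "jittery": 1, "afraid": 1}

positive_affect = sum(positive_items.values())  # possible range 10-50
negative_affect = sum(negative_items.values())  # possible range 10-50
print(positive_affect, negative_affect)
```

The two subscales are scored separately rather than combined, which is why the study can report high negative affect independently of positive affect.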
What matters here is not the experiencer. We can see that the experiencer,
despite talking to other people that weren't upset, still is feeling quite
upset. Quite tense. The really key comparison is between these two. This is
how the people in the neutral groups were feeling when they were watching that
bland movie and bland anagrams, et cetera. Here's how they're feeling.
Now, these people had the exact same experience as these people, except they're
talking to this person. So our key difference is here. And although this is
rather slight, it is significant, and what we can see is that these people are
much more tense after talking to this person than this person.
Couple key points here. So these people aren't acting at all. They didn't
discuss how they were feeling. They didn't know that their emotions were being
manipulated. And yet, through text, these people were feeling much higher
negative affect afterwards than not. So this is, as I was saying, the first
demonstration of emotional contagion in a laboratory setting.
Yes?
>>: Can you just clarify real quick. Was there verbal talking or texting? You
said they talked and that they text.
>> Jeff Hancock: Sorry, texting. The world for me has merged into text. So
when I say talk, I mean text.
>>: Okay. There was no physical cues?
>> Jeff Hancock: Never. No physical cues. They didn't even see the other
person beforehand. Here and then there.
So how is tension different from negative affect?
>> Jeff Hancock: Yes, excellent question. So it turned out that with negative
affect, it's too broad and so we still see some difference. It's trending in
the right direction, but it wasn't significant. When we looked inside of
those, the intent -- and I'll show a few more. The intense negative emotion
ones popped. So intense negative emotion is above and this tension one goes
above.
The general negative affect was, I guess, too diffuse of a measure on the
circumplex. And so we don't see that. Tension, in particular, is about
feeling anxious. And so we think that it's partly contagion so you can see
these people are feeling very anxious as well. We think, and we're doing some
follow-up studies, this is also about the fact like what is wrong here? Like
something isn't right, which may be so much about the person saying earlier
about empathy, where you can detect something's weird and that makes you upset,
rather than necessarily this just flowing. Here and then Lili.
>>: When they pulled up to do their texting, were they talking about neutral
topics? Was there something they were supposed to talk about, or were they
getting indignant?
>> Jeff Hancock: No, right, good question. We do analyze the language
afterwards. I'll show you that in a second. We did a standard emotional
contagion task, where it's important they talk about emotionally meaningful and
typically negative things because you want the discussion to match the emotion
you're doing.
And so it was about the difficult things you encounter in your freshman year
and some survival tips for getting over those hard things. But everybody does
that, including the neutral groups. Lili?
>>: Do you find that negative feelings are more contagious than happy
feelings, or are they kind of similar?
>> Jeff Hancock: The literature suggests that negative are much more
contagious. But we have another study where we were doing a happy condition,
and it was contagious as well. We haven't replicated that. We've replicated
the negative emotion, but we haven't done the happy. But the whole literature
says if you're going to do an emotional contagion study, do negative emotions.
And the more intense, the better.
>>: The neutral group, what did the partners get to talk about?
>> Jeff Hancock: Yeah, this is a little bit confusing. So, in fact, this
neutral group, this consists of three people talking on their own, and this
consists of two people talking to this person. So these people never interact
with these people. Good question. Yeah. And, you know, we really struggled
on how to do this, because this is being represented by one person, two people,
three people, and these two people are hanging out.
>>: [inaudible] texting on.
>> Jeff Hancock: Yes, they were texting on how to survive freshman year -->>: No, no, I mean the devices.
>> Jeff Hancock: Oh, it was G-chat. So they were sitting at computers in
separate cubicles in the lab after they'd just finished doing the big emotion
study thing. Yeah?
>>: So if the neutral group had [indiscernible], would that -- is there any
study on how, like, two people get really neutral, just like what we saw from
the negative affect thing?
>> Jeff Hancock: Yeah, yeah. It's really difficult to know what to do with
this other group. You could rely on random assignment to have these people
just be acting neutral on average, but we decided to specifically hold them to
a certain type of thing that was the same as this.
If anything, that procedure works against us, because these people are getting
really nice, neutral, bland primes the whole time they're talking to this
person. If they weren't -- if they were just interacting and not getting
these continual primes, you could imagine this going up. But we wanted a more
stringent sort of, you know, control.
Great questions.
Then we did look at the language. So what are the mechanisms? How is it that
text, words can act as a conduit of emotional contagion? And the answer right
now is we don't have a very good idea. There are some promising directions.
So, for instance, here's disagreement terms. We use a program called LIWC --
pronounced "Luke" -- which probably many of you are familiar with. Very common
in the psych literature now. It counts the number of times a certain type of
word occurs, like disagreement, and then it plots it as a percentage of the
total number of words that the person used.
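The counting approach described can be sketched in a few lines. This is not LIWC itself, which ships large validated dictionaries with wildcard stems; the tiny disagreement lexicon below is purely illustrative:

```python
import re

# A toy category lexicon; real LIWC categories contain many validated
# entries, often with wildcard stems like "disagre*".
DISAGREEMENT = {"no", "not", "never", "disagree", "wrong", "don't"}

def category_percentage(text, category):
    """Return a category's word count as a percentage of all words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in category)
    return 100.0 * hits / len(words)

msg = "No, I don't agree. That is just wrong."
print(category_percentage(msg, DISAGREEMENT))  # 3 of 8 words -> 37.5
```

Normalizing by total word count is what lets you compare a terse message against a chatty one, which matters here because happy and sad participants produced very different word counts.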
You can see the people that were feeling really upset used a lot of
disagreement words, which matches back on to the actor study that we had. But
the people that were feeling a lot of tension, the partners, used a lot of
disagreement words too. So it could be that linguistic style matching is the
mechanism by which this happens, but we're still working on this sort of
approach of linguistic style matching.
So I don't have the slides from Adam yet, but we were working with Adam Kramer
at Facebook, and they're doing two studies that build off of these experiments.
So these are very controlled, small-scale experiments.
So at Facebook, the data science team is trying to scale this up and one
study's been completed. Looking at the negativity/positivity of your status
updates that you see, okay. So what's important there is it's non-directed.
It isn't about, you know, you tell somebody about your trip. It's just like
hey, I'm in Seattle and I'm having a really fun time at MSR.
Okay. So any of you that get my updates in your feed would have just gotten an
uptick on positive words. Facebook can detect that using a modified version of
this LIWC program. And then you can also measure the number of negative words
that come into the update, okay.
So what you can do for each person, and it looked at about 500,000 people, is
you can create an incoming positivity index for their updates. And it's a
tough one, because you don't even know if the person looked at that update or
not. Nonetheless, here's their update stream for a day, take all of those
updates that were coming in that I may have seen, create a
positivity/negativity index.
Then what I do is track me and I look at the number of updates I produce, a
much smaller amount, I look at the number of updates that I produce on day one,
day two and day three afterwards.
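The design just described -- score a day's incoming feed, then track the person's own subsequent updates -- can be sketched roughly as follows. The word lists and the index formula are assumptions for illustration; the talk does not specify Facebook's actual implementation:

```python
import re

# Toy lexicons standing in for the modified LIWC dictionaries.
POSITIVE = {"fun", "great", "happy", "awesome", "love"}
NEGATIVE = {"sad", "terrible", "awful", "hate", "angry"}

def count_hits(text, lexicon):
    words = re.findall(r"[a-z']+", text.lower())
    return sum(1 for w in words if w in lexicon)

def incoming_positivity_index(feed_updates):
    """One plausible index: (positive - negative) hits over total words,
    across every update that appeared in the person's feed that day."""
    pos = sum(count_hits(u, POSITIVE) for u in feed_updates)
    neg = sum(count_hits(u, NEGATIVE) for u in feed_updates)
    total = sum(len(re.findall(r"[a-z']+", u.lower())) for u in feed_updates)
    return (pos - neg) / total if total else 0.0

feed = ["I'm in Seattle and having a really fun time at MSR",
        "Another great day"]
print(incoming_positivity_index(feed))
```

As the talk notes, the index is computed over everything that appeared in the feed, with no way to know which updates the person actually read, which makes any effect that does show up all the more striking.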
And what we're finding is a very small effect that in the statistics that we're
trying to figure out how to use, because I've never done, you know, studies
with a half million people, I'm an N equals 22 kind of guy. It's there. And
it's exactly the direction you would predict.
So people that had higher incoming but non-directional, so happier updates,
produced more happy updates for the three days afterwards. And the people that
had negative produced more negative afterwards. For three days, which is
shocking to me.
And as I said, the effect is tiny. We're talking like, say, a 1.03% change for
the happy going up, and a 0.97% change for going down. But when you multiply
that by the number of people, 500,000, and the number of updates that are
produced, we're talking about three or four hundred thousand more positive
updates produced by people by that 1.03%. And about 600,000 more negative ones
being produced by that 0.97% change.
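The scaling argument can be made concrete with back-of-the-envelope arithmetic. The per-person update count below is an assumption, since the talk does not give one; it is chosen only to show how a roughly one percent shift multiplies out to hundreds of thousands of updates:

```python
# Back-of-the-envelope scaling of a ~1% contagion effect.
# All per-person numbers here are assumptions, not figures from the talk.
people = 500_000
baseline_updates_per_person = 70  # assumed baseline over the window

baseline_total = people * baseline_updates_per_person  # 35 million updates
pct_change = 1.03 / 100  # reading "1.03%" as a 1.03 percent shift

extra_positive = baseline_total * pct_change
print(int(extra_positive))  # hundreds of thousands of extra updates
```

This is the same logic as the smoking analogy: a per-person effect too small to notice becomes large once multiplied across a population.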
So it's like smoking, you know. Like smoking effects are tiny. But when you
multiply them out to the population, a lot of people can die. So not quite the
same analogy, but the same idea.
Okay. So that's emotional contagion. That's our medium depth dive. And we've
done a bunch of studies. The last one with the Facebook one, we're actually
manipulating some people's streams so that some people for a day will get no
positive status updates and some people will get no negative.
I didn't think they'd ever go for that, but they are. So can't wait and I'll
let you guys know what we find.
>>: [inaudible].
>> Jeff Hancock: That's been the really difficult part. So we ran it once
already in December and the data did not work out, because we didn't have the
proper sort of controls by when we needed to proportionalize by. But they've
just re-run it, and I'll get the data soon, I hope.
Okay. So that was emotion contagion. Going to switch over to another topic
that's somewhat related and perhaps now we'll start to see the theme that comes
through. You're going to see more and more in the news, this week it just
started, but next week is when it will probably hit the mainstream, but some
studies just came out in, I believe, JPSP, but maybe PSPB, showing this
contrast effect.
So we all feel worse as people because we look at people, other people on
Facebook, and the other people on Facebook are optimizing their lives. So they
all look like they're having an awesome time and we look at that and we know
we're not having as awesome a time, and therefore we feel crappy. Okay?
So there's this study coming out, and there's a number of other sort of thought
pieces that have suggested the same sort of thing. And it's compelling, and
the study, I think, is well done. They only look, however, at self-report is
one issue and they only look at a part of what, say, something like Facebook
is.
So another part of Facebook is looking at your own profile. Yes, we do spend a
lot of time looking at others, but we also look at our own. We wanted to look
and see, first of all, is it the case that 850 million people would use
something that would make them depressed on average, which probably you're
going to start thinking, oh, that's similar to the text and emotion thing.
Second of all, there are some very important things about the Facebook profile
that suggests that it should act quite differently.
So this is work that was really driven by two of my Ph.D. students that just
recently graduated. Amy is now a professor at Indiana, and Catalina is at the
University of Wisconsin. And Amy did this first study, kind of dragged me
kicking and screaming into it. Self-esteem. I wasn't very interested, but she
wanted to just do a real simple study. Let's get people to look at their own
Facebook profile in one condition, look at a mirror in another condition, and
fill out some surveys that were unrelated to Facebook or mirrors in a third
control condition.
It turns out this is very similar to a study done in the '70s where people,
when they looked in a mirror, they felt worse than if they sat in a room and
didn't look in a mirror. The thinking from that was this idea that when we
look in a mirror, it highlights all the ways we are not matching our idealized
self.
Amy's insight was that Facebook allows us to optimize that self a little bit.
Maybe when we look at our own Facebook profile, we'll feel better about
ourselves. Maybe we're closer to that ideal self.
And indeed that's what she found. So it was a very nice, quick, light study.
Look at your own Facebook, look at some control material or look at a mirror.
And sure enough, with Facebook you reported higher self-esteem afterwards than
if you looked in a mirror.
Okay. That caused Catalina and I to go into this in a little bit more depth.
And we wanted to come at it with a very big theory and, in fact, probably many
of you have heard of it. It's been tested hundreds and hundreds and hundreds
of times. And actually, there's a famous psychologist in the Netherlands that
was just found to be fraudulent on 19 of his studies, faking data or whatever.
It was on self-affirmation theory. And we cited some of his stuff. Yee-haw.
Okay. So we looked at self-affirmation theory. It's actually pretty
straightforward. It says we all want to feel good about ourselves. We've
built up a number of defensive mechanisms to make sure that that idea of us
being pretty awesome is in place. And the third is that when we are affirmed,
when we feel good about what we really care about in life, and I'll go over
what that is, then these defensive mechanisms get reduced, okay. So when we
really focus on what's important to us in life, then our universal need for
positive self-regard doesn't require as many defensive mechanisms.
And don't worry, though I thought about it, I was going to get you guys to do a
self-affirmation task. I'm not going to make you guys feel good about
yourselves. Don't worry.
Okay. So what really matters with self-affirmation, the way to do it is I
would get you guys to write down a list of the ten things that are most
important to you. And many of the things that would come up would be your
family, what you do here at Microsoft, the relationships that you have with
people here.
And when you write that list, I would then get you to, say, take the top three
and write a paragraph about how important those top one, two or three things
are. But really, social roles, relationships, values, identities, these are
the key parts of it.
So if you think about what's on Facebook, for example, a lot of the information
that is on your own profile actually speaks to all that. Relationships, social
roles, the personal relationship you have. I mean, this is the raison d'être,
other than advertising, of Facebook. Right? I mean, this is what we do there
is we visualize our relationships.
And it's really about identities. This is one of our honors students. She's
dancer, she's a sailor and sometimes a banana. And she puts all of those
things up on her Facebook profile because those are different parts of her
identities, and when you talk to Kate, it's really important. She loves
dancing. She's an instructor now. Whenever she can, she goes sailing, and
she's actually a pretty fun-loving person. And those are all important
identities.
So Facebook in a way, your own profile highlights, visualizes, if you will, the
very things that self-affirmation theory says are important. So what we wanted
to ask is that effect that Amy Gonzales saw with self-esteem, is it operating
through self-affirmation?
So to visualize self-affirmation, here's this thing, you threaten somebody, you
make them feel bad so you say you actually suck at X. And then what you
typically see is defensiveness kicks in. So if I were the person to say that
you were bad at something, then your defensive mechanisms could include
thinking that I was an idiot, thinking that the task was dumb, thinking that the
actual advice I was providing was incorrect, those sort of things.
So we have these built-in mechanisms that can act to defend us against ego
threat.
And the very basic, simple idea of self-affirmation is when you go through this
self-affirmation process, it removes the need for defensive mechanisms to kick
in because they've been sort of built up already in focusing in on what is
important to you.
So self-affirmation decreases defensiveness. So that's what we're going to try
to measure by getting people to look basically at Facebook or at somebody
else's Facebook profile.
Now, in the standard way of doing self-affirmation, you would do this sort of
thing. Get you to write about important values, as I said. So here's one
person's actual thing. They say I'm always comforted by my relationships and
grateful I have such wonderful people in my life.
Here's one that's about the least important values. This is again taken from
the study. My least valued choice, it may be more important for others who are
more active in the community. So they're talking about politics here, okay?
So we can see this is a typical way that many, many, I mean, hundreds of
studies have manipulated self-affirmation. The way we were going to manipulate
self-affirmation is you came in and we gave you about five minutes to look at
your own profile. You weren't allowed to go off of your own profile page. But
you focused on your own profile.
Now, let's say I had Lili come in, and she was assigned to the self-affirming
condition. The Facebook self-affirming condition. She would come to this.
And let's say Scott was the next participant. And he was assigned to the
control condition, then he would see Lili's Facebook profile.
So just to be clear on that, the only profile that was ever looked at in this
case is Lili's, and in one case, Lili was looking at it. She generated all the
content. It's her profile. Or Scott came in and he looked at Lili's profile.
So the information that was viewed by the participants in the two conditions is
identical. That's why they're yoked. Only thing that differs is in one
condition, they've actually produced it themselves and it's about their loved
ones and their families and their values. In the other, it's not.
So just to step you through it, we had to threaten them somehow. So the way we
did that is they had to give a speech and then afterwards, we would give them
the chance to self-affirm. So they either went on Facebook and looked at their
own profile or they looked at somebody else's. Okay. So you guys understand
that. Or in the traditional condition, they either did it by writing one of
these two essays, so they would be assigned to one or the other. So there's a
Facebook self-affirming condition and there's a traditional, written-out
self-affirming condition.
And there's the two controls. Then after they'd done that, we threatened their
academic self-concept by telling them that they were really bad. And this is
what everybody got. These are Cornell students in psych or in communication.
This killed them. And so this is what -- like I said, every single person gets
this. So they've either been self-affirmed, Facebook or through the
traditional method, or they've just had a control. And then they get
threatened.
Afterwards, we want to know, we're really interested in how you thought this
goes, because we really want to make sure that when we do these online speeches
that this system works well. So what do you think? And we asked them what they
thought of the feedback, about the competence of the evaluator, the task, we
allowed them to make some attributions, and then how much they liked the
evaluator, okay?
And I could build up the slides for each of those independent things, but you
can see it's identical. Red is when they looked at their own profile, and gray
is when they look at a stranger's. Is that right? Stranger's profile. Yeah.
So feedback accuracy lower. Evaluator competence, not very competent. Tasks,
stupid. Attribution, dumb. And attraction to evaluator was low. So in each
and every condition, I have a star there, because usually that slide's a build,
but I didn't want to waste your time.
So yes. Facebook, when you look at your own profile compared to somebody
else's profile, reduces your defensiveness. But what's really, really, really
interesting and I'll tell you about the replication we've just done, is this
slide here. This has been done hundreds of times in psychology. It's made it
into science. Okay? So this is the classic self-affirmation effect. I mean,
it can make you less likely to drink, less likely to smoke, more likely to
improve your grades the next semester. I mean, it's insane, except for the
ones where that guy was lying about all of his data.
So what we see is this. This is the classic thing so opportunity to
self-affirm, they wrote an essay about their most important value and this is
how much they responded to the feedback. They were like yeah, okay, that was
interesting feedback. Maybe I should use disfluencies less often, et cetera,
whereas these people didn't really like the feedback.
And take a look at what's happening with Facebook. It's identical. There is
an interaction effect here. This difference is actually statistically the same
as that. But if I were to mask which ones these were, you would not be able to
tell which ones. So Facebook has the same self-affirmation effects, at least
in terms of defensiveness, as does the classic values writing thing. And there
was no values writing there. They literally looked at what is on their own
Facebook profile, and the people that are there. Yes, sir?
>>: So we've talked in the past, you know, as an industry about the
opportunity for sort of civilization hacking by -- I tap into the front page of
the New York Times and I can make the stock market collapse. I could totally
imagine that Facebook decides that they're going to have every one of 850
million people go back and look at their profile page right before a
presidential election and oh, hooray, status quo, or whatever. Or ah! The
world is all caving in.
>> Jeff Hancock: I mean, 850 million people. It's a lot. Especially if
there's these tiny patterns. You know, I ascribe no motives to what they're
doing here, but you can imagine why they're interested in this. If
there's a way for them to figure out how to make people happier, and when
you're happier, you tend to click on ads more, tiny effect, huge effect.
But, you know so yeah. I think that there's -- I mean, to me, this is crazy.
They didn't do anything. They looked at their own stuff. But it's all stuff
that we care about, our identity and the people we love and care about.
In fact, when we asked them afterwards, how do you feel, bunch of adjectives,
this sort of semantic differential approach, you can see that it makes you feel
good, it makes you feel loved and supported and giving and grateful. And so
we've replicated this. I'm just going to tell you all, I won't give you slides
-- yes?
>>: Sorry.
>> Jeff Hancock: No problem.
>>: I don't know if you mentioned this, but was there a mean number to how
many friends each person had?
>> Jeff Hancock: Ah, yes --
>>: Or activity.
>> Jeff Hancock: So that's what we don't know. And what we're doing right now
in the lab is we're doing some eye tracking studies to find out how long they
spend on different sections and then see if that affects self-affirmation. So
our guess is when you spend a lot of time -- timelines change and everything.
But when you spend a lot of time looking at your friends and who your friends
are, we think that's going to have a really powerful effect.
>>: But the number of friends the average person --
>> Jeff Hancock: We don't yet know that. We're now recording all that sort of
stuff and doing that. In the replication of this study, we wanted to really
get beyond self-report. That was our first sort of approach, rather than
figuring out exactly what on Facebook is doing it. In retrospect, we probably
should have done that first.
But in this case, we did something called the IAT, where we did an implicit
association task. So let's not ask, see whether they're feeling happy or loved
or self-esteem or not. Let's look at whether, with an implicit task, they have
high or low self-esteem.
So using an implicit association task, after you look at your own profile, you
have significantly higher self-esteem after looking at your own profile versus
somebody else's. That's not self-report.
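The implicit measure works from reaction times rather than self-report. Here is a rough sketch of how an IAT-style score can be computed; the function name, sample latencies, and simplified single-block scoring are my assumptions for illustration, not the study's actual pipeline:

```python
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT-style D score: the difference in mean reaction
    time (ms) between incongruent (self + unpleasant) and congruent
    (self + pleasant) pairings, divided by the pooled standard
    deviation of all trials. Larger positive values suggest higher
    implicit self-esteem."""
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Hypothetical latencies: faster responses when "self" is paired
# with pleasant words than with unpleasant ones.
congruent = [620, 650, 640, 610, 660]
incongruent = [780, 810, 760, 800, 790]
print(round(iat_d_score(congruent, incongruent), 2))
```

The real scoring algorithm adds error penalties and latency trimming; the point here is just that the measure is derived from latencies, not from what people say about themselves.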
And those same people, we then get to do a math task afterwards, okay? And the
math task is, you take 1984 and subtract by seven for three minutes. Yeah, I
know. Seems easy. After a while, you're like God dammit!
What we did then is we can measure two things. You can measure how many did
they do and how many did they get wrong. You can do a bunch of other things by
combining those two numbers, but those two turn out to be the ones that matter.
They do worse if they've looked at their own Facebook profile.
When we looked at that, we're like oh, boy, okay, we're going to have to be
really careful we understand what's going on here, because you could imagine
the media, for example, jumping all over that.
What happens is the people that are self-affirmed, they have higher
self-esteem, they've looked at their own Facebook profile, they just choose to
do less of the operations. They subtract seven fewer times. So they get less
far along. Whereas the people that haven't been affirmed, they're like, yeah, I
got to do well on this task. This task can define who I am. You know, the
Cornell really anxious students, whereas the affirmed people are like this task
is less important to me.
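The two performance measures on the serial-subtraction task can be sketched in a few lines; the scoring scheme below is an assumption about how such responses might be tallied, not the study's actual code:

```python
def score_serial_sevens(responses, start=1984, step=7):
    """Score a serial-subtraction task: return how many responses the
    participant produced (effort) and how many deviate from the
    correct descending sequence (errors)."""
    errors = sum(1 for i, r in enumerate(responses)
                 if r != start - step * (i + 1))
    return len(responses), errors

# Hypothetical participants: the affirmed one is accurate but simply
# performs fewer subtractions in the allotted time.
affirmed = [1977, 1970, 1963, 1956]
non_affirmed = [1977, 1970, 1963, 1956, 1949, 1942, 1935, 1928]
print(score_serial_sevens(affirmed))      # (4, 0)
print(score_serial_sevens(non_affirmed))  # (8, 0)
```

Separating attempts from errors is what lets the "worse" result be read as reduced effort rather than reduced ability.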
We think this has all kinds of implications. For instance, I play hockey a
lot. I play with the coach of the Cornell hockey team, and he doesn't allow
any social media for three hours before or after a game. And we think we're
just about to write a paper together. It's for this exact reason. If you're
affirmed, you're not going to look back at that game and think about how could
I improve myself.
Or right before the game, I'm not going to try really hard, because you know in
my life, I've got my family, everything's good.
So we think that it's actually important in some cases, like athletes, to not
feel affirmed before or after a game, but to instead have that drive. Yeah?
>>: So how long does this positive self-esteem last?
>> Jeff Hancock: That's a great question. We're not talking about the trait
level; we are operationalizing it at the sort of state level. So minutes to
maybe hours. But there's a very good question here. How long afterwards? So
we often see, and when we show this stuff to our students, I have 180 in my
social media class, they all resonate with this idea of, like, before or after
a really hard test or task, like an assignment, they spend a lot of time on
there. We call it procrastination, but it may be serving some psychological
value.
So this, I think, is what I wanted to say to stand in contrast to this idea of,
you know, Facebook makes you feel stupid or not as good as other people. There
are these contrast effects that are coming out right now in a number of other
studies.
Okay. So that concludes our second dive. I need to switch over to a different
program to go into our last one. So are there any questions on the -- on that
so far, on the sort of self-affirmation stuff? I actually think it's really
rather intuitive and makes a lot of sense. And this fairly negative view,
psychological view has to be balanced by the fact that it's a multifaceted sort
of experience.
Okay. Here's our last dive. Hope everybody's feeling all right. And here I'm
going to go less into the details of the experiment. This is probably the work
I've done the most consistently over the last ten years so I'm going to give
high level stuff and you guys feel free to like ask me questions about the
methods.
But lies turn out to be really interesting, because there are many ways, when
you think of the internet even more broadly, that you can imagine
technology facilitating deception. So here's a company, an actual company
based in Chicago. And they're an alibi company. They provide alibis for
people. So you would call them up, and you could -- let's say I wanted Lili to
believe that I was in Vancouver tonight staying at the W there, not here.
So I would call this company up and say, look, I need this person to know that
I'm staying in Vancouver. Sure, no problem. Give her this number. So when
Lili dials that number, what she hears is hello, this is Vancouver, the W. How
can I help you? She says, I'd like Jeff Hancock's room. They say certainly,
no problem. And then they patch her through to my cell phone or to my room
here in Washington.
And they have all kinds of services like that. So they can do a bunch of
things by e-mail. They have some fax. Basically, what they're trying to say
here is they're multimedia. This is just coming off of their web page. And I
see, whenever I talk about this, people say truth alibi, okay. So you get this
number down. Here's the various kind of services and the current price list
that you can have. And this fits our beliefs around how technology can affect
deception.
So, you know, like this, it's like a rescue call service, you know? I mean
some of it's sort of like really? Really? People will pay for that? And
we've heard, you know, there's these lie clubs. So people form a club in which
whenever they need a lie told via text, they can send it to the club and then
somebody will send that text for them.
So if I wanted to get out of work early, I could have somebody text and say
that something's happened to my wife and I need to go home and I could show
whoever I was meeting with that.
So we can see, there's lots of ways in which technology can do that. Here's
another example. So there's a lot of really great pages out there that have
allowed people that are ill or that have ill family members to generate money
for healthcare costs. Of course, now we're starting to see these fake pages
where people are faking illness or a baby's illness to get money. Here's a
very common one. So you've probably all said on my way and/or received a text
about that, and, in fact, you weren't on your way.
These are something we call butler lies. So, you know, right now, Lili or
Scott with my cell phone number, can get ahold of me literally. They could
call me at any time during the day. And this sort of 24-hour, always-on
connectivity, as we believe and have started to document, has increased these
things called butler lies, statements we use to buffer that connectivity. So we see a lot
of people saying, I'm sorry I didn't get your call. My phone was dead. Sorry
I didn't call you back. I was in a bad reception area.
So we've done a lot of these butler lie studies. We call them butler lies,
because back in the day, butlers actually did this. We found a manual from the
18th century that was for butlers, and one section of it was about handling
visitors. And they actually talked about check with your master. And if the
master is busy, you go and provide some excuses. And they provided excuses you
would give visitors.
Well, we have to do that ourselves now. We find that in all -- we've done about
four of these SMS studies. Ten percent of messages, text messages are lies.
So this is a retrospective identification task and about 20 percent of all the
lies are these butler lies.
So that means about one percent, one to two percent of text messages involve a
person providing some excuse or account about why they can't interact or
couldn't interact or won't be able to interact in the future. Again, sounds
small, but when you think about the billions of messages sent per day --
>>: Can you explain how you got your estimates?
>> Jeff Hancock: Yes, absolutely. So we -- it's perfect. Perfect example of
asking more about methods. What we do is we bring students. We now have a
community sample. They come into the lab, but we also have a web version of
this. Come to the lab and we explain, what we're interested in is deception.
Here's what it is. Sounds bad. Most people tell you don't do deception.
We're not interested in whether -- in judging you. Think of the lies as like
little birds. And you're a bird watcher. We just want you to count the birds.
Open up your phone and write into this web form the last 30 messages that you
sent and beside indicate whether it was a lie and then write something like
that would be more honest. If you were to say something that was truthful,
what would you say? And that way we can just code the kinds of lies.
And people aren't always perfect. So sometimes they'll say, like, oh, man,
that guy's head is as big as the moon. That's not a lie. And so what we do is
when we go through it, we can take those out. We call those jocularity. So
it's a retrospective task. If anything, this is an underestimate.
But we find them to be very, you know, we look through them, and it's pretty
impressive. They're very honest. The lovely thing about this, we see these
little lies all the time, "on my way," these little things. But we also see
people like what are you talking about? I'm not at the bar. And then they say
they were lying and I'm actually at the bar. Or I'm not with that guy. I'm
with that guy. So they're really fantastic lies.
>>: What are the other -- is this the most common?
>> Jeff Hancock: No, this is about -- yeah, like it's a fifth and so a lot of
other ones are about we just finished this one. Content. So they can be about
facts, feelings, activities, and explanations. And in text messages, the most
common are about feelings and about activities. And the activities
make sense. That's really what these are, lying about what you're doing and
who you're with and stuff like that. Because texting is really about
coordination, usually.
And then the feelings one was sort of a surprise to us. But the generation
that we're looking at right now, I think the community sample would be
different. They're using it for more than just coordinating. They're actually
expressing a lot of socio-emotional and relational stuff. So yeah.
Now, those are sort of the smaller, mundane things, every day. We also have
been working with the D.O.D. for a number of years now looking at different
types of messages they'd like to detect or to triage so if they have -- if an
agency has, say, an internet tip desk, they want help in trying to sort the
messages into these people that are crazy, don't have to worry about them.
This person looks like it's really important that we look at. This one looks
deceptive; maybe it's counterintelligence from some other agency.
And this is one of the ones that actually started us off. This is an actual
message that the CIA had. And it was in a bunch of other messages. The
question is could you detect it based on others, because as you might imagine,
they often don't say terrorist number one when they're in the chat room.
So it was a coded message. We would have no idea how to get this. Here's the
code. I have to get some glasses. And glasses refers to the city of glass,
which is Hong Kong, before I get incense, which is the big Bazaar in Bangkok.
And then wedding is the bombing of actually -- switch that up. Wedding is the
bombing event and Thursday is the anchor. So you have a three-day sort of
itinerary here.
And so you can see there's just so much text. In fact, I think we're at this
really amazing point in human history where for 60,000 years, we've been
talking to each other. And as we talk, everything disappears. It's what Herb
Clark calls evanescent. So our words just disappear as we say them. 60,000
years, a long time to be talking to one another. And in the last, say, 10, 20
years, that has begun to change radically.
So now, a lot of what I say gets recorded. In fact, I'm just now looking at my
e-mail and comparing that to my academic output. And my e-mail to academic
output is about 15-to-1 so far, and we're still working on analysis. I do way
more e-mailing than I do actually writing what I'm paid to do.
So much of what I say, be it text messaging, e-mailing, blogging, on Facebook,
et cetera, et cetera, is now being recorded. Politicians have lived in this
environment for a while and have started to adjust to the fact that everything
they say is recorded and is searchable and copyable and analyzable. But we
regular folk are just now getting -- not actually getting accustomed to it.
We're now encountering it.
So it's a really fascinating time to be looking at text and deception.
So here is a definition we'll work with for the remainder of the talk.
Intentionally creating a false belief. Those two pieces in red are key. So
the intentional part is to make sure we don't talk about mistakes as deception.
So if Lili told me the talk was at 3:00 today when it actually wasn't, but she
really believed that it was at 3:00, she really thought it was at 3:00, that
would be a mistake, not a deception.
And when we think about, for example, false claims around the rationale to
invade Iraq and all of the Bush administration statements which we've analyzed,
this intentionality becomes a really key belief or question. Because the 9/11
commission showed that, actually, there were no weapons of mass destruction.
There were no links to Al Qaeda. So we have false statements that created a
false belief. But most of the administration said they really believed that
they had it, although some, like Scott McClellan, who has a book out, say we
intentionally did this.
The false belief part is another key aspect so false belief has to be that
you're generating in someone else a belief that you know to be false. And so
this is the classic thinking of a lie, but it allows us to get sarcasm and
irony out so I could say, oh, Madonna's halftime show was really interesting.
And some of you who maybe know me would think that I didn't think that at all.
Actually, you know what? I actually did like the second half. But the first
part where they're walking out with all the Roman things. I just didn't get
that. Anyways, so you would know I was saying something I believed to be
false, but I was not trying to get you to believe it was true.
So this is the key. These are the two things. You got to avoid mistakes and
avoid including sarcasm. Any questions about the definition? There are
thousands of definitions and in philosophy, it can get a lot more intense. But
this, I think, is a very good pragmatic one.
Okay. So we're just going to go over one of our most recent text ones. We've
looked at chat. We've looked at online dating. We've looked at presidential
speeches. We're starting to look at insurance fraud. There's just so many
ways in which there's text that involves deception and truthful statements.
Here's a review of the hotel at Cornell, and the one question that we started
with about two years ago is can we tell if the person had actually stayed there
or not? And can we do that not by looking at, say, who this person is or where
their IP is coming from or how many times they reviewed, but by looking at just
what the text is.
And so this is what we set out to do. We have a paper that will be coming out
at dub dub dub, the WWW conference, in April. But here's just a little taste. We can see that in
red here, this is hotels.com, that our estimate of deception and I can explain
to you how we get that, shows it somewhat increasing over time, okay? But Trip
Advisor just got sued, successfully, because a company said we think that up
to a fifth of all of your reviews are false -- which means 20 percent.
When we saw that, we're like, that's crazy. A system based on trust like this
couldn't operate with that amount of deception.
So instead what we're finding is it's relatively low. And if you look at
the error bars, even the very worst possible estimate is 7 percent. And
our latest estimate has the mean for something like Trip Advisor -- which
allows anybody to post a review; you don't have to buy anything -- at around 6
percent of these kinds of lies, where you can pay someone who's never been at
the hotel.
Other sites like hotels.com actually require you to book the hotel room, and
you have to have stayed there. You have to have paid for it. Then after the
date of the stay, you can then post a review. So the cost is much higher to do
a review on hotels.com. And sure enough, we see a lot less deception.
But let me get to how we sort of do that. To do that, we have to do our last
activity, I promise. Then we'll wrap up. You're now going to do your second
activity. Same partners, okay? But this time, the people with the lighter
colored shirt, I want you to think about a trip that you went on recently.
People with the dark colored shirt, it's time for you to close your eyes. Go
ahead and close your eyes.
Remember, the dark colored shirts closed their eyes and behaved very well.
Here's the instructions, okay? We have the instructions there. Very good.
Anybody need any more time?
Okay. So partner with the lightest colored shirt, you're going to tell your
story. Dark colored shirts, you're going to open your eyes. Light colored
shirts, tell the dark colored shirts about a trip you went on recently.
[Multiple people speaking].
>> Jeff Hancock: Okay. Wrap your trip up. Excellent. Thank you very much.
Now, dark colored shirts, you have a task now. And your task is to determine
whether your partner was lying to you or not. This is a non-interrogation
task, but actually it's very different than if you get to interrogate them.
Right now, you're not going to be able to interrogate them. You're going to
have to review what you just heard and decide now if you think your partner was
lying to you. And if you think your partner was lying to you, I want you to
raise your hand. Raise your hand. Okay. And keep your hand up. So we have
three people in the room. Hold on. Keep your hands up, guys. Here we go.
So on this side of the room, you were being lied to. Who had your hand up?
Very good. There's our lie detector right there. The only person on this side
of the room that got it right. Now, who are the two people who had their hands
up over there. Callous, mean, horrible people! You accused them of lying.
>>: Callous, mean and horrible.
>> Jeff Hancock: You're callous, mean and horrible?
>>: Planting that in her --
>> Jeff Hancock: Aha, you caused her to think you were deceptive. Oh, played!
Now, normally what I do is I have to let everybody, especially on this side of
the room, debrief because you've been lied to. But just know that I made them
do it, okay?
But let's just ask ourselves a question. This side of the room, only one
person was correct. This side of the room, only two people were incorrect.
The majority were right.
Now, does that mean that this side of the room is much poorer relationally?
They have zero social sense, whereas this side of the room is much better at
that? I mean, there's a clear difference. This is almost a hundred percent
accuracy and this is almost zero accuracy.
>>: Or as you just said, people trust.
>> Jeff Hancock: Exactly right. What you guys just demonstrated is probably
the -- in fact, not probably. The strongest single phenomenon in about 45
years of deception research. It's something called the truth bias. It's
the most powerful effect in all of the deception studies. It is so powerful,
in fact, that a lot of philosophers have talked about the need for it.
We have a very hard time accusing somebody else of being deceptive. It's how
con artists make a living. And philosophers have argued that without the truth
bias, without this sense of being a cooperative listener, that you believe what
the other person is saying is true, language as the human species uses it
simply could not operate. Without that truth bias, there's no language, and
without language, society as we know it doesn't work. So it's really
foundational to who we are
as humans.
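The room's lopsided accuracy follows directly from the truth bias: a judge who defaults to "truth" looks brilliant when judging honest speakers and hopeless when judging liars, with no detection skill involved. A toy model of that (the 90 percent guessing rate is an assumed figure for illustration):

```python
def detection_accuracy(p_truth, p_judge_truth):
    """Expected accuracy when a judge says 'truth' with a fixed
    probability, independent of whether the statement is a lie.
    p_truth is the base rate of honest statements."""
    p_lie = 1.0 - p_truth
    return p_truth * p_judge_truth + p_lie * (1.0 - p_judge_truth)

# A truth-biased judge who says "truth" 90% of the time:
print(detection_accuracy(p_truth=1.0, p_judge_truth=0.9))  # honest side: high
print(detection_accuracy(p_truth=0.0, p_judge_truth=0.9))  # lied-to side: low
print(detection_accuracy(p_truth=0.5, p_judge_truth=0.9))  # overall: chance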
The other main finding here -- I've just listed a few brief things up here --
is that there is no single cue. If I were to ask you guys how could you tell if the
other person was lying, most people would say eyes or something shifty about
the eyes or their face looked funny or they were twitchy. In fact, a recent
survey of 77 different cultures found that the number one cue people believe is
related to deception is the eyes. And for humans detecting deception in
realtime, eyes are completely non-predictive.
It turns out that if you get somebody to recount a lie backwards, they blink
more. But most people don't get a chance to ask them to tell the lie
backwards, because they didn't know they were being lied to. But that's a
little technique if you want to try that.
So the nonverbal cues have really, really poor reliability. There's almost
none. And a recent meta-analysis of all the deception studies -- actually a
little more than 45 years now, almost 50 -- found an average detection rate of
54 percent. Yes?
>>: My question is [inaudible] one, maybe two. How much does risk play into
it? Because it seems like there's very low risk here and so, therefore,
there's very -- as an aside, I changed one fact out of the 40 or so that I was
telling.
>> Jeff Hancock: I heard your story. I could actually tell what you were
doing: taking a story that had happened and just changing one thing.
>>: Right. I mean, so how much does that play into the study?
>> Jeff Hancock: Huge. In fact, it's bigger than you can imagine, I think.
What a lot of people like Paul Ekman and Mark Frank have talked about is the
importance of stakes. So I was a customs officer in Canada when I was in
college and for a year after that. I was on a place called Pender Island.
So a lot of people from Seattle on their pleasure boats would come up through
Pender Island. That's a high-stakes situation if you have a gun, or if you
have a lot of booze or drugs, or are kidnapping a kid, which I never did
encounter, thankfully.
But I did arrest 16 Americans for bringing a weapon into Canada. That's a high
stakes situation, and you are more likely to show nonverbal behaviors then.
It's still not at a reliable level. The highest they can get up to is around
75 to 80 percent. And it's because there's many ways that people can lie, and
there are some people that -- and the reason I say it's more important than you
imagine is what we've concluded in our lab is it's all about context. So I
might be able to lie in one context really well and show no cues. But in
another context, I will show some cues, and so will a whole bunch of other
people.
There's always going to be about 20, 25 percent of people for a given deception
task that can sail through it. And they're literally not giving any cues.
It's not a matter of, like, we haven't measured it. We've measured everything
now, from, you know, heat sensors on down. The only thing we haven't measured
is brain waves, and brain waves would be very, very difficult because you have
to stay still, you know.
Yes, ma'am?
>>: The truth part -- is that based on people really believing that they're
telling the truth, or the fact that people don't want to admit that they've
been lied to and they fell for it?
>> Jeff Hancock: Right, so what you're getting at is something called
self-deception. And for years, I've avoided it because I find it too scary of
a beast. But I just started reading a great book. I highly recommend it, by
Robert Trivers, who is an evolutionary psychologist, and he has a book called,
happily, "Self-Deception." It's fantastic. He basically argues that
self-deception, the kind that you're talking about, is an offensive strategy.
So if you're deceiving yourself, it makes it much easier to deceive others.
That's his main hypothesis, and I think it's quite compelling.
Okay. Well, for the purposes of our talk, though, the fact that there are no
nonverbals that really are reliable all the time is important, because when we
ask people across the country and again in national surveys where do you think
people lie more on the internet or face-to-face or, more specifically on the
phone versus face-to-face, or e-mail versus face-to-face, the mediated channel
always wins. People always believe that you lie more there.
And we found time and again now that if you're talking to somebody you know,
you're less likely to lie in a mediated form like texting or e-mail than if you
are talking to them face-to-face or on the phone.
So text happens to be, with people that you know, a very honest medium so far.
And you can tell one of the reasons why: the recordability of it. Think of
any political scandal over the last ten years: they screwed up by tweeting
something or some photo, or they said something that ended up getting revealed
later.
Now, the reason for why this is important is because we're in this new phase,
we're leaving texts behind us all the time. It allows us and computational
linguists to work together to start to analyze this.
So here are two reviews. One of them is actually a real review from Trip
Advisor. The other is one that we paid an Amazon Turker to create.
They're for the same hotel. I encourage you to try to decide which one is
deceptive.
While you're doing that, I will say that more and more work says that language
is not as well controlled as we once thought. So the reason that Ekman and
those folks focused on non-verbals is there was an assumption that our words
were highly controlled, whereas our body was not. The last 15 years of
psycholinguistics have shown that much of what we say, all of those little
words like "the" and "of" and "very" and "we," these are function words. We
have zero control over them. We don't pay attention to them and we don't know
when we're producing them.
And so about half of our language is out of our conscious control. And that's
where we look when we're looking for clues as to whether something's deceptive
or not.
In this case, you guys are looking at it and probably one of the things you're
looking at like maybe one is more overly enthusiastic or maybe it's more
general. Those two things actually are related. But think psychologically now
about what we know when somebody visits a place. If you experience a space,
your body does most of the encoding of the spatial information. This is called
embodied cognition.
And if I actually don't experience a space, I have to make it up in my mind and
that's going to be much more impoverished. All right. So see if that helps.
All right. So you're going to stay at the James in Chicago, and you're trying
to decide whether you should believe these reviews. Who believes that the
first one is the deception? Okay. And who believes that this one is the
deception? All right. So just a very small number think the second one. The
second one, right? The second one is the deception. So most of you are wrong.
Only about five people were right.
It's very difficult. I actually do not do very well on the tasks when we give
it to ourselves. The reason for that is the algorithm that we use; we're
showing about the top six weighted features in the algorithm. It's a support
vector machine that's using unigrams, bigrams and the LIWC dictionary. LIWC is
the tool I was telling you about before that can look at emotion, it can look
at function words, et cetera. It's analyzing 70 or 80 things simultaneously. So
it's very difficult for a human consciously to look at these parts of language
that we don't pay attention to.
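A classifier of the kind described here, unigram and bigram counts fed to a support vector machine, can be sketched in a few lines with scikit-learn. This is only a toy illustration with invented reviews and labels, not the actual model or its training data:

```python
# Toy sketch of an SVM deception classifier over unigram/bigram counts,
# in the spirit of the hotel-review model described in the talk.
# The reviews and labels below are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = [
    "My husband and I stayed at the James for a trip. Pure luxury!",
    "The bathroom was small and the floor creaked near the window.",
    "We loved every minute in Chicago, the James is simply amazing!",
    "Check-in took ten minutes; our room on the 8th floor faced the alley.",
]
labels = [1, 0, 1, 0]  # 1 = deceptive, 0 = truthful (toy labels)

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram and bigram counts
    LinearSVC(),
)
model.fit(reviews, labels)
print(model.predict(["I stayed on the third floor and the elevator was slow."]))
```

The real model also folds in dictionary-based category features and was trained on hundreds of reviews; this sketch only shows the n-gram-plus-SVM skeleton.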
You can see how we break it down when we work on the theory around this. So
we're now looking at doctor reviews. And believe it or not, there's a lot of
doctor reviews that are faked, which seems wrong at the stakes sort of level
you were asking about. And we believe, theoretically, that we're not going to
see the exact same thing here. So if it's about visiting the doctor's office,
we may see some spatial information differences, but we're going to see other
things.
Just like when we look at online dating profiles. We can detect deceptive
online dating profiles -- not as well as this; this algorithm for detecting
people that have never been in the space performs at 90%, which is insane. I've
never seen anything like that. I made our team re-run the whole analyses
because I didn't believe it. But when we're looking at online dating profiles,
we can get around 70% accuracy by looking at just the text to tell whether
they're lying about their height, weight or income.
So what we're able to do now is look at these texts in very different contexts,
create models specific to the context -- high stakes, low stakes, what the
content is about -- and make a decision there.
And so, yeah, Scott?
>>: All those features, how did you determine what those features were?
>> Jeff Hancock: There's two ways. One is there's a theoretical top-down
approach. So we were very interested in spatial information, so we include a
lot of spatial features. And then there's the bottom-up empirical part where
we threw in a lot of n-gram stuff. Most of that was unigrams, and we let the
SVM use those as it wanted to.
But we would include things from LIWC that we believed to be theoretically
important.
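One way to picture that mix -- a few theory-driven category counts layered on top of raw bottom-up unigram counts -- is a feature extractor like the following. The category word lists here are tiny invented stand-ins; the real LIWC dictionaries are far larger:

```python
# Toy mix of theory-driven (LIWC-style category counts) and
# data-driven (raw unigram counts) features for one review.
# Both word lists are invented stand-ins for illustration.
SPATIAL = {"floor", "room", "window", "lobby", "upstairs", "near"}
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our"}

def features(text):
    words = text.lower().split()
    feats = {
        "n_words": len(words),
        "spatial": sum(w in SPATIAL for w in words),        # top-down
        "first_person": sum(w in FIRST_PERSON for w in words),  # top-down
    }
    for w in words:                                          # bottom-up unigrams
        feats["uni:" + w] = feats.get("uni:" + w, 0) + 1
    return feats

f = features("my room had a window near the lobby")
print(f["spatial"], f["first_person"])  # → 4 1
```

A learner like the SVM above can then weight all of these jointly, which is what makes the cues hard for a human reader to track consciously.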
>>: And what is [inaudible] in that type of application?
>> Jeff Hancock: Yeah, good question. So this was scraped from Trip Advisor.
It's possible that was fake. But according to our other estimate, we believe
that there's about a 94% chance that that's legitimate. And we sampled 10,000
of these and reran that with 400 at a time so we're feeling pretty good about
that.
This one was created by Amazon Turkers. We actually paid 400 Turkers a dollar
each to write this kind of review, across 20 hotels in Chicago, with a bonus of
another dollar if they could fool our human judges. So we had students in my
lab, and when they were judging these, they did actually a little better than
you guys. You guys did, really, spectacularly horribly. They performed about
58%, which is a little bit above that mean of 54%, but most of our humans did
very poorly on this.
Yes, you had a question?
>>: Well, maybe it doesn't matter because they seem very similar, but that
review isn't the same one as the next.
>> Jeff Hancock: Oh, is that right?
>>: It's missing the last line that had the repeated references to James
Chicago.
>> Jeff Hancock: Yeah. Well, that was unfair of me. Sorry about that.
>>: So this one, this one repeated it? Actually, the next one has a --
[multiple people speaking].
>>: The view of Lake Michigan.
>> Jeff Hancock: They could be different. I'm sorry about that. I mean, so
this is one we did the graphics on. This one definitely is the fake one. But
I will be more careful on that.
>>: One thing I noticed is the typos. Is that a technique, because definitely
is spelled wrong. So do they intentionally introduce things?
>> Jeff Hancock: Yes. So we actually monitor Amazon Turk and a few of these
other ones where the requests go in, and we watch. So now they have reacted to
our algorithm, which is really cool, because now they're getting Turkers to do
certain things at rates way higher than would happen by chance. And so the
algorithm gets more accurate. We're now doing studies where we train people
with counter-measures, and the algorithm does better on them than on the
people that weren't trained on counter-measures.
Because humans have a very difficult time acting, like, producing randomly
natural language. Does that make sense? So when you do genre analysis, you
can just break this down and it's just a series of frequencies and people
perform very badly at doing that naturally.
>>: Are these Turkers, do we assume that they're people who, like, go out
and do reviews on Turk for a -- because --
>> Jeff Hancock: Not necessarily.
>>: Because it could be a person that refers to the name of the hotel
three times as a marketing kind of --
>> Jeff Hancock: They do. In fact, a lot of times when they're instructed,
and this is before our research came out, a lot of the instructions would be:
mention the name so that it's clear that you were there. They would also tell
them to, you know, use really positive terms. So we see a lot of positive
adjectives. That's the simplest thing.
But the other thing, for example, that is very difficult for people to control
is, you look at the amount of first person singular in here. So "my," "I" --
there's plural as well with "we." So first person pronouns go way, way up,
which is actually the opposite of all of our earlier work. We think it's
because the person's overcompensating, trying to put themselves in there.
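That pronoun cue is straightforward to quantify as a rate of first-person-singular words per hundred words. A minimal sketch follows; the word list and the two example sentences are illustrative only, not the study's materials or thresholds:

```python
import re

# Rate of first-person-singular words per 100 words, one of the
# cues discussed. Word list and examples are illustrative only.
FPS = {"i", "me", "my", "mine", "myself"}

def fps_rate(text):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return 100.0 * sum(w in FPS for w in words) / len(words)

honest = "The lobby is on the second floor, next to the restaurant."
fake = "I loved it! My husband and I had the best stay of my life."
print(round(fps_rate(honest), 1), round(fps_rate(fake), 1))  # → 0.0 28.6
```

By itself this is a weak signal; in the model it is one feature among the 70 or 80 analyzed together.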
Also, when you're lying, you think about narrative. That's the way humans have
sort of evolved to lie. We think about the story. And so when you think about
story, you think about who and what. So they talk a lot about who they were
there with, and they talk about what they were doing -- so, more often,
business.
And as I said, these counter measure approaches are coming out, but
hilariously, it's actually helping us. Yes.
>>: Are there similar indicators you could use for, like, product reviews?
>> Jeff Hancock: Well, we're beginning on that. Product reviews, we think, are
going to be tougher. And -- pardon me?
>>: Resumes and cover letters seem like they would be --
>> Jeff Hancock: We've done resumes. We've done resumes. So we're working on
the linguistic part. Here's a quick question for you guys, and maybe this will
give you a sense of where I'm coming from. So the three types of research have
one thing in common. Where do you think people lie more often: a paper-based
resume that they would hand in, or a LinkedIn profile? More lies on LinkedIn?
Why?
>>: Paper.
>> Jeff Hancock: More lies on paper?
>>: LinkedIn, your friends who are your [indiscernible] might see it. And
then you would be embarrassed.
>> Jeff Hancock: Exactly right. So I have a piece coming out in
Cyberpsychology, Behavior, and Social Networking -- that journal with the super
long name -- where we show that actually the overall number of lies is exactly
the same. In the U.S., it's three on paper and three on LinkedIn. This matches
a whole lot of other stuff. In the U.K., it's lower. Seems like exaggeration
is more okay in the U.S.
But the types of deception get squashed. So on paper, the lies are focused in
previous responsibilities and skill-sets. On LinkedIn, they get moved down
into interests and hobbies. And we think it's because responsibility and
stuff, the network that you have can actually detect those deceptions. Your
former boss says actually, you weren't working here in 2010, right. Whereas I
love kite surfing. Well, guess who can tell whether I love kite surfing or
not? Me. No one else gets to decide that.
And so what happens is when they go, when these resumes go public, especially
public in front of a network, the lies still happen, they just get moved into
different types. And it turns out that if I say I like kite surfing, that
actually still has positive self-presentation effects because it means that I'm
a cool person that's active and I like to travel, say. So yeah.
I can't remember what the last point I was going to say was, but across the
three themes of research I was showing you, there's a belief set in the
population that things that are online are worse in some way. They're
deficient. And when we end up looking at it much more closely, it's -- this is
going to sound so academic -- but it's more complicated. In many ways, the
reason that social media, if you want to include e-mail up to the latest
platforms, persists is that it works. It provides people psychologically and
relationally valuable stuff.
I feel a bit weird saying that here, because you guys kind of work on that
stuff anyway. But hopefully to some degree, this is validating, and it's a
little bit of a push-back -- hopefully not in a Pollyannaish way -- against
this idea that it's hurting our lives.
So thank you guys very much, and happy to take general questions.
>>: I was wondering if you could go back to a comment about why emoticons
are a misnomer.
>> Jeff Hancock: Right. So it's for two reasons. The number one reason is I
don't think -- so it's broken down into emotion and icon. And in linguistic
theory, icon is a certain thing, which is a pointer to something, a physical
pointer.
I don't even want to get that technical on it. It means that what I'm doing
now is a representation or points to my current emotion. And when people send
smileys, they rarely mean I am happy right now. What they usually mean is
interpret what I'm saying in a certain way so they are helping frame or shape
the interpretation for that, or they're indicating like I got that you were
being funny in that last one. Does that make sense?
>>: Yes.
>> Jeff Hancock: You know, when I looked at emoticons -- and for a while there
I was asking people, when they send me something and they had an emoticon, I'm
like, hey, by the way, can I just ask you a quick question? How are you
feeling? And it never corresponded with the emoticon. And so --
>>: You need to invent a new word.
>> Jeff Hancock: I have one. It will never work. It's based on Herb Clark's
stuff. He calls those kind of things emblems. So they're emblecons. That
will never stick. But anyways, that's -- I know, right? Yeah. But that's
kind of what's happening is it's almost like nonverbal language, which helps
you interpret the meaning of the message.
>>: We'll help you spread it.
>> Jeff Hancock: Appreciate that. Thanks, guys.
>>: So you talked about three sort of meta, three small areas of presentation.
The last one being most of the [inaudible]. Of that section, how much of the
stuff you presented, would you say, is actually true?
>> Jeff Hancock: Could you clarify?
>>: So --
>> Jeff Hancock: Because it's lab-based, or because it's --
>>: No, the citations of the number of SMS lies and the existence of alibi
whatever dot com.
>>:
He thinks your talk is a deception.
>>:
I think you're lying.
>> Jeff Hancock: Meta deception. Very nice. I'd like to think -- I mean, I
could go back and go over it, but I'm going to go with 99% on that. There
might be one or two things in there that --
>>: Does [inaudible] count?
>> Jeff Hancock: Meta meta lying. Dang. Usually, I'm the one that gets to
work the audience over in that sort of way.
>>: You need to do another test.
>> Jeff Hancock: Right, we'll do another test on you later. Is there anything
in there that you have any concerns about?
>>: Yeah. I don't know, the first five or six examples.
>> Jeff Hancock: What was one of them, for example?
>>: The alibi service thing, like 75 bucks to say that you're calling a rescue
service? I dispute the number, at least that. Like, you were overly specific
in ways that I am skeptical of.
>> Jeff Hancock: I don't have internet access, but if anybody else would like
to go, you can go to alibinetwork.net, I believe. It's a real company. In
fact, I was on a documentary -- the reason I found out about them was the
documentarians came after they were in Chicago looking at the alibi network.
So it's a real one.
There is another one that was called Fake MySpace where you could rent friends
for 99 cents a month to make your MySpace page back in the day look better.
But when it got all the press, it folded.
>>: If it's a lie, he set up the website --
>>: I've heard that everything on the internet is true.
>> Jeff Hancock:
Very good, very good.
>>: So I just have a question on the work that you guys did on Trip Advisor
and hotel.com. Is that information being passed back to them to try to remove
those reviews? I'm an engineer, so --
>> Jeff Hancock: Absolutely, absolutely.
>>: We were talking about -- a browser plug-in for Amazon.com.
>> Jeff Hancock: Right, right.
>>: Read all the fake ones.
>> Jeff Hancock: In fact, if you guys want, for those of you that are
interested, we have a small fun website called reviewskeptic.com, and it takes
our algorithm and you can copy and paste in there. We have an API for it.
Like I say, we're talking with Expedia and have been working with them. Trip
Advisor too, although they're tough. This is very sensitive for them, because they
just lost this massive lawsuit. So they want to go through a lot of NDAs and
stuff before they look at what they call their plutonium, because they're
concerned about counter measures. They're very concerned about being perceived
negatively.
We thought they'd be excited by that, because there's absolutely no way that 20
percent can be there. Back to your point, you're an engineer. Yes. We have
developed the algorithm and we want to work with companies to use it to enhance
this sort of trustability of a lot of these reviews. Because if we're going to
go with reviews like this, amateur reviewing, it has to be trusted for it to
work.
>>: Seems like you weren't tying it to any user accounts. But if you add to
that, you see a high percentage of -- suspect reviews to block that user.
>> Jeff Hancock: Exactly.
>> Jeff Hancock: There's tons of interesting metadata there that they have
access to that we don't, which is one of the reasons we want to work with them.
We want the data. So we think we can take this linguistic information, and
that's one source, and you look at the IP, you look at other reviews. So, in
fact, if you look at people on Trip Advisor that have only posted once, then
you get this six percent, very rough, estimate of their deception.
And then if you take the people who have only reviewed once and you move them
out, the deception rate goes down. Because then --
>>: They build a reputation.
>> Jeff Hancock: That's right. So we've been talking with them and others
about how to do that. We think there's a lot of different ways. So, you know,
get them to describe what the hotel looked like in a sentence, then they can do
the review. And if we're right, then that should knock out most of the people
that have never stayed there. Because you can ask a question like where's the
restaurant in the W in Seattle? I've stayed in the hotel. It's impossible for
me not to know that.
>>: Is the lobby on the first floor or the second floor?
>> Jeff Hancock: Exactly right. Very simple. So we think these are things
that can be implemented and we want to work with any company that wants to do
that.
>>: What he's saying in regards to identity and reputation -- like systems
that actually include that, either for long-time reviewers or that identify
them in some way. Like you say, you could knock out everybody who's just done
one review and [inaudible] goes way down.
>> Jeff Hancock: Right.
>>: Is there any research into those systems, when you apply identity and kind
of truth to their reviews?
>> Jeff Hancock: Yeah, so there's a great situation now that's happening on
Amazon naturally. The Vine level reviewers. And Trevor Pinch, my colleague at
Cornell, he's a sociologist. He's interviewed these folks and it looks like
it's not -- so identity is not a panacea. So these folks can be compromised
without their knowing, even.
So the Vine program, these people get free products sent to them, and then they
provide reviews. But they're really good reviewers. They've done hundreds and
hundreds of reviews. So Amazon believes they're trusted reviewers. And
Trevor's argument is that biases can come into that, because they're getting
all this -- they're all getting all this product and you see very few negative
Vine reviews.
So the New York Times one that just caused the recent dust-up was a company
that on the surface wasn't doing anything illegal. They were selling Kindle
covers and then when you got the Kindle cover, it would say hey, we really want
good reviews or we want lots of reviews and they would refund the price of the
Kindle if you would write a review. And they never said we want a five star
review. They said we strive for five star service.
And when the New York Times found this out, they sent it to Amazon. Amazon
pulled the cover, and then several days later, they pulled the company's other
product, which weirdly was a stun gun. What company has Kindle covers and stun
guns? I don't know.
>>: [inaudible] like if you buy a car at the end of it, it's like here's the
review. One to five, anything less than a one or a five and I get fired.
>> Jeff Hancock: Right, right.
>>: That's for internal more than external.
>>: Sure, but it's still kind of the -- they're enforcing a bias, I guess.
>> Jeff Hancock: There's an incentive that's not for truth.
>>: Let's take one more question.
>>: I just have one more follow-up question. Did you look at the review scale
-- like, whether people were less likely to lie if they were giving, like, a
three-star review out of five, versus a one-star or five-star?
>> Jeff Hancock: We haven't looked at that yet. We looked at five-star reviews
and we've just finished looking at one-star reviews. We thought there would be
more in that space. Of course, now they're being told not to write five-star
or one-star reviews, which makes it harder to manipulate the reviews. We can
do around 85 to 86 percent for the negative reviews.
>>: I know there are studies that talk about the usefulness of reviews,
because the average user isn't going to log on and write a review for something
they feel okay about. They're only going to log on for something that they
feel really strongly about in one direction.
>> Jeff Hancock: Yeah, though there are people that just love it. They feel
compelled, and they enjoy reviewing after every stay. I was just talking to a
woman who I thought was -- she's super bubbly, et cetera, et cetera. And just
about anything she ever did in her life, she reviewed. So anyways, I'll end it
there. But thank you guys so much. It's been a really enjoyable experience.