
>> Jamie: There's a lot of great research going on right now in this space of gender diversity, and we're very fortunate to have a lot of great speakers coming over these five lectures. Our hope is that as we learn about this work we can think about ways to build on it and help Microsoft Research, and Microsoft in general, be a real leader in the field. It's our pleasure to welcome Eric Horvitz, who is the director of Microsoft Research, Redmond and a recent recipient of the Feigenbaum Prize, to introduce our two speakers.
>> Eric Horvitz: Thank you, Jamie. [applause]. Good afternoon. It's a real honor to help Jamie and Scott and Rane introduce this five-part lecture series on gender diversity. Thanks for the great work that the three of you have done. It's really nice to see this organizational focus. It's a
very important topic of deep interest to our lab's leadership, MSR leadership and it's also a high
priority for Microsoft's senior leadership team for the company worldwide. The goal of the
series is to bolster our understanding of the role of gender diversity in computer science and
other fields, of course. Many of us talk about gender diversity, and we have a shared interest in addressing these challenges over time, both gender balance and the other issues of gender in the workplace. But in reality, if you look at some of the literature in this field, most of us know
very little about the topic despite its importance in our lives, both our family lives and our work
lives. There is a body of research and a body of practice along with that research that we can
learn a lot from. It would probably take us a lot further, in ways that go beyond our daily intuitions about what we think is right and what actions we think would help. Some researchers have even shown that caring people can have uninformed biases and flawed understandings of the challenges and opportunities. The intent of this lecture series is to make us all more aware and to help us learn and grow. Over the next few months this series will be
bringing in several leading researchers like we have today and hopefully we'll be more informed
and innovative with supporting a diverse and inclusive workplace. Beyond being the right thing
to do, I believe that, speaking competitively, I mean selfishly, it's the most competitively energizing thing we can do for Microsoft Research and Microsoft more broadly in terms of our workforce and our life here. I've covered this in the past; we did a lecture series once on a breakout in creativity, and I dove into the literature and pulled out some recent and older studies that show the powerful value that comes with having gender balance. For example, there has been some well-recognized work, and just last week a study came out, one that's actually gotten some press, on the value of diversity for problem solving and creativity in teams. We will grow in innumerable ways as we address the bottleneck with women in engineering,
program management, research and even executive roles where I've heard over the years when
I've been on committees that we have had trouble retaining top executive female talent at
Microsoft. Today's first session will be on the topic of biases and stereotype threat. We have
Stephen Bernard and Catherine Ashcraft with us. Quick introductions to both of them and then
I'll ask Stephen to come up. Stephen is an associate professor in the Department of Sociology
and he is the Director of the Schuessler Institute for Social Research at Indiana University. His
interests include social psychology, gender, work and occupations, and conflict and conflict
resolution. His work has appeared in a range of publications including Administrative Science
Quarterly, The American Journal of Sociology and the Harvard Business Review. He speaks
widely on the topic of reducing bias in the hiring process and he's conducted lots of educational
outreach, and he's testified before the... Thanks for being with us. Catherine is a Senior Research Scientist at the National Center for Women & Information Technology, also known as NCWIT, and she's at the University of Colorado at Boulder. Before coming to NCWIT she was an
Assistant Professor of Multicultural Education and Director of Diversity Learning at Western
Washington University. She obtained her masters in communications and a PhD in education
from the University of Colorado. Her research focuses on issues of gender, diversity and technology, change in organizations, curricular reform, popular culture media representations, and youth identity, especially as it relates to race, ethnicity, gender, class and sexuality. There's a lot more in the bios of both of these young people. These are passionate folks, but let's call Stephen up to
begin sharing his reflections with us. Thanks Stephen for being here. [applause].
>> Stephen Bernard: Is my microphone working? Can you guys hear me? Maybe it needs to
be switched on? Yeah, switching it on actually makes it work. Thank you so much for having
me. I'm really, really excited to be here. I was a little nervous at first because my suitcase was
apparently not excited to come to Seattle. My suitcase was in Detroit the last time I checked,
but when I got here Rane reminded me that you are a tech company so you guys dress cooler
than most places and so I don't have to wear a suit. So that's nice. I also, I have a shirt that my
daughter did not spit upon yet, so it's perfect. I'm going to talk to you about implicit bias and
let me just give you a general picture of what I'm going to talk about and then we'll go into the
details. I'll start with just an overview of what the concept is to get us basically on the same
page about what this term means and how it's used. I'll tell you a little story to give you an
introduction to it. I'll talk to you about some of the evidence that these biases exist. These
things have been studied by lots of different kinds of social scientists. We see people in
psychology and sociology and economics trying to see if these biases are real. I'll talk to you
about some of the different kinds of data that they look at to assess those biases. I'll then talk
about how those biases actually work. I'm a big believer that if you understand the mechanism that
underlies something you can figure out how to turn it down or turn it off and so we'll think
about what is the content of these biases and how do they actually work. That will lead into a
discussion about how to reduce bias and think about what we can do to maybe make these
biases less likely to affect the way we make decisions. To give you a sense of what implicit bias
is, I'd like to start with a story. This is a story one of my students told me. I teach at Indiana
University. I teach lots of big lecture classes and I love my students. They're really interesting;
they're really fun and very idealistic. Sometimes a side effect of them being really idealistic is
that they don't necessarily want to believe that things like stereotypes or bias might affect our
behavior. They think that that happened in the past, like in the '90s, but that it doesn't happen anymore. [laughter]. I feel a little bad teaching students this, but they are very inspiring in a lot of ways. What gets them to think more about how these things affect their behavior is when
they have some personal experience with it. One of my students came up to me after class and
said, Professor Bernard, the stereotyping stuff you talked about a couple of weeks ago, it actually works. Sometimes they come up to me and say, I didn't believe what you said last week, but then I saw this video on YouTube and now I believe it. This was at least better than that,
but he was actually very introspective and I said okay. What happened that made you think
about it this way? And he said I'm taking this calculus class and it's really hard and so I decided I
needed a tutor for my calculus class and I went to the place where the tutors have their names
and their e-mail addresses so you can contact them and I looked at the list of calculus tutors
and I got about halfway down the list and I found a name I like and I was going to contact that
person. And then I stopped and I thought wait. Why did I get halfway down this page before I
found a name that I liked? Why would some names on this list look better to me than other
names? Can anyone guess why he felt drawn to some names on the list of calculus tutors more
than others? Those are two of the most common guesses, and those factors probably do matter, but it turns out in this specific case what it was is that a lot of the names on the list were either Russian or Eastern European, and they struck him as kind of foreign, and he felt a little less comfortable with them, and so he went down until he found what he viewed as maybe a more typically American sounding name, a John Smith kind of name from his perspective. And then, and I really love this student, he thought, wow. That is really not a very rational way
to choose a math tutor, so what would be a more rational way to choose a math tutor? And he
thought there's a list here. Maybe what I can do is just go down the list in order and contact
people and when I get someone who I like and I feel like I can learn calculus from I will stick
with that person. That's what he did. He adopted this more systematic way of going through
the list of tutors and he got a tutor that he really liked. It was someone from Hungary whose
name he had originally skipped because it seemed a little too unfamiliar to him and he turned
out to be really happy and he was really pleased with his choice of math tutors in the end. I like
this story because I think it really illustrates what this implicit bias is. What implicit bias really is
is an error in our decision-making process. To social psychologists, that's really all it is, and I think one reason why I run into this with a lot of students is that people get uncomfortable talking about bias
because if you read about bias, especially in the media, you get the impression that there's
these two groups of people and one group of people are kind of the bad racist, sexist people
and they do things we don't like. And then the other group of people are the good people that
never do those things, and we like those people. But that is a very foreign way of thinking
about this. To a social psychologist that is not really how we think about this. We really think
about this as just an error in the decision-making process. You're trying to weigh some
different options and you're trying to think about a decision, and somehow you end up weighing into your decision function some irrelevant information. That causes you to make a less than
optimal decision. We really think that is an error in our decision-making process and it's an
error that is correctable. Another thing the story illustrates is that this can really occur without
our knowledge. When my student went to pick a tutor, he was thinking I really need a calculus
tutor so I don't fail calculus. He wasn't thinking I really don't like Hungarians. He doesn't have
any kind of explicit bias about people from Russia or Eastern Europe, and in fact, he is a student
that likes to come to my office and chat about things. He told me in another meeting that
actually some of his family in a previous generation emigrated to the U.S. from Eastern Europe
and he was really proud of them and was proud of everything they had accomplished. He
actually didn't have anything explicit against Eastern Europeans. If anything, he liked them
maybe more than other kinds of Americans. Implicitly there was something about those names
that seemed off to him and he wasn't as drawn to them. What this illustrates is that this can
occur without our knowledge even if we don't think the stereotype is true. Even if we don't
think the stereotype is accurate it can still affect our behavior. This fact, I think, is the most
fascinating about implicit bias to me because it means that these are biases that we can also
apply to groups that we are members of. This stuff is also sometimes talked about as maybe
something that just men do to just women, but that's not really how these biases work. These
are things that anyone can be susceptible to, and they are ideas that are present in our culture
and so they can sometimes shade the way we make decisions even if we're not aware of it. The
final thing I like about this story is that it illustrates that a lot of these biases are correctable, so
they are biases in our decision-making process, but there are ways that we can reduce the
likelihood that we're taking in that irrelevant information, so that we can make better
decisions. I wanted to start this way just to give you a sense of what these implicit biases are,
so maybe we're all in a similar place at the beginning. I'll come back more to this definition and
talk a little bit more about how they work, but what I would like to do now is just shift gears a
little bit and talk about how we try to measure these biases or how we actually know that they
exist. This is a popular topic across a lot of the social sciences. As you probably know, across the social sciences there are very, very different kinds of methods that get used, and I want to show you
a few different ways that we try to triangulate whether these biases exist by using different
methods to hone in on them. One thing you might want to do if you wanted to see if these
kinds of biases actually exist is you might want to see how scientists are viewed by the people
that manage them. There was one study where they surveyed about 3000 scientists and
engineers. They all work in R&D in 24 of the largest corporations in the U.S. In addition to
surveying those 3000 scientists, they surveyed their managers and so they surveyed about 700
managers. They asked the managers to rate the scientists, so rate the quality of your
employees. They found that when they asked the managers to do this, they tended to rate the
male employees more positively than their female employees. On the face of it you might think
there could be a lot of reasons for that. Is it possible that maybe the male employees are more
productive than the female employees? Maybe that's why they're being seen more positively
by their managers. What the researchers tried to do is they tried to account for all kinds of
different ways that the managers might be evaluating the employees. They measured, for
example, how many patents the employees had, how much education they had, how much
experience they had, how many publications they had. They even did things like give the
scientists personality tests so they would measure who really likes to take charge on a research
project and who is really good at getting along with other people. Even controlling for all of
those factors they found that the managers tended to rate the male employees more positively
than their female employees. That would be one way that you might try to measure whether
bias exists. One approach might be to look at how managers rate their employees. Another
approach you might take is to ask scientists what they actually experience in their careers: just ask scientists what kinds of things have happened to them in the course of their careers. There was another study, this one with about 700 people, where they
surveyed only people who had won prestigious postdocs. They looked at people who had won
prestigious, mostly NSF and NRC postdocs and they focused on this group because they thought
this is really an accomplished group, so let's see what people who are among the most
accomplished scientists have experienced in their careers. One of the questions they asked
them was have you ever experienced discrimination based on your gender in your scientific
career. What they found when they surveyed the scientists is that about 73 percent of the
women and about 13 percent of the men said yes to this question. A couple of interesting
things you might notice about these numbers is that the number of women is very, very high.
Most women scientists in the sample reported experiencing discrimination. Another thing you might notice is that the number for men is much lower, but it is also not zero.
These are both interesting findings that we can talk about. Sometimes you might assume that
gender is something that never affects men, but gender can actually affect men too in interesting ways. We'll talk about both of these numbers, why this number is so high for women and then
what might be going on for that 13 percent of men as well. There are a couple of ways you
might try to measure bias, but what you would really like to do if you wanted to know if bias
exists is you would like to find out how exactly the same person would be treated if that person
was, for example, a man or a woman. If you could take exactly the same person and wave a
magic wand and turn that person from a man to a woman and see how the same person with
the same behaviors and accomplishments and education, if that person is treated differently
than that would be good evidence that this kind of bias exists. As social scientists we don't
have a magic wand like that. Maybe you guys are working on one in one of the labs in the basement, but we don't have one, so what we use instead are experiments. We do things, for
example, like give people two different resumes or the exact same resume and for one group of
people we'll put a man's name at the top of that resume and for another group of people we'll
put a woman's name at the top of the exact same resume and we'll see if they are evaluated
differently. There are a lot of studies like this in the social sciences and I just want to show you
a couple of them. One of them is a study that was conducted in psychology. I like to use this
study that was conducted in psychology because psychology is the discipline that invented the
study of stereotyping and discrimination. If there's someone that should know better, it's psychologists, and so I like to pick on psychologists, because if you like the Richard Feynman thing that science is bending over backwards to prove yourself wrong, looking to see if the people who invented the study of bias and discrimination are sort of discriminatory is one way to do
that. What they did is they took the directory of psychology faculty and they took a random
sample from this directory of all of those psychology faculty in the U.S. They sent them CVs or
academic resumes and they asked them to evaluate the CVs. They said please take a look at
this person and see if you think they could be hired into your department. If they could be
hired what you think the appropriate salary would be and then please evaluate them on a
number of other dimensions. They varied two things about this resume. This was an
experiment with two different variables that they were manipulating. One variable is just the
name at the top of the resume, so half of the psychology professors in the sample got a resume
that said Karen Miller and half of the psychologists got a resume that said Brian Miller. They picked these names because there weren't any real psychologists practicing under these names
at the time the survey was done, so they weren't going to be like oh I know this person, but it
does clearly signal a man versus a woman's name. The other thing they varied is whether the
person was earlier in their career like someone who had just finished their PhD and was looking
for a faculty position, or someone who is in their midcareer and the midcareer CV was an
exceptionally good midcareer CV, so someone who was highly, highly accomplished, with lots and lots of grants and publications and presentations. Let me first show you one of the findings from the early career CV. This is how hirable the person was seen to be, for the early career resumes, based on the name at the top. If you are having trouble reading it, the tall bar is when the CV is named Brian. Brian was seen as hirable to the department about 73 percent of the time.
The shorter bar is Karen. Karen was seen as hirable about 45 percent of the time. Again, this is
in the field that studies this problem for a living. This is a pretty remarkable difference just
based on the name on the resume. Did you have a question?
>>: Do you have the split on the gender of the people that were doing the hiring? Because the bias could come from male or from female raters and we don't know from this.
>> Stephen Bernard: I don't show that on this slide. They do talk about that in the paper and
they didn't find any difference between the male and the female raters. That's actually quite
common. We often don't see a lot of differences in terms of how men and women rate other
men and women. Sometimes we do, but in a lot of cases they are not that different. Was there
another question?
>>: What is the gender distribution in the psychology departments? Was it more females evaluating males or the other way around?
>> Stephen Bernard: Psychology is an interesting field because it's broken up into a lot of
different subfields and they vary a lot on their gender composition. Social psychology is more
female. Biopsychology is, I'm actually not sure. Psychology has changed a lot in the past 15 to 20 years. It's tilted more female, and I'm not totally sure what the gender composition of biopsych is now, but another thing that they do in this study is, in addition to just looking at whether the rater of the resume is male or female, they break it down by what field the rater is in, so whether they are in biopsych or social psych or cognitive. They didn't really find big differences across
the subfields of the rater. It seems like at least for this study, subfield differences didn't make a
huge difference.
>>: What are the standard errors in this?
>> Stephen Bernard: I didn't put the standard error bars in the graph. The total sample size is
about 240 people, but there is a lot of variation. There is a lot of variation. We do see that the
differences are statistically significant. There is no overlap in the confidence intervals if that
makes you feel better. That's why it's fun talking to a more research oriented audience. This
finding a lot of people find a little bit surprising. Maybe they find it a little depressing. But I did
want to show you a less depressing aspect of this study, or at least somewhat less depressing
aspect. It's also illustrative of when we think about reducing bias, and that is they didn't find a
lot of differences in the ratings of the more experienced person, so the second set of CVs they
sent out was a very highly accomplished midcareer person. They didn't find any significant
differences in the way those CVs were rated by gender, so they didn't find that the female CVs
were less hirable or seen worthy of being paid less. The only thing they noticed which is a little
interesting is that on the female CVs people were a lot more likely to write something in the
margins. They started counting the margin notes, and raters were four times more likely to write what the researchers called cautionary statements on the experienced female CV versus the experienced male CV. Cautionary statements were things like this person looks good, but I
would have to see some evidence that she published these papers without someone helping
her. People were a little more skeptical of the female CV, but overall there was less bias in the
more experienced set of CVs. I think this is useful because it helps us start to think about what
are the ways we can use to try to reduce stereotyping. This illustrates that the times we start to fall back on stereotypes are times when we need shortcuts, so when we're tired, distracted, rushed, or pressed for time. Those are all times when we are more likely to fall back on stereotypes as a shortcut. Another time we might be more likely to fall back on stereotypes is in more ambiguous environments or low information environments. With early career individuals, we tend to evaluate those individuals more on potential than performance, just because there's less performance to go on compared to someone who is another 15 years into their career and
they have more of a track record. Thinking about stereotypes as a shortcut will be useful later
on when we think about how we try to reduce some of these biases. We know that if your
name is Karen versus Brian, people might look at your resume differently. What if your name
is say Lakisha versus Emily or Jamaal versus Greg? We've also done these kinds of studies for
race. This is a study that was done by some economists. It was conducted about 10 years ago
but it kind of bubbled up in the news again in a few places so you might have seen it. This is
Marianne Bertrand and Sendhil Mullainathan's paper. What they did is they applied to about 5000 jobs in Chicago and Boston. Unlike the last study, where they were sending these to psychology professors and saying how would you evaluate this person if they were applying for a job, in this case they were applying for real jobs. They made fake resumes with real voice mailboxes and telephone numbers and e-mail accounts and they applied for jobs. What they are measuring is
who gets called back for an interview. That's what they are measuring. They looked at a really wide range of industries. It's a really fascinating study
where they looked at all kinds of alternative explanations. If you are interested in this topic you
might want to take a look at this paper. It's pretty interesting. What they do in this study is
they systematically vary white versus African-American names, and the way they did that is
they took a random sample of birth certificate data and they looked at names that were
statistically more common for either white or African-American individuals who would be the
same age as their fake job applicants. What they found was a pretty similar pattern. If you had
a name that was statistically more likely to be a white name, you were called back for an
interview about 10 percent of the time. If you had a name that was statistically more likely to
be an African-American name you were called back about 7 percent of the time. Again, there is a substantial increase in the callback rate for the white name resumes versus the African-American name resumes. These are, again, the same resumes. These are a couple of studies.
There are a lot of studies like this in the social sciences. In case you guys are interested I put a
list of seven or eight more studies people tend to find interesting as sort of appendix slides. I
don't know if we can make the slides available somehow, but you guys could check those out.
What I'd like to do now is talk a little bit more about what implicit bias is, how it works, what
are the content of those biases because that will lead us into thinking about how we reduce
those biases. The thing I really want to emphasize is that implicit bias is a specific case of a more general process. It's a specific case of a more general process. That general process is
that our brains tend to develop associations between concepts that we see paired more often.
If the concepts are paired or used together frequently, we often develop an association
between those concepts so that when we think of one the other concept tends to become
more available in our working memory so we are more likely to use it when we make decisions.
Let's say, for example, I say to you the words peanut butter and. See, how easy was that? Your
brain automatically fills in the word jelly. Or if I say rum and. All right. So you guys also like to
have a good time. This is great. It's really easy. Your brain automatically does that. And
generally it's pretty good that your brain automatically does that because that means when you
want lunch, well maybe this doesn't work here at Microsoft because you guys have a really nice
café. But if you teach at a state university and your lunch comes from the refrigerator, you
might go to the refrigerator and think like what do I have? I have peanut butter and, you know,
automatically jelly comes to mind and you have a plan for lunch. You don't have to
systematically go through every item in the refrigerator and say, okay. I have peanut butter.
Could I have it with salmon? No, that's weird. What about more peanuts? No, that's
redundant. Your brain is solving a useful problem for you. The problem is sometimes we tend
to have these overly general associations with not just objects but also people. I'll talk in a
moment about where some of those associations come from, but these implicit bias or implicit
associations when we developed these associations we think groups and traits are feelings. A
lot of times people might associate women with being kind and nurturing and warm and might
associate men with being strong and assertive and leader like. And you can probably see right
away that these are pretty dramatic over generalizations. You could probably think of a lot of
men who are also kind and warm and nurturing. That's probably most of the men you know
and you can think of lots of women who are also strong and assertive and leader like. When we
start to develop these associations, they can start to bias our decision-making processes in
interesting ways. The thing that makes these implicit is that they are not under our
conscious or direct control. When I said peanut butter and, you weren't like, what is he doing?
You automatically thought of jelly. Jelly just came to mind. You didn't have to want to answer
it. It just happened. This can happen often unconsciously because these things are so deeply
encoded. Some people wonder why is it that this can happen unconsciously. One theory for
this is that it happens so early that we develop these associations often before we have the
cognitive capacity to realize that maybe they are stereotypes that we don't agree with. For
example, I have a little daughter. I listen to a lot of children's music these days. Most of the
female characters in the children's songs that I have, I need to go shopping for more children's
music, but the female characters are mostly moms and the characters that are doing active
leader like things are so far 100 percent male. There's one female zombie character in one of
the songs, but overall they are like 99.9 percent male and they are almost all animals. It's not clear to
me why all the ducks and frogs and roosters all have to be male. So from early on we're sort of
shaped by these kinds of influences that we see without even being aware of it and so I have to
like loudly sing over the music to change the pronouns. [laughter]. As a result, because this is
implicit it can happen even when we disapprove of the stereotype. We probably all have had
the situation where you realize maybe a stereotype affected your judgment, like maybe you
met someone and you found out they had a stereotypical male occupation and it was a woman
and maybe you were surprised. And you think why am I surprised. Crap. That's why I'm
surprised. These things sometimes affect us without us being aware of it even if we don't agree
with the stereotype, and that's why I think it's good for all of us to think about it. I'm not
denying that there are people out there that are explicitly sexist and racist. For sure those
people exist, and you have probably met them, but I think for most people it's not what we
need to think about. We need to think about these more subtle biases that might affect our
behavior even if we don't want them to. What I'd like to do is give you a sense of what the content of a lot of these biases is. If we think about the content we can be more aware
of them when they happen and we can also think about how we might try to reduce them. One
of the most common types of content for stereotypes are stereotypes about competence. And
women in particular are often stereotyped as less competent especially in traditionally male
domains. The way these stereotypes about competence start to break down is if a task is
stereotypically male like say coding or barbecuing, people often have this sort of implicit
assumption that men are better. If it's a general task and there is no gender typing, people also
assume men are better. We make up fake tasks all the time in lab experiments and people tend to assume that men are better at them. If a task is explicitly or stereotypically female,
people tend to assume women are better, so sewing, most people would assume women are
better. One of my favorite recent examples is a Computer Engineer Barbie. Did you guys see
this? So Computer Engineer Barbie is amazing. She is a computer engineer and she can't code
and she also cannot e-mail an attachment without crashing not one but two laptops.
Fortunately, her friends Brian and Stephen are there to write all of her code for her and fix her
and Skipper's laptop. Again, you wonder where these implicit biases come from. Mattel's
response is that was 2010. That doesn't represent who we are as a company anymore. Mattel
was a company formed by three people, two men and one woman and Mattel comes from
Matt and Elliott which were the two men's names put together. I don't want to pick on Mattel
too much, but I do like Computer Engineer Barbie. The stereotype that women are less
competent comes out a lot in task groups so it's less of a problem in social situations. It's more
of an issue when people are trying to complete an important task together. In those kinds of
settings women often don't get as many chances to speak, and they are interrupted more often. They tend to have less influence on tasks, so the same suggestion from a woman is less likely to be listened to than an identical suggestion from a man. We do experiments where we vary who makes the suggestion and see if people listen to it or not. Women's performances in
general tend to be evaluated less positively. This is obviously bad for women. If women are
systematically perceived to be less competent, this can affect women's careers in a lot of ways.
But it's also actually really bad for teams. Let me give you an example of how this is bad for
teams. There was a study where they asked individuals to complete a task called the Australian
bushfire survival task. This is sometimes used as a teambuilding exercise. Did you guys ever do
this as a teambuilding exercise? Yeah, so Rane has done this. What you are supposed to do is
get together in a group and come up with a strategy for surviving an Australian bushfire. There
are right and wrong ways to behave in an Australian bushfire and so this is something that can
actually be objectively scored and some people are better at this task than others with no
experience. What they did is before they broke people into groups to have them work on this
task, they had everyone try the task by themselves. Try to solve this problem on your
own and then they scored it and then they put everyone in groups. People didn't see their
score before they were put in groups. The researchers could know who in each group was actually the best at the task, because they could see how well each person performed by themselves; they could see who was the most expert person on the team. What they found was that when the most expert person in the group was a man, the groups tended to do pretty well, and that's
because the teammates would listen to this person. The expert would have a lot of influence
and as a result the team would perform really well. This is really logical. If there is someone on
your team who is really good at solving a problem that you are trying to solve and you take that
person's advice you are probably going to be good at solving the problem. But what they also
found is that when the most expert person on the team was a woman the team tended to react
quite differently. They would tend to not take her suggestions. She really didn't have much
influence over the group and as a result the team would not perform very well. Again, this is
very logical. If you have someone on the team who really knows what they are doing and
knows exactly how to solve a problem you are trying to solve and no one takes their advice you
are less likely to perform well on that task. It's also bad not just for women. It's bad for team
performance in general. Another side effect of women sometimes being seen as less competent than men is that women are sometimes held to different standards and the
standards can sometimes shift around in really funny ways. There was another one of these
resume studies where they gave people two resumes to evaluate, a man's resume and woman's
resume and they said here is a man and a woman's resume and here's a job description. Look
at these and tell us who you think is more qualified for this job. In one condition of this study,
the man had more education and the woman had more experience, and in the other condition it was flipped: the woman had more education and the man had more experience. That's a pretty normal situation that you might run into in a lot of hiring situations. One person maybe has a more advanced degree or more postdoc experience. Maybe another person's degree is not
as advanced but they have been in the workforce longer. What they found is that people
generally chose the man as the more qualified applicant. And when they asked them to
explain why they chose the candidate they chose, they tended to choose not just the man, but
they would also choose whatever criteria tended to favor the man. When the man had more
education they would say education is the most important criteria for this job so I just chose the
candidate with more education. And when the woman had more education they would say
well, you know, experience is the most important criteria for this job so I chose the person with
more experience. This happens not just in these experiments that we run, but it happens in
real world settings. There was a study, you might have seen this. This was in Nature. The
Swedish Medical Research Council has this competitive postdoc fellowship and you submit your
applications and your CV and they can look at your grants and publications and a peer review
panel looks at your application materials and they give you a competence score. Social
scientists love this kind of stuff because we have an input and an output. We have an independent measure and
we have a dependent measure. We have this competence score and we can measure how one
affects the other. Researchers looked at the data and what they found is that women had to
publish about 2 1/2 times more papers in order to get the same competence score from the
peer review panel that a man got, so 2 1/2 times more papers. Maybe you guys publish papers
faster than I do, but it would take a long time to publish 2 1/2 times more papers than someone else
who was my peer. Why would this happen? One explanation as to why this happens is that if
we have this implicit expectation that women might be less competent, we need to see more
evidence before we are convinced. So if you are skeptical about something you need to see
more evidence. This might be a case of people being a little bit skeptical and then needing
more evidence. I have a good boxing story about this too but I don't think I have time to tell it
so I'm just going to move on. We often try to think about how to solve these problems. Let me
think about one common solution for a minute. One common solution and I think it's a very
intuitive solution is that maybe women should act more like men. Maybe the problem is that
women aren't assertive enough. Maybe they aren't confident enough, so maybe women
should be more assertive and self-promoting at work so they get recognized. It turns out there
is sort of a catch-22 for women, especially in traditionally male-typed jobs, that's a social psych term, so jobs that are traditionally held by men. Here's the catch-22. When women are
assertive at work, when they are self-promoting, when they trumpet their accomplishments
even when they're really successful in traditionally male type jobs, people recognize their
competence and their competence does seem to be recognized in this situation, but they also
tend to be disliked. They tend to be seen as pushy, hostile, domineering, selfish and because
they're seen in all of these negative ways, they also tend to be seen as less hirable. If you think
about who you want on your team you like someone who is competent, but you probably also
want someone who can get along with you, can get along with others and who is not going to
destroy the team dynamic that you have worked so hard to build. When women are assertive
they are also seen as having these negative qualities. Now when women aren't assertive, when
they don't self promote or trumpet their competence they aren't disliked in this way. They
tend to be seen as very warm and friendly and nurturing. Problem is they do tend to be seen as
less competent in those situations. There's something of a Catch-22, or what is sometimes
called the double bind for women where when women are assertive they tend to be disliked,
but seen as competent, but when they are not assertive they tend to be liked but seen as not as
competent. There's a trade-off that's really tricky in terms of a career where you need to be
seen as both competent and at least somewhat likable. This wouldn't really be a gender bias if
people saw men in the same way, but it turns out when men are sort of self-promoting or when they trumpet their own accomplishments, that tends to be good for men's careers. Men don't seem to be penalized. That's just acting how you would be expected to act in a traditionally male
occupation. One thing you might think is what about men? Do these kind of gender
stereotypes ever cause problems for men? Again, I point you back to that survey of scientists I
mentioned at the beginning where they found that about 73 percent of women and about 13
percent of men report experiencing gender discrimination. The number is a lot higher for
women, but it also is not zero for men. What is going on for men so that that number is not zero? There are a couple of things we can think about, and I think there is one factor that reconciles both of these facts: that women tend to be disadvantaged by these biases more than men, but there are still some men who might experience disadvantages. That
is the fact that the stereotype of men tends to be that they are ideal workers, that men are
assertive. They are leader like. They are competent. They will work as many hours as you want
them to and that is why men tend to be favored in a lot of these kinds of settings. The
challenge is a lot of times when men depart from that stereotype they tend to be seen very
negatively. There's been a lot of interest in the past five or six years on flexible work
arrangements and how asking for flexible work arrangements might affect men and women.
There's evidence that both men and women can be penalized sometimes when they ask for
flexible work arrangements or let's say you wanted family leave to take care of a parent or a
spouse or a child. That can penalize both men and women, but there's at least one study that
finds men in general tend to be penalized more than women when they request family leave. They
tend to be seen as worse organizational citizens or people that care less about the organization.
Interestingly, while I mentioned that men and women seem to hold a lot of these biases at equal levels, this is one of those cases where men do it more. It seems like there's something about men taking
family leave that men don't like. This is one study, so I'd like to see more studies come out to
examine this, but I think we can think about some real world situations. At least one man I
know talked about taking family leave at work and he was made fun of so much that he decided
not to. There's some reasons to think that this can be real. Another thing is when men are
modest and deferential at work they are sometimes perceived as unmasculine or not manly or not dominant enough for the workplace. So sometimes men who are more deferential or modest can be penalized more than modest women, mostly in terms of how likable they
are. Even though most of these biases accrue to women in the workplace and we see larger
disadvantages for women, I think it is maybe better to think of these as a broader cognitive system that shapes the way we see both men and women, and it can be limiting
for both men and women. With that in mind I'd like to think about how we can reduce some of
the influence of these implicit biases. What I'd like to do first is mentioned some general
principles and then turn those general principles into some specific ideas. You might be
thinking how do I limit the influence of something that can happen unconsciously, and I don't
necessarily know that it's affecting my behavior. I agree it's very challenging to limit the effect
of something that is unconscious. Here are some ideas. They stem from the fact that we tend
to rely on these stereotypes when we need mental shortcuts. To reduce the effect of these stereotypes on our behavior, we need what are called cognitive resources,
which is basically just mental energy. You can imagine at the end of a long day maybe you go
home and you don't want to work on a complex problem. You want to have a drink and watch
something stupid on TV. That's the case where you are low on mental energy. You don't have
as many cognitive resources. Also, motivation. You also have to want to do this. Think about,
again, my student choosing a calculus tutor. When he looked at the list of calculus tutors he had to
be motivated to stop and say why did one person on this list look better to me than the others,
and he had to take the time and effort to think about a better strategy. It takes time and
motivation. We're more likely to use stereotypes when we're tired, when we're distracted,
when we're rushed. Those are all cases where we don't have a lot in terms of cognitive
resources. You are probably immediately seeing the problem here. How often are you tired,
distracted, or rushed at work? This is the way most of our workplaces are most of the time. I was
telling Rane and Catherine at lunch that I went to this conference on how to make work more
family friendly and less incredibly tiring and rushed all the time, and the whole conference was
really rushed and they were bringing us snacks at the tables so you don't have to get up and I
was like you guys are the ones teaching us how to do this and you are not even letting us -- so
this really is really hard, but still it's an important principle. Let's see if we can build things into
our processes that at least make it less likely even if they don't make it easy. Another thing
that's really interesting is we tend to be more likely to fall back on these stereotypes when
we're angry or when our feelings are hurt or when there aren't consequences for our behavior.
Have you ever gotten in an argument with someone and at some point in the argument in the
back of your mind you start to realize that you are the one who is wrong? But you keep arguing
anyway. When we're upset there are times when we aren't good at being introspective about
our thought processes. So it's also good to think about this aspect. In academia, one way this
affects us is students tend to rely on race and gender stereotypes of their professors more
when their professors give low grades than when they give high grades. Everyone likes people
who give high grades, but once you start giving low grades things change. There's a lot of
stereotypes that come into play in that case. One approach, and that's kind of what we're
doing now, is learning about bias and so learning about this research on implicit bias and how
bias works is one way to think about how to reduce the effect of this bias on your behavior.
There are, for example, some studies that show -- have you guys done the implicit association
test? Some of you guys have. There actually are studies showing that people can, over eight weeks or so, see actual reductions in their IAT scores by doing certain kinds of exercises. Not everything that is called diversity training will necessarily do this. There's a lot of
stuff out there called diversity training and it might take different forms, but at least learning
about implicit bias seems to be effective. It does require motivation. If you drag people to the
stuff and they really don't want to be there you can sometimes get a backlash effect, so it does
require some motivation. Another approach, and this is something that I think is a little bit
easier to build into a team concept, is something called accountability. That means a lot of
things in a lot of different settings, but in social psychology what accountability means is just
expecting that you are going to explain your decision making process to someone else. If you
are going to have to explain to someone else how you made your decisions, that can reduce the
likelihood that stereotypes influence those decisions. I'm going to skip the description of the
study for time reasons, but there was another resume evaluation study where there was
an accountability condition and a no-accountability condition, and the accountability condition had zero gender bias, and so that's very encouraging. Why does accountability work? When you
think you are going to have to explain your decisions to someone else, you tend to think more
about those decisions and do a better job of reasoning through why you are doing what you are
doing and it makes you less likely to take these mental shortcuts that lead to stereotyping. If
you look at the accountability research literature, it's a huge literature and there's only a small
corner of it that is about stereotyping. That's because taking the time to think through what you are doing and imagining how you would explain your process to someone else is really good for lots of different kinds of decisions. It's something you see talked about a lot in
organizational behavior research outside of the stereotype literature. A lot of things make
accountability work better or worse. One is being accountable during the decision-making
process, not just after the final decision has been made. As humans we're really good at
coming up with post hoc rationalizations for our behavior. If you've already made the decision
and someone asks you to explain it, you can definitely come up with a reason. But thinking
about that during the decision-making process is more helpful. Another approach is something
called transparency. Transparency is just agreeing on the standards of evaluation before you
evaluate the candidates. Maybe you have been in a hiring meeting, or talking about which intern you should hire, and you notice that maybe different pros and cons are brought up for different people. People aren't always evaluated according to the same benchmarks, and the things you might use are things like education versus experience, or performance, what you
have already accomplished versus potential for the future and they are all valid criteria, but
sometimes we shift those criteria around in such a way that they are not equally applied to
everyone. In that education versus experience study I mentioned, where they chose the man regardless of whether he had more education or experience, they did another version where there is no gender bias at all: you have people state which criterion is most important to them before they see the resumes. Knowing what criteria you are going to use and holding
everyone to the same bar is really useful. There is a Deloitte and Touche self-study that they published in the Harvard Business Review where they found that they tend to evaluate their male consultants based on their potential and their female consultants based on their performance. Early in your career you have a lot of potential but not as much performance, so you can imagine
how that could be disadvantaging. A few other ideas about hiring effectively and these might
work slightly differently in different organizations. Different organizations organize their hiring
differently. One is being clear about the parameters of your search, so transparency and being clear about what the evaluation criteria are, that's important. But it also means doing things like, in my field, for example, we have subfields that are mostly male, some subfields that are mostly female and some that are more gender balanced. If you are not careful and you write an ad that only advertises in your mostly male subfields, you will probably get a mostly male applicant pool, and if you have broad hiring needs, maybe you don't need to do that. Another one that
falls from the fact that we tend to fall back on stereotypes when we are low on time: try not to
read those 60 resumes an hour before the meeting. That's when we are more susceptible to
bias. We're busy and so this stuff is hard. I'm not saying it's easy, but allowing sufficient time
can be really helpful. Encouraging everyone to contribute -- do I have another minute or so?
Two minutes, okay cool. I promise I will only take two minutes. I mentioned women are often
seen implicitly as less competent and given fewer opportunities to speak, and so one thing you should do is find ways, structurally, in your organization to encourage everyone to contribute. You
might, for example, have everyone e-mail feedback before the meeting. You might make sure
everyone in the meeting talks. In my lab meetings I like to have the undergrads talk early on
because sometimes they get more intimidated by the grad students and don't want to talk if
the grad students go first. But they can have really, really good things to say and so it's good to
structure the discussion so you make sure everyone contributes. The final thing I would
encourage you to do is to critically analyze the supporting materials that you get, for example, letters
of recommendation. I mostly talk about how all of these biases work in an organizational
context, but they can really come in anywhere along the pipeline. There is a study showing that when
people write recommendation letters for women they are more likely to come up with or
mention irrelevant personal details. They tend not to do that as much for men. Having
irrelevant personal details in the recommendation letter might make you seem a little less
professional. Think about how these things might have affected materials before they get to
you. I'll just wrap up there. I thank you so much for having me. It's so exciting to talk to you
guys. I think we might have time for one or two questions.
>> Eric Horvitz: Yeah, let's take one or two questions real quick. [applause].
>>: I used to work at a British [indiscernible] that had a policy of getting a [indiscernible]. They were all handled by recruitment and they removed any gender or age or race related information. In effect they removed the opportunity for gender bias. In some respects it is not training the workforce to think about these things, but it meant that the pool that typically came for
interview was incredibly diverse, much more diverse arguably than would have happened if
they had allowed the CVs to be seen earlier. Is that good counsel or is that something that you
would push against and say no? You need people in the process recognizing what's happening
at least?
>> Stephen Bernard: That's also how I graded my students' exams. There's a cover sheet so I
don't see anyone's name when I grade them, so I think there are a lot of advantages to those
kinds of approaches. I think in a lot of workplaces people say it's quite challenging to
implement and one of the reasons is because a lot of times we hire through social networks.
We tend to hire people we know or we get calls from people or maybe there are people who
are interns for us and so I think there can be advantages in trying to blind yourself to that
information. I also think that in a lot of workplaces it is challenging to actually pull that off.
>>: We did it over in another group with our interns and we got an incredibly diverse pool but
every now and then some information would creep through like Eagle Scout, a dead giveaway.
>> Stephen Bernard: It's tough to scrub that. And some of those are so culturally dependent
too. You might not know what signals what, but actually, interns are probably a good example
because hiring for a higher-level position, certainly there is a social network component, but
maybe that's less true for interns.
>> Eric Horvitz: Okay we have time for one more question.
>>: What does the literature say about the psychological roots of the competence biases?
>> Stephen Bernard: The psychological what?
>>: The roots of the competence biases. Is it all things that we see in childhood? That seems strange to me. Or is it something else at play there?
>> Stephen Bernard: The argument that it stems from these childhood exposures is in some
ways hard to test because it is assuming a lot of stuff that's hard to observe by the time you
actually get someone who is in your lab studying. But there is evidence. For example, you can
reduce the biases by exposing people to more of what social psychologists call counter-stereotypical examples. Basically, if you expose people, for example, to very successful women, then that can reduce their stereotype that women are less competent. It suggests that
even if it does happen earlier on you might be able to have more proximate influences that can
reduce it.
>> Eric Horvitz: Let's thank Stephen one more time.
>> Stephen Bernard: And thank you so much. This has been so much fun. [applause]
>> Eric Horvitz: We're going to have some time for joint Q&A after Catherine's talk and then
also both of our speakers are going to be at our wine down today at four, so you can corner them, I mean, find them [laughter] then and ask more questions. Without further ado, Catherine,
take it away.
>> Catherine Ashcraft: Thank you. Can you hear me okay? Is my microphone on? Thank you.
I'm also really excited to be here to kick off your diversity lecture series, in particular on what I
hear is a special weekend. There is some kind of big sporting event going on. You may have
noticed that I'm from Boulder. I actually live near Denver, and we are just trying to forget that that last Super Bowl even happened. It's a little difficult to explain: you're going where? Which
weekend? But the good news is I actually lived here for three years, so I'm secretly rooting for
you. Just don't tell anybody back home. As you can see from the title of the talk, I'm going to
be talking about two different but related kinds of things and drawing on different bodies of
literature in doing so. First part of the talk is going to focus on stereotype threat and I'll be
looking at a lot of research that has been done over the last 20 years in this space. It's about
how stereotype threat affects our performance in general, our intelligence and ability, and our levels of confidence, those kinds of things. And then especially
how it shows up in the technical workplace. Then I'm going to switch gears a little bit and draw
from a study that we did at NCWIT where we studied male advocates; Microsoft and Microsoft Research were both part of this study. I'll tell you a little bit about that study and what we learned that male advocates can do to advocate for gender diversity and other types of diversity in general, but also how they can help mitigate the effects of stereotype threat, if
that makes sense. But first I want to back up and say a little bit about the problem, the exact
problem that we are trying to solve at NCWIT. How many people know what NCWIT is? Good.
If you don't know, Microsoft is also a member of NCWIT and working on these issues with us
and I'm sure that you are all aware that women and minorities are
underrepresented in technology, but in case people aren't sure of the exact numbers I just
wanted to do a quick run through to get everybody on the same page. As you see here women
comprise about 57 percent of U.S. professional occupations, so more than half. Professional occupations are ones that the Bureau of Labor Statistics categorizes as requiring a four-year degree. Women comprise more than half of those professional occupations, but they hold only about a quarter of computing occupations, and those are actually computing-related occupations: not just using technology, but actually designing technology. Then you can see that when you
consider certain kinds of technology positions the numbers decrease even further. For
example, they hold only 19 percent of U.S. software developing positions and the numbers
decline further as you get into leadership. When it comes to racial or ethnic minorities, this
also comes from the Bureau of Labor Statistics black and Hispanic men hold about 9 percent of
U.S. computing jobs, black men holding about 4 percent and Hispanics about 5 percent. Black
and Hispanic women together hold only about 4 percent, so it's 3 percent black and 1 percent
Hispanic. These are all U.S. numbers from that U.S. Department of Labor. We do have
numbers internationally if you are interested as well that we're working on capturing. Yes?
>>: For the numbers on the previous page, it's fairly easy to guess what fraction of the U.S. population is women. I'm not a demographer, so in terms of fractional representation, is this better or worse than women are doing?
>> Catherine Ashcraft: For the black and Hispanic categories? The representation of African
American population in the total U.S. population is about 12 percent, and the Hispanic population is about 18 or 20 percent, rapidly on the rise and projected to be about 37 percent in the next 5 to 10 years, if that helps. It's also not just that people aren't applying for these jobs; we're also losing women in particular who are already interested and in these jobs currently or
in the past. In one of the largest studies on this topic they surveyed a wide array of women in computing, some of whom had left the field and some of whom were still in it. 74 percent reported loving their work and what they did, and yet 56 percent, so more than half, had left between 10 and 20 years into their career. Obviously, one of the things that people first wonder is whether they're leaving when they have children, opting out to take care of family responsibilities, and so they looked into that as well. It turns out that only 25 percent of the 56 percent who left, only a quarter of those, were opting to take time out, maybe that is not the best word to use, were leaving the workforce to take care of family kinds of responsibilities. 75 percent stayed full-time in the workforce and
half of those actually stayed in tech jobs. They just left large private sector company jobs and
they went to either startups, nonprofits or government kinds of tech jobs. There was a
remaining quarter as I am keeping up with the breakdown here, the remaining quarter went to
non-tech jobs in other areas. Even the 25 percent that did leave to take care of family
responsibilities indicated at times that they might have made other choices had there been
other options to stay or come back and that sort of thing. At NCWIT we're trying to work both on recruiting more women into the field and on addressing this issue of making it an easier place for them to stay. I just wanted to quickly point this out in case you are interested in finding more of this research on why this matters. You could say well, it doesn't really matter
that there are not that many women in tech. Maybe they just don't want to be or maybe they
just want to do other things, but of course we know increasingly, and as Stephen touched on, that diversity does bring benefits to innovation, problem solving and returns on profits, and we have learned a lot more in the last 10 years, I would say, about the benefits diversity brings to work teams and to companies overall. We have been saying this for quite some time. You have all heard that argument. But in case you don't know some of the latest research, it's summarized on the slide, and you can also find a research summary. I could give a whole talk on this one particular topic, but since we don't have time for that, you can check out the research summary at our website there that delineates all of the specifics about the benefits that diversity brings to teams and company productivity. With that in mind, a quick
overview of what I am going to talk about and like I said, I am going to first start off talking
about some of the studies on stereotype threat research and looking at how stereotype threat
affects performance and confidence. Then I'll kind of drill down a little bit more specifically about
how these things actually show up in technology kinds of environments and workplaces. And
finally switch gears and talk about this study we did at NCWIT looking at the role of male
advocates in diversity efforts at large but also, in particular, addressing stereotype threat. First,
how stereotype threat affects performance. I think one of the key things to remember here is that intelligence and ability are much more fragile things than we often think, or, as one of the leading researchers in this field, Joshua Aronson, says, than the makers of the SAT would have us believe. We have this idea that we have our SAT score and that people have this
relatively static level of intelligence. But we know from a lot of research that there are all kinds
of things that can affect intelligence. There are physiological factors, the reason they tell you
and your parents told you, that you should be drinking juice at the beginning of the morning,
having a good breakfast and all those kinds of things. Those kinds of things can affect how you
score on a test, but also there's a lot of social factors that affect how you can score on a test as
well. Intuitively, you probably have experienced this yourself or have some sense of this where
you just kind of feel smarter on certain days or with certain people or certain groups of people.
Wow, I am just incredibly witty and funny and everything I'm saying is right today; and then you go around another group of people and you can't argue your way out of a paper bag or come up with a decent thought. You probably have all had those kinds of experiences, and stereotype threat, research has shown, is one of the key factors at play among these social factors that affect intelligence. First of all, it's always good to start with a definition: what is
stereotype threat? How many of you have heard of the term before? A few people. Basically
it's a fear that our performance in a given situation, where a stereotype about an identity group we belong to is relevant, will confirm that negative stereotype. You have to belong to the group the stereotype is made about, and the stereotype has to be relevant to the current situation. What are some examples of that?
I'm going to talk little bit more about these when I talk about some of the studies, but just to
give you a quick example of what this looks like and they have shown this with all kinds of
populations in the past 20 years. It can be an elderly person taking a memory test, especially if
reminded ahead of time about stereotypes about age and senility; they perform less well on
those tests. African-Americans taking an IQ test; a woman called upon in a computer science class, especially if she is the only woman or one of the few women in the group; and also, remember this one for later, because I'm not actually going to talk about this study, but I'm going to refer back to it when we talk about male advocates: they've shown that men, when reminded about stereotypes about maybe not being the most socially sensitive or socially skilled, perform less well on tests of social sensitivity than men who aren't reminded of those stereotypes. And an example from popular
culture, it's actually kind of difficult to find examples in popular culture about this, so if you
think of one while I'm explaining this, let me know afterward. I think that this is an ideal
example. How many of you, so it's a little old now, 8 Mile, have any of you seen the movie from
about 12 years ago? For those of you who don't remember it verbatim, it's loosely based on
Eminem's biography, and so Eminem, or rather the character resembling him, in his first rap battle gets up on stage and he is battling an African-American rapper, and the black rapper
starts out and the audience is also mostly African-American and it becomes Eminem's turn and
they are shouting all of these insults and stereotypes and white men can't rap and get off the
stage kind of thing. And of course he eventually kind of just chokes up and freezes and is
unable to really continue. And of course as we know he got over that and he went on. It was
something he was actually pretty skilled at, even at the time he was quite skilled at it and went
on to prove that he was very skilled at it. I don't know what your personal assessment of
Eminem is, but a critical mass of people think he's skilled at it, and yet in the moment he froze up and was unable to do that. It is kind of an example just to keep in mind about how this works in
everyday life. This is maybe an extreme example where they are actually shouting at him the
whole time while he's trying, and so the stereotype is absolutely being invoked. Researchers
have wanted to study examples where maybe the threat isn't quite as extreme and
you are not being reminded of it every 30 seconds or every 5 seconds. It's important to
remember that even when you don't have an audience yelling at you and reminding you of the
stereotype that it can still have this effect. These two things are very important to keep in
mind. It dovetails with what you said, Steve, about unconscious biases: you do not have to believe the stereotype for it to have this effect on you. If you were an African American you wouldn't have to believe that the stereotypes about race and intelligence were true. You would just have to know that they exist. Others in the room do not have to believe it either. It tends to make it worse if that's the case, but it can have that effect anyway.
Let's take a little bit of time to look at what some of the research says and like I said they have
been doing studies in this space for about 20 years. It's one of the most replicated social
science findings I think to date. Don't worry; I'm not going to go through all 357 studies today.
I'm just going to highlight three of them. You can also find almost all of them categorized at reducingstereotypethreat.org. That's a convenient name and an easy-to-remember website, and you can find the ones that you are more interested in. You can see here a sample of the kinds of populations that they've done this with, all kinds of different populations. It started out mostly with college students around race and gender, but then they have extended this concept
to all kinds of different populations. First I want to look at, this was really the very first series of
studies done in this space and the researchers were interested in looking at the difference
between black and white participant performances in intelligence tests. In one of these studies
they took black and white participants and they gave each of them a test on verbal ability and
they took items from the GRE verbal test, basically. I'm getting over a really bad cold so I'm
hoping that I don't have a really unpredictable coughing fit here. They put the participants in
two different kinds of conditions. In the first condition they told all of the participants that they
were taking a test that had been found to be a very accurate measurement of intellectual
ability and that they were part of a study to kind of figure out how personal factors such as
race and ethnicity factored into people's scores on the test. That is theoretically expected to trigger stereotype threat if students know it. In that condition you can see that the black participants scored much less well. You can't read this, but it's the number of items solved correctly on the test, and you can see that they performed less well. In the second
condition, they weren't told anything about race and the stakes were also changed a little bit
and they were told that it was more of a problem-solving exercise and they were just trying to
study psychological factors of how people solve problems and race wasn't mentioned at all. In
that condition it was the same exact test, and they corrected for the standardized scores that the participants had coming in, so they were comparing people who had similar scores coming in in
both of these studies and you can see in this study the black participants obviously improved
quite a bit, pretty much equaling the scores of white participants. They did a couple of tests
testing this from a number of angles, and a number of other researchers have tested this in other contexts. These particular researchers also wanted to look at what the effect would be if
the stereotype was not so obvious and they had noticed also in some other studies that
indicating race, just indicating race before taking the intelligence test had an effect and so they
wanted to test that out. They did another study with a similar kind of test, and they put half the participants in a condition where they activated stereotype threat only by giving them demographic questions to answer before the test, such as age and parent education, and the last one they answered right before the test was race. You can see how the black and white participants scored when they had to give their racial identity ahead of taking the test. The second condition was the same exact test, same exact instructions, except they did not indicate race before they took the test, and you can see the corrected scores and how significant a difference it made, basically on par. It's not a statistically significant difference there. Yeah?
>>: How do they differentiate the hypothesis that what's going on is the mention of the stereotype and the fear of confirming it, versus just anger over what the heck a race question is doing on this test, which might be distracting if you are in a more discriminated-against race and less distracting if you are not?
>> Catherine Ashcraft: I think there are maybe two answers to that. In this case it would have been like a normal test that they would have naturally expected. They do this traditionally on standardized tests. You get asked your gender, your race and just your demographics, so I don't think in this particular condition it would appear out of the ordinary. It would just be something that you regularly do when you go and take standardized tests. The absence of it was more the anomaly, right, but they don't notice that either; it's just that it's not subconsciously triggered. I think maybe your question might -- you still look confused
though. But I think your question might apply also to the other study where they were told that
they were part of a study, you know, because then they might think well why are people
studying that and they could have a negative reaction to being told that. But I think in some
cases it doesn't actually matter because that's how you go through life. The whole point is you
know that some people have these kinds of prejudices and when you hear that it is the anxiety
and the fear that can make it worse, if that makes sense. They've also done similar studies with
gender. The studies I've just mentioned were done primarily in the laboratory, kind of
experimental environment, bringing participants in and giving them these tests. This study was
done as a field study in partnership with the Educational Testing Service and they wanted to see
if this same kind of phenomenon applied to gender. Half of the participants took the AP
calculus test the way it's normally taken when gender is asked right before you take the test.
And it's not like anybody is trying to trip women up by doing this. They just
didn't know that this was the case. They were just asking demographic information because
that's what you do before a lot of kinds of tests and things. When gender was asked before the
test you can see the results where girls score significantly less well than boys. And then in the
other half of the participant group they simply moved the demographic gender question to the
end. It made a significant difference as you can see there. In fact, girls' scores went up a little
bit and in this case the boys' scores actually went down a fair amount and it is still relatively
close between them now, and it is a statistically significant difference. But the boys did go down
from their prior score and there is additional research that has been done since this study to
show that if you are on the positive end of the stereotype, there is what they actually call a stereotype lift effect, and that research has just started to be done in the last five or so
years, but they have seen initial evidence that there is, in fact, a stereotype lift. Some of these
researchers, I guess, jokingly give advice like, if you have a daughter that is going in to take a test, you want her to go into the test chanting I'm smart, I'm smart, I'm smart. But if you have a
son you might want him to go into the test chanting I'm a boy, I'm a boy, I'm a boy. [laughter].
Just to kind of manipulate the stereotype in whatever way favors you. Like I said, this has been
replicated across studies over the past 20 years and on average when it comes to women it
tends to cost women about 20 to 30 points if the question is asked before on a math or science
kind of test. And for black and Latino students it costs roughly 40 points. We actually have
worked with College Board to get some of these rules changed and they are working on
changing some of that for the math and science tests. Yeah?
>>: [indiscernible] what is the scale, or is this a standard SAT?
>> Catherine Ashcraft: Yeah, like a standard SAT score.
>>: Is it like 200 to 400 to 1600?
>> Catherine Ashcraft: Yeah, something like that. I wanted to highlight one more study in this
area of research that took a very different turn. Researchers wanted to know if one instance of exposure to stereotype threat could produce a similar kind of effect. They wanted to know, for people that don't normally walk around being subjected to a stereotype, if you put them in a condition where they are for that moment subjected, will it
have that effect. They took white male engineering students from Stanford, all of whom have
had very high math scores, all of whom thought of themselves as being pretty high scoring, high
math ability folks, and they put, I see some people know where this is headed. They put half of
them in a condition where they tested their math ability and they just told them that they were taking a math ability test. And then the other half of the participants they put into a condition where they told them they were taking an ability test, but that it was part of a study to figure out why white students were scoring so much lower than Asian students. So for the ones who were oblivious to this framing and just knew that they were taking a math ability test, you can see how they scored. When they were in the other condition, this is how those men, with very similar scores to the men in the other group, performed. Just from the one instance of being put into that condition. It helps kind of illustrate how powerful this effect can be: if one instance of it can do that, then a lifetime of exposure to that threat can also build
up over time and create all of this kind of anxiety that diminishes your performance in a variety
of contexts. You can probably begin to see how -- yeah?
>>: I was just trying to figure how long the effect lasts, like if you gave the ones who had taken
the test compared to the Asians and brought them back a week later and gave them a test and
didn't say anything, would their score go back up?
>> Catherine Ashcraft: Like did we scar them for life? [laughter]. Is that what you are
wondering? Did we ruin like a bunch of careers? I don't know of any studies that have done that. I think that, judging from what the rest of the studies say, that effect would probably not continue afterward into another condition, unless they were exposed to it over time. Probably one instance wouldn't have a permanent or long-lasting effect. Even the
folks who are subjected to stereotype threat repeatedly, as I'll say in a minute, there are ways
to minimize that and when they're not in that condition, they do better. So even though they
had a lifetime of exposure to that stereotype, they still do better when they're in a position
where it's not invoked in that moment. Switching focus a little bit here to kind of look at how this plays out: I think you can see from those kinds of studies, the ones I talked about were mostly done with students, how that can affect the pipeline and
how students at a very young age even can come to see themselves as good at science or not
good at science or math. It shapes how teachers see them. It shapes the kind of advice they
get. It shapes the kind of career advice that they get. It shapes how they perceive themselves.
It can have a profound effect in that regard. But I want to shift focus a little bit and look at
how it also plays out in the workplace where people have already arrived in these positions and
are working and how it can shut down innovation in that context. Some of this will dovetail and
complement nicely with what Steve was talking about as well. A point to remember that is particularly relevant in technology environments is that stereotype threat is particularly salient
and significant in a majority minority environment. That is one of the conditions that can
actually trigger stereotype threat. If you are the only man in the room or the only woman in
the room you tend to notice this more and other people seem to notice it more. But when there is more critical mass, gender doesn't become as salient. Being in the minority alone kind of tends to trigger it. The way it shows up in everyday kinds of subtle interactions is this sort of reduced
performance or this reduced confidence. A hesitance to speak up in meetings, because, again,
remember it's the fear that if you mess up or if you say something not so brilliant it's going to
be attributed or will reconfirm the stereotype about a group to which you belong. Sometimes
this can be conscious, but most of the times this also is unconscious and so it's not like people
are in meetings sitting there thinking well, I'm a victim of stereotype threat, so I don't think I'm
going to speak for a few minutes or not at all. [laughter]. It's mostly unconscious and can also
help make you appear less confident because you are not speaking up in meetings or because
you are hesitant when you do or because of other things such as you are reluctant to take
leadership positions, reluctant to apply or volunteer for things that you think you might not be
qualified for. We know from other studies that women, because of other socialization effects in addition to the stereotype threat, are already reluctant to take leadership positions and tend to be harsher critics of their own work, but stereotype threat exacerbates this, so that you don't want to take a position unless you know, know, know that you are ready, because if you mess up then you might confirm this negative stereotype. Similarly, you might be in sort of a research context and be reluctant to approach an advisor or to network to find an internship, because if you are the only woman it makes that salient again and it is just more difficult to
relate and to go and say I wonder what they're thinking when I go and ask them to be an
advisor and those kinds of things can just be circulating unconsciously even in your head. Like I
said, sometimes it is conscious, but you are not necessarily thinking of it in terms of stereotype
threat. Again, hesitancy to take any kind of risk because, again, you're worried about
reconfirming the stereotype. You can see how some of these behaviors or these conditions can
prevent team members from contributing fully to the team. Even when they are physically
present in the room, if you can't get their ideas out or they can't get their ideas out, then it
hinders the very goal of the whole endeavor in the first place. It hinders innovation and creative problem solving in that way. The good news is we know quite a few things that work to
mitigate some of the effects of stereotype threat. I'll say more about these in a second when I
get to the male advocates part, but I wanted to say something quickly right now about the
growth mindset interventions. Has anybody here heard of the distinction between a fixed mindset and a growth mindset in some of the research that has been done? A few, okay.
This relates very well to stereotype threat. What is a growth mindset? You kind of think of
these as a continuum, so a fixed mindset tends to think that intelligence is relatively fixed,
relatively stable, static. You either have it or you don't, natural born talents. A growth mindset
tends to see intelligence as something that can be developed, exercised like any other muscle in
the body. If you use it you improve it, and a growth mindset also recognizes that it's affected by a lot of these
social circumstances and it can change in various given settings. Think of these as a continuum.
It's not really like anyone is probably all the way over here saying that intelligence can never
change and nobody can grow and improve. Most of us, I believe, are somewhere in there, but
if you hear things like, as I said, natural born talent, or to be successful in this field you really can't teach it, you've just got to be born with it, you just have to have a gift, floating those kinds of comments would reflect more of a fixed kind of mindset. They have
found that educating students or managers even about growth mindsets can go a long way to
mitigating the effects of the stereotype threat. Sometimes they are given whole courses on it,
but they have also given students instruction before a standardized test about the growth mindset and that intelligence is like a muscle, and talked with them openly about stereotype threat and the growth mindset. Knowing that is half the battle; it reduces the effect of the stereotype threat, just knowing that you can exercise that muscle and that you are not necessarily paralyzed by this stereotype or other people's views of it. These researchers
wanted to look at this on a larger scale. Could a growth mindset actually help explain under
representation in different disciplines? This is a very recent study that was just completed last year; they surveyed 30 academic disciplines, and as you see, 12 of them are STEM disciplines. They surveyed scholars in those academic disciplines, asked them a series of questions similar to these, and asked them to rank themselves as to where they were on that continuum: being a top scholar in the field requires a special aptitude that just can't be taught, or are you more along the lines of, when it comes to math or whatever the field was, the most important factors for success are motivation and sustained effort; raw ability is secondary. They would rate
themselves on a scale of like 1 to 5 and so they found that if the average score for the scholars
in the field was on the higher end, so this is ranging from 3.7 to 4.7, so the closer they were to
the 5, being the most fixed mindset, in general, the lower the representation of women. The
lower the score on the fixed mindset, the higher the representation with math being a little bit
of an outlier: math has the highest fixed mindset and is still kind of low on women, but not as low as engineering, with computer science and physics being the next highest on the fixed mindset. Brand new research, but really interesting to see. You can see how that might be, because other studies that have looked at a more micro level asked managers similar kinds of questions about ranking themselves as to where they fall on this continuum, and they found that managers with a fixed mindset relied more on first impressions, because if you either have it or you don't, then I can size you up pretty quickly. That makes it more likely that they are going to
stick with unconscious biases that are a part of those first impressions. They fail to notice
improvement because, again, they are not primed to see that, and they were less likely to
mentor because, of course, if it's fixed there's not a whole lot of difference that you are going
to make. You can kind of see how some of these things would relate and might produce the effect that we saw in the previous study. What can we do about all of this? I want to switch gears a
little bit to identify the role of male advocates, in particular, but I will also be talking about
things that everyone can do, that women can do as well, but I want to talk about male
advocates in particular because they have a role in a majority minority setting where they tend
to be in more positions of power and have more influence and things like that. These draw from our study of male advocates in technical workplaces, in which we interviewed 47 men, all working in high-tech companies or in the tech departments of companies in other industries. Like I
said, Microsoft and Microsoft Research participated in this study. The interviews were 45 to 90
minutes long. Half of them were done with a female interviewer and half were done with a
male interviewer. We wanted to see differences there as well. We followed standard kinds of
procedures in recording and transcribing the interviews and coding them using multiple
researchers and coding schemes. If you are interested in the full report we actually look at a lot
more than what I will be talking about today. It's available at the URL, but we looked at kind of
the experiences that men had in terms of what motivated them to participate in these efforts,
what kind of personal and professional experiences shaped their thinking, shifted their thinking,
motivated them into action, what kind of arguments they found compelling and what they used
when talking to other men or other people about these things. And then probably the bulk of
the report focuses on the top 10 ways that men were already advocating and the successes and challenges they faced in doing that. That's the part I'm going to draw from: the things from those top 10 ways that men can do to advocate for mitigating stereotype threat. First, three caveats about the study and actually all of this research that we have been
talking about. When we talk about it in terms of male and female employees, this male-female binary reinforces those traditional ways of identifying, because historically that's the way that the majority of the population has identified and so that's what researchers have studied, but I just want to mention that it does not really take into consideration people who identify with other kinds of nonconforming gender identities. The second thing is that not all of these strategies
are necessarily limited to male advocates, but like I said, they are important and they tend to
sometimes have a different effect when men do them if they are in positions of power and
influence. The third is that right now I am talking in terms of the gender diversity component, but some of these same things apply to other kinds of majority minority dynamics, so it's
important for majority group advocates to step up in general for these kinds of efforts and they
can make a real difference in terms of race also or other types of majority minority group
dynamics. One final caveat before I get to the things that men can do is to remember that earlier part where I mentioned men having reduced performance on social sensitivity tests. You can imagine that there might be some stereotype threat that can apply here in sort of the reverse, as men try to think about being male advocates and stepping into this arena of gender issues, and that might raise some anxiety. In fact, the men in our study talked a lot about this and about several of these reactions, which I decided the best way to capture was through a series of internet memes. They talked about the anxiety that was raised by the thought of, oh, if I say
the wrong thing what am I going to do? If I don't say everything right or I don't know what to
say, and so they talked about this a great deal, actually. I'm going to bring up an example here
in a second of one person who had a very poignant story. I think there are a couple of good
pieces of good news here. One is that in a way this can increase empathy across groups. If you
are a male advocate or a potential male advocate thinking about this and you experience these kinds of stereotypes, you can actually develop empathy for what it must be like in these other conditions, and luckily, hopefully, you don't experience these things for much of the work day. But you can imagine, if you were subject to a stereotype that was more central to the work day, how exhausting it would be to have these kinds of reactions all the time, or a lot of the time, this kind of paralysis. And the flip side, for the minority group, is that it could also make you more sympathetic when the majority group screws things up sometimes or doesn't always do well.
I think it can increase empathy in that way, realizing that we are all just victims of stereotype
threat in various ways. The other piece of good news is that a growth mindset is also important
here and can work to reduce some of this kind of anxiety and to make you a better advocate eventually. This is key in efforts to involve male advocates: making it okay to take risks and explicitly talking about how you probably might make mistakes, and let's talk about that and what we can do to fix it. I'm going to skip this example. You will just have to get the full report. This
was a story of a man who had made one of these mistakes and talked about the experience he
had with this female mentee and how they negotiated it and it was actually a very powerful
experience for both of them. The last thing that I want to say before running through the strategies that we can use to mitigate stereotype threat is that it's important to remember that it's not
just a women's issue or a person of color issue. And expanding these gender norms improves
these conditions for everyone, kind of like you were saying earlier about men who can also be
victims of these things, so expanding gender norms is important and can then be seen as men
and women advocating for each other in that regard, so it avoids the man riding in on a white
horse to save the woman kind of phenomenon and apparently going down in a ball of fire. I
don't know what that part was about. What can male advocates do? I can run through these
pretty quickly because I'm getting short on time. First, a word about what not to do. This is not
about lowering standards. It's about considering how you might unwittingly be creating
conditions where people are actually underperforming and you are not getting the full value of their true talent or ability, and thinking about what some of those conditions might be. The first thing that men can do is talk to other majority group members and question assumptions. I think this is one of the most important: making others aware of stereotype threat and some of the strategies to reduce it. Helping them question, this one I think is huge. Questioning
yourself when you think this, and helping others question when they think so-and-so just isn't very confident, or so-and-so just isn't a risk taker, I don't think we can give her that job, she wouldn't be up to it. And thinking that it's not necessarily about a deficiency in that person, but that it could be these factors at play. Women and minority group members can also think about how stereotype threat might be at play. A lot of people talk about how this is very empowering
knowledge because they maybe did think it was something about themselves, like maybe I'm
just not very confident or maybe I'm just not a risk taker. Finding this out, that maybe it isn't
about me. It's about the situation and then I can maybe try to get around that. The second
thing is to solicit the opinions of quieter employees in the team. That overlaps actually a lot
with what Steve was talking about also in terms of strategies to get opinions of quieter
members of the team. Also make sure that people get credit for their ideas: if you see somebody making a comment and then somebody else getting credit for it later, make casual comments about that in the moment, being sort of an interrupter of that stereotype threat or the bias. The third thing is to promote a growth mindset explicitly with
your team and your unit or your colleagues if you are not a manager. Educating team members
and making the whole growth mindset idea explicit, and that can happen in moments of interrupting talk about natural talent, or when people make those kinds of comments in a performance evaluation meeting, comments like you either get it or you don't. Let's step back and look at this, because that's not really the way we want to think about these kinds of mindsets and developing employees and that sort of thing. The fourth is
to provide legitimate encouragement, and by that I mean you don't want to be encouraging people just to make them feel good, but to encourage them when they have done really well. We underestimate this
all the time and it has been shown to be one of the most effective things in mitigating
stereotype threat. We have countless examples of women who have talked about just a male
colleague or a female colleague who encouraged them to apply for a promotion or to apply for
an award that they wouldn't have applied for because they were hesitant, and then they get the award and that helps boost their confidence as well. That, though it seems like a simple thing, is
huge in mitigating these kinds of effects in the majority minority environment. The other cool
thing is that a lot of these are relatively inexpensive. It doesn't take an enormous company
program. The fifth thing, and this can be a little more challenging in the current environment, is to have an eye towards creating critical mass. When you have critical
mass and tip the scales then the stereotype threat becomes less relevant. In the meantime,
whenever possible form diverse development teams, because we know that those are more productive. I know that's not always possible, but do it to the extent possible. Finally, I'll just close
with a quote from one of the men in our study who talked about the growth mindset and the
stereotype threat that he experienced and that you just kind of have to take the plunge. Every
person that becomes an advocate had to go through that door where they take the first risk
and realize that it wasn't so bad. He was mentioning the risk-taking that you take the first or second time talking with other men, and how all of a sudden it is no longer risk-taking. That's it. Thank you. [applause].
>>: We have a little bit of time now for group Q&A.
>>: Personally, I have no doubt that there are stereotypes and all of these things are true, but how exaggerated do you think they are? Because in our field I try five things and four of them look like I am not going to be able to publish them. One gets published and that's the one everybody sees. When you select the examples, probably you selected
the examples that were like the most obvious. It's not clear to me: is this a huge problem that we deal with, or is this a problem we should just be aware of? Is it something we should take action on, or should we just be more aware of it in general? Because there is a [indiscernible] when
you publish this thing. We know from other fields [indiscernible] where lots of people cannot replicate that thing and they don't publish the paper. [indiscernible].
>> Stephen Bernard: I think this is probably a problem. Probably a lot of you guys are familiar
with this question: is it possible there are holes in the literature because positive findings get published more than null findings? I think one thing that will probably make that better in the future is that more people are happy to register their data, so we are aware of the findings whether they are positive or negative. I have tried to avoid talking about stuff where there aren't lots of replications of it, just because, the positive publication bias aside, with something where there is just one study on a topic maybe it's a false positive. That's why I
say for that finding that men are penalized more for taking flexible leave, I think it's logical and I
think that this finding supports it. It's kind of a new research topic, so that's why I mentioned
that I would like to see more studies confirming that. Some of the studies, things like women,
for example, getting less weight from their opinions than men, that is something I feel
confident about in those kinds of findings, because we have been doing that kind of research in
social psychology since the late 1960s and there is a pretty decent track record and there's a lot
of meta-analyses like looking at those kinds of findings. I tried to focus on stuff that is pretty
replicable or that there are a lot of replications of. I think you raise a great point in that even if there are a lot of replications, you wonder, is it possible that there are even more non-replications that are never published? One thing that helps a little bit, I think, are these audit studies where
they send out the resumes to companies. They are extremely time-consuming and expensive
to do those. They can take one or two years just to get the data for like a small number of
findings. If researchers were getting lots of null findings and not publishing them, there would
be big career problems with that. If you are only getting that effect like 30 percent of the time, you wouldn't have gotten tenure. The other thing that can be helpful
about the process is that a good way to get famous in most fields is to prove like some other
famous person was wrong. I think that puts a check on it. Again, I think you have a good point,
that part of the nature of the problem, publication-wise, is that you wonder what maybe you
are missing. I think there will be less of that going forward as more places have you do things
like register your data. I know when you submit an NSF grant now you talk about what your
data management plan is. Where is your data going to be stored so anyone can look at it? I
think it's a great question. I tried to pick stuff with a lot of replication, so hopefully that's less of an issue. It's hard to know for sure, but hopefully, it's at least less of a problem going forward.
Does that answer your question? That was a great question.
>> Catherine Ashcraft: I think when it comes to stereotype threat similar things kind of apply in
that there have been the 350 studies over time. I think when it first came out there were, and still are, questions about the nuances of it, and there are variations. They have found that actually the people most affected by stereotype threat are those moderately identified with the subject. If you have no interest in or identification with it, you are actually not affected by the stereotype; if you have a moderate interest you are the most affected; and those highly identified with science are in the middle, like the second most affected. So they are kind of working out those kinds of things,
but I think similar to what Steve said, the findings that I was talking about have been replicated
in quite a few contexts in various ways. I think we feel fairly confident about those findings and
the fact that some of the solutions are not really that hard to implement. It seems like it would
be a good thing to do.
>>: Maybe it might be helpful if you talk about the Pacesetters, because I think this may answer your question of whether this is just knowledge or something that you should think about and start doing, and what the Pacesetters program is and how a lot of these things are being implemented in companies and academic institutions.
>> Catherine Ashcraft: The Pacesetters program is a program we have at NCWIT with members of our workforce alliance who agree to sort of a higher level of commitment to working on diversity and actually track their numbers. It's a cohort that works on these things together, shares this research and brainstorms strategies to implement these kinds of things in the workplace, so there are companies that are working on implementing these kinds of strategies to mitigate some of these more common biases and stereotype threats that we know about.
>>: I was wondering if you could comment on what is actually going through the mind of the
test taker in the first set of studies you showed when they are taking the test? You showed
some very powerful results. Are these people being overcome by anxiety and if that's the case,
do some of the studies control for test-taking anxiety and what that is, or for other ways that anxiety can be created?
>> Catherine Ashcraft: They do know that it is anxiety that is about the stereotype that goes
through their head. They do some follow-up when people take the test and they do some exit
questionnaires with them and they have them kind of do surveys on what they were thinking during the test and rank things, so they can kind of get a sense of how confident they were feeling and
why. They have actually done some questions about racial identity as well and kind of made
correlations between how racially identified you are and how much the stereotype affects you.
The reason that they did the second study on race that I mentioned is they were trying to isolate what you were talking about. In the first study both factors were at play. They told them
that it was about race, but they also told them that it was an accurate test of their ability. And
in the other one the group got the message that this isn't really a test of your ability: don't worry
about it. We're just doing this other study. So they wanted to isolate and see if it was more
about general test anxiety or if it was the stereotype threat anxiety itself, and so that's why they did the second test, where they don't really tell them about race; in that second one they were just asked to indicate race, but they weren't told that the study was about race. That test was positioned as not really measuring ability. This
was a low stakes test, but then they went ahead and asked them their race before they took it
and it still had that impact. They were trying to see if it only had the impact on the high-stakes
test or if it would also have that impact on a low-stakes test.
>>: What was the result there?
>> Catherine Ashcraft: That was the second study, so it had a similar effect. The scores were
very low at the beginning for black participants and then they increased quite a bit, almost, a very similar effect to the first test.
>>: We will take one more question.
>>: We discussed today a lot about biases and how we perceive people based on them, but what about more structural things? Could it be that the workplace is designed to support males more than females? For example, we're talking about academic [indiscernible] for myself when I want my scholarship; what is the effect of that as compared to the effect of seeing every woman as less capable than men?
>> Catherine Ashcraft: Yeah. I think that's a huge issue. I think that there are different ways. I
think maybe today we focused a lot on individual kinds of biases, but there are all kinds of ways
that biases manifest in institutional barriers and in systems and I think that is another aspect
where we do know a lot about biases that surface in policies, that are designed to fit a certain
population more so than another, and it wasn't really intentional. That's actually another thing that Pacesetters works on, sort of both levels, including the institutional dynamics that have to change. Even if you could magically correct everyone's individual biases, you would still have these sorts of things that have been historically encoded and that would cause the kinds of problems that you are talking about.
>> Stephen Bernard: Can I just add a little bit? I think a really interesting factor is that these
structural factors interact with the personal factors. I was kind of joking. The psychologists
should really know better than to discriminate. But we also wonder, like, maybe there are cases when you are confident you're not the kind of person that does that, because of the kind of organization or the kind of field that you are in, so maybe you are less likely to introspectively look at your own behavior. So in some cases maybe psychologists are rigorous and maybe they
are not because maybe they don't feel like they are going to be guilty of it. I think like the
personal aspect and the organizational structural aspect probably interact in some pretty
interesting ways.
>>: If we have to focus on one thing, what do you think would be the most important one to
focus on?
>> Stephen Bernard: Kurt Lewin, the famous social psychologist, said you
always have to look at both, so I think I would have to turn in my social psychologist card if I
picked one. I guess one thing, in some ways I like focusing on organizational aspects because
those are things that you can design. I feel like, as an individual, people can tell you you have these biases, but then it's hard to know what to do. It's like maybe you know that you shouldn't procrastinate, like you shouldn't be playing Minecraft. You should be getting your
work done, but still, just knowing that you shouldn't be doing it doesn't make you not do it.
You have to create a situation where you can't play Minecraft until 11 PM or something. In some ways the organizational factors are the things that we actually have more control over, so
in terms of what we study, I think both are really important, but maybe the organizational
factors we have more input over.
>>: All right. Thank you very much. [applause].