COGNITION AND EMOTION
https://doi.org/10.1080/02699931.2018.1545634

A sad thumbs up: incongruent gestures and disrupted sensorimotor activity both slow processing of facial expressions

Adrienne Wood (a), Jared D. Martin (b), Martha W. Alibali (b) and Paula M. Niedenthal (b)
(a) Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA; (b) Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
ABSTRACT
Recognising a facial expression is more difficult when the expresser's body conveys incongruent affect. Existing research has documented such interference for universally recognisable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of people simultaneously producing facial expressions and hand gestures and reported the valence of either the face or hand. Responses were slower and less accurate when the face-hand pairing was incongruent compared to congruent. We hypothesised that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, a manipulation that has disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but it does not suggest that perceivers rely more on gestures when sensorimotor face processing is disrupted.

ARTICLE HISTORY
Received 16 May 2018; Revised 28 October 2018; Accepted 5 November 2018

KEYWORDS
Emotion perception; gesture; sensorimotor simulation
The human body provides multiple channels for communication. In addition to information communicated
verbally, faces display expressions and hands produce
meaningful gestures. However, much research on the
perception of facial expression neglects the communicative nature of the human body beyond the face. This
is true in spite of the fact that observers perceive communicative bodies and faces as parts of a single whole
(Aviezer, Trope, & Todorov, 2012b): a facial expression
may take on new meaning as a function of the information conveyed by the body (Aviezer, Trope, &
Todorov, 2012a), or vice versa (Burgoon, Buller, Hale, &
de Turck, 1984). Since most faces encountered in everyday life are accompanied by fully or partially visible
bodies, faces and bodies must be studied together in
order to fully understand how meaning is communicated nonverbally (de Gelder & Hortensius, 2014).
Research on how perceivers process expressive
bodies, separately or in combination with expressive
faces, has focused on body postures or movements
with evolved affective or functional meaning, such
as cowering in fear or jumping for joy (Atkinson, Tunstall, & Dittrich, 2007; de Gelder, de Borst, & Watson,
2014; Tamietto et al., 2009). But much communicative
body and hand movement is learned, culturally specific, and at least partly symbolic (i.e. not derived
from a functional motion). We will refer to these movements as conventional gestures. Some conventional
gestures have emotional connotations (e.g. thumbs-down; Redcay & Carlson, 2015), and should in theory
facilitate or interfere with processing of an accompanying facial expression, just as emotional body
expressions do (Meeren, van Heijnsbergen, & De
Gelder, 2005). On the other hand, given that
conventional gestures are learned and culturally specific, they may not be strongly correlated with
facial expressions of emotion in people’s social
environments. Thus, observers may not process conventional gestures as a part of a cohesive unit along
with the face. To date, it is unknown whether conventional hand gestures are processed separately from –
or holistically with – facial expressions of emotion.
In the current work, we ask whether conventional
gestures, like bodily expressions of emotion, automatically influence the processing of facial expressions,
and vice versa, whether facial expressions automatically influence the processing of conventional hand
gestures. We first review evidence that observers
combine the messages conveyed by faces and
bodies, processing them as a cohesive whole. We
then build the argument that such holistic processing
might also include conventional hand gestures, which
play an important role in daily communication but
have been largely neglected by emotion perception
research. We then step back to review evidence for a
psychological process thought to support face and
body perception generally, known as sensorimotor
simulation. Given the evidence that sensorimotor
simulation assists in facial expression perception, conventional gestures may have even greater influence
on the automatic processing of facial expressions
when sensorimotor simulation is compromised. In
Study 1 we ask whether people are slower to categorise the meaning of a dynamic facial expression
accompanied by an incongruent, versus congruent,
hand gesture. Study 2 tests whether information conveyed by an expresser’s hands may exert even stronger influence on the perception of their facial
expression when sensorimotor contributions to face
perception are disrupted.
Combined processing of face and body cues
Expressive faces and bodies are attention-grabbing
and difficult to ignore, even when they are task-irrelevant (Van den Stock & de Gelder, 2012; 2014). Facial
expressions and bodily signals that are universal and
related to functional actions – such as the erratic,
domineering movements associated with anger or
the cowering, submissive movements associated
with fear – are known to elicit early preferential
neural and physiological responses in observers (De
Gelder, 2006; Eimer & Holmes, 2002) whether or not
people are consciously aware of them (De Gelder &
Hadjikhani, 2006; Tamietto et al., 2009).
Since emotional faces and bodies are difficult to
ignore, each influences processing of the other
when they are combined. When the signals are congruent (e.g. an angry face with an angry body) they
are both better recognised and judged to convey a
more intensely emotional message (de Gelder et al.,
2014; Martinez, Falvello, Aviezer, & Todorov, 2016).
Emotions expressed by a congruent face-body stimulus are also more evocative of neural and physiological
responses than emotions expressed by either the face
or the body in isolation (Kret, Roelofs, Stekelenburg, &
de Gelder, 2013; Poyo Solanas et al., 2018). When the
body and face convey distinct or even competing
messages, the perceived meaning of each signal
changes. A smile, for instance, is interpreted differently when paired with an engaged, forward-leaning
body posture compared to a disengaged posture
(Burgoon et al., 1984). Indeed, people are worse at
recognising facial or bodily emotion expressions
when paired with an incongruent body or face,
respectively (Aviezer et al., 2012b; Meeren et al., 2005).
Conventional gestures can be learned affective
signals
The work on bodily contributions to facial expression
processing reviewed above has focused largely on
body gestures that resemble functional behaviours
(e.g. cowering in fear), but much nonverbal human
communication involves the exchange of symbolic
conventional hand gestures. These gestures are
learned and vary in meaning from culture to culture:
for instance, the forefinger-to-thumb gesture that
means “A-OK” and signals approval in the United
States might start a fight in Turkey. A metaphoric
origin for some conventional gestures can be identified – thumbs-up as good and thumbs-down as bad
presumably map onto the metaphor of higher as
better – but these gestures are sufficiently removed
from functional actions that the relationship
between the gesture and its referent is largely arbitrary (Archer, 1997).
Despite the differences between bodily
expressions grounded in functional behaviours and
conventional hand gestures, people learn that
certain conventional gestures constitute positive or
negative social feedback and should be prioritised
similarly to emotional expressions. Symbolic gestures
like a thumbs-up or a middle finger capture and
sustain visual attention, much like biologically-grounded emotion expressions (Flaisch, Häcker,
Renner, & Schupp, 2011; Flaisch, Schupp, Renner, &
Junghöfer, 2009). As with facial expressions of
emotion, people process the affective meanings of
hand gestures extremely rapidly. Magnetoencephalography recordings of people viewing hand gestures
indicate that observers encode the emotional valence and self-relevance of the gestures just milliseconds after
onset (70 and 100 ms after presentation, respectively;
Redcay & Carlson, 2015). Together, such studies
suggest conventional gestures are meaning-laden
social signals that are processed early and are
difficult to ignore.
The learned emotional relevance of conventional
gestures is underscored by their potential to serve as
“unconditioned” positive and negative stimuli. In one
study, participants were conditioned to associate pictures of neutral faces with pictures of negative (middle
finger), neutral (pointing), or positive (thumbs-up) gestures (Wieser, Flaisch, & Pauli, 2014). Faces associated
with the negative gesture were rated as more unpleasant and arousing. Those faces also elicited larger
steady-state visually evoked potentials (measured
with electroencephalography), which reliably occur
when people view aversively conditioned stimuli
(Moratti, Keil, & Miller, 2006). This study suggests
valenced hand gestures automatically influence how
associated faces are encoded. However, the study
paired gestures with neutral faces, and these faces
were not physically attached to the images of the
hands. To our knowledge, no work has examined
how valenced hand gestures influence processing of
the whole-person signal.
Sensorimotor processing supports visual
perception of facial expressions
We have proposed that conventional hand gestures,
like biologically-grounded body expressions of
emotion, can modulate processing of facial
expressions, and vice-versa. However, it stands to
reason that the relative importance of the hand
gesture and the facial expression will depend on
which signal is clearer in its meaning or more accessible to the observer. In accordance with this logic,
bodily expressions override facial expressions when
faces are expressing extreme emotions, as people’s
faces tend to collapse into ambiguous scream-like
expressions during intense emotions (Aviezer et al.,
2012a). The relative influence of the body expression
on the perception of face-body stimuli can also be
decreased by administering oxytocin to participants,
causing them to attend more to the face (Perry
et al., 2013).
It is possible that we could increase the influence of
gestures on face perception by making it more
difficult for participants to process facial displays.
One technique for disrupting facial expression processing is to experimentally compromise the observer’s
face-related sensorimotor processes (e.g. Ipser &
Cook, 2016; Rychlowska et al., 2014; Wood, Lupyan,
Sherrin, & Niedenthal, 2016). Substantial behavioural,
clinical, and neural evidence suggests that when
people perceive an action, like a facial expression,
they engage their own sensorimotor systems to simulate the experience of producing the action (Blakemore & Decety, 2001; Niedenthal, Mermillod,
Maringer, & Hess, 2010). Such sensorimotor simulation
contributes to their ability to process the meaning and
intention underlying the perceived action. When the
ability to engage in sensorimotor simulation is disrupted – for instance, by having participants generate
incompatible facial movements or distorting the
somatosensory feedback from their faces – emotion
perception speed and accuracy are reduced (for a
recent review, see Wood, Rychlowska, Korb, & Niedenthal, 2016). Disruptions to face-specific sensorimotor simulation processes may then increase observers’
reliance on other sources of emotional information,
such as hand gestures, a hypothesis we test here.
The present studies
We examine how perceivers process dynamic hand
gestures paired with dynamic facial expressions that
are either congruent or incongruent. Participants in
two studies categorised stimuli according to valence
(whether they convey negative or positive emotions)
and we therefore operationalised "congruent" face-hand pairs as positive face-positive hand or negative
face-negative hand.1 Studies 1 and 2 both tested the
prediction that perceivers are slower and less accurate
in categorising the valence of either the hand or face
in a hand-face pair with messages that differ in
valence (Hypothesis 1). Confirmation of this prediction
would suggest that observers automatically process
the affective meanings of conventional gestures, interfering with categorisation of the accompanying facial
expressions (and vice-versa). Study 2 added a
between-subjects manipulation of facial mobility to
address two goals. First, we sought to replicate previously-observed effects of facial sensorimotor disruption on speed and accuracy in categorising facial
expressions. Second, we predicted that participants
are slower and less accurate in judging the valence
of facial expressions when their facial mobility is
restricted (Hypothesis 2). The mobility manipulation
also allowed us to test a further prediction, that incongruent hand gestures have a greater influence on
judgments of facial expressions when sensorimotor
simulation is disrupted (Hypothesis 3). Finally, considering prior evidence for gender differences in emotion
perception (Scherer & Scherer, 2011) and sensitivity to
disruptions to sensorimotor processes (Niedenthal
et al., 2012), we included gender as a moderator in
all of our statistical models; however, we did not
make directional predictions about the potential moderating influence of gender.
Study 1
Method
In the following sections, we report how we determined our sample size, all data exclusions, all manipulations, and all measures. The experiment and stimuli
files are available online (https://osf.io/9xs48/).
Video stimuli
Four White actors (two female, two male) were
recruited from the University of Wisconsin–Madison
undergraduate theatre programme and paid to be
filmed. Actors signed appropriate release forms and
were coached on how to produce each facial and
hand movement. The facial expressions were happiness, positive surprise, fear, sadness, disgust, and
anger, and the hand gestures were thumbs-up, A-OK, thumbs-down, and a fist raised as if in anger
(see Figure 1). The dynamic facial expressions began
from neutral and ended at the apex of the expression.
The dynamic hand gestures began with the hand off-camera, then the actor raised their hand, emphasised
the gesture with 2 superimposed beats (i.e. pulses),
and held the position still for the remainder of the
video. The facial expressions and hand gestures for
each actor were filmed separately on green screen
backgrounds and were then overlaid on one another
with a black background, creating all possible hand
gesture-facial expression combinations for each
actor (4 actors * 6 facial expressions * 4 hand gestures
= 96 unique videos). “Congruent” face-hand pairs
were combinations of positive facial expressions and
positive hand gestures or negative facial expressions
and negative hand gestures. "Incongruent" face-hand pairs were combinations of a negatively-valenced channel (either face or hand) and a positively-valenced channel (hand or face, respectively).
For use in the Baseline Phase, we also created hand-only (16 in total) and facial expression-only (24 in
total) videos for each actor. The face-only stimuli
showed the actors’ heads (with hands not visible),
and the hand-only stimuli showed the actors’ hands
(with faces not visible).
Figure 1. Still frames from Actor 1's video stimuli. The dynamic facial expressions began from neutral and ended at the apex of the expression. All 4 actors posed 4 negative facial expressions, sadness (A), disgust (B), fear (C), and anger (D), and 2 positive expressions, positive surprise (E) and happiness (F). For the dynamic hand gestures, all 4 actors started with their hand off-camera, then raised it, emphasised the gesture with 2 pulses, and held it still for the remainder of the video. The negative gestures were thumbs-down (G) and fist (H), and the positive gestures were A-OK (I) and thumbs-up (J). Sample congruent (K) and incongruent (L) stimuli for the Face-Hand Phase are also shown.

Participants and procedure
We aimed for 60 participants for the fully within-subject design. 62 participants were recruited from
the University of Wisconsin–Madison undergraduate
Introduction to Psychology subject pool to participate
in exchange for course credit (n_male = 25, n_female = 37).
The study was conducted with approval from the Institutional Review Board. All participants gave their
informed consent.
The design included a Baseline Trial phase followed
by a Face-Hand Trial phase. On all trials, participants
watched a dynamic hand, face, or face-hand combination video and were instructed to use
a keypress to categorise the target stimulus as positive
or negative. Participants pressed the “f” and “j” keys to
indicate “positive” or “negative”, and the key assignment was counterbalanced across participants. In
the Baseline Trial phase, participants categorised
stimuli that contained facial expressions only and
COGNITION AND EMOTION
stimuli that contained hand gestures only, presented
in randomised order in a single block (40 trials in total).
After the Baseline Trial phase, participants completed 6 blocks consisting of 32 Face-Hand trials
each (192 trials in total).2 Before each block participants were instructed to attend to and categorise
only the faces or only the hands in that block, ignoring
the other signal. Instructions alternated across blocks.
Each Face-Hand video appeared twice during this
phase, once per attention instruction condition. Participants were told to “respond as fast as you can
while still being accurate.”
Results
We hypothesised that participants instructed to categorise the valence of either a hand gesture or facial
expression would be unable to ignore the other
channel, resulting in slower reaction times (RTs)
when the valences of the hand and face were incongruent (e.g. a positive gesture with a negative facial
expression) compared to congruent. Because we
expected participants to have near-ceiling performance on what is a straightforward task – categorising a clear and recognisable hand or face signal as negative or positive – we did not anticipate effects on accuracy, although we did analyse the accuracy data. An R Markdown document (RStudio, Inc., 2014) containing complete code and output from our data processing and analyses for both studies is available online
(https://osf.io/9xs48/).
In all our analyses we used linear mixed-effects
models with the lme4 package in R (Bates, Mächler,
Bolker, & Walker, 2015). Degrees of freedom and p
values were estimated using Satterthwaite approximations with the lmerTest package (Kuznetsova,
Brockhoff, & Christensen, 2017). We adhered to the
“keep it maximal” approach to random effects structures and therefore included by-item (stimulus) and
by-subject random intercepts and slopes for all predictors that were repeated within-item or within-subject
(Barr, Levy, Scheepers, & Tily, 2013). In the case of convergence failures, we removed the random slopes for
main effects, leaving the random slopes for interaction
terms, as recommended by Brauer and Curtin (2018).
In all models, we allowed participant gender to interact with the experimental variables of interest, since
previous work documents gender effects on emotion
perception (e.g. Donges, Kersting, & Suslow, 2012;
Korb et al., 2015; Montagne, Kessels, Frigerio, de
Haan, & Perrett, 2005). Before conducting our analyses,
we removed trials on which participants’ RTs were 3
standard deviations above the mean (1.56% of trials)
or faster than 150 ms (0.44% of trials).
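For concreteness, this exclusion rule can be sketched in R as follows. This is a minimal sketch under one simple reading of the rule (a single grand mean and standard deviation); the data frame `d` and its columns are placeholder names, not the authors' actual processing script, which is available in the online R Markdown document.

    # sketch: drop trials with RTs more than 3 SDs above the mean or faster than 150 ms
    # `d` is a placeholder trial-level data frame with RTs in seconds in column `rt`
    upper <- mean(d$rt, na.rm = TRUE) + 3 * sd(d$rt, na.rm = TRUE)
    d_trimmed <- subset(d, rt >= 0.150 & rt <= upper)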
Accuracy
Descriptive statistics for participant accuracy in the
Baseline Phase suggest that the valence (positive
versus negative) of the facial expressions and hand
gestures was unambiguous. Participants made very
few errors in categorising the valence of the facial
expressions (proportion correct M = 0.99, SD = 0.11)
and the hand gestures (proportion correct M = 0.94,
SD = 0.24). We did not have any predictions regarding
the Baseline Phase, which was included to adjust the
Face-Hand Phase RTs.
To examine whether participants were more accurate in the Face-Hand Phase when the valence of the
face and hand matched, we calculated each participants’ proportion correct for congruent and incongruent face-attend and gesture-attend trials (resulting in
4 accuracy scores per participant). We regressed
these Accuracy scores on unit-weighted, centred variables for Attention Instructions (attend to the hand
gesture = −.5, attend to the facial expression = .5),
Face-Hand Congruency (incongruent stimuli = −.5,
congruent stimuli = .5), Participant Gender (male =
−.5, female = .5), the three-way interaction between
the variables, the three two-way interactions, and all
main effects. We initially included all relevant
random effects: the by-participant random intercept
and random slopes within participants for the interaction between Attention Instructions and Face-Hand Congruency and the two main effects. By-item
random effects were not applicable since accuracy
scores are aggregated for each participant in each
cell of the experimental design, and not calculated
on an item-by-item basis.
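Spelled out in lme4/lmerTest syntax, the accuracy model described above corresponds roughly to the call below. This is a sketch only: the data frame `acc` and its columns (one row per participant x Attention Instructions x Congruency cell, predictors centred at -.5/+.5) are placeholder names rather than the authors' actual script.

    library(lmerTest)  # loads lme4's lmer() and adds Satterthwaite df and p values
    m_acc <- lmer(
      accuracy ~ attention * congruency * gender +
        (1 + attention * congruency | participant),  # by-participant intercept and slopes
      data = acc
    )
    summary(m_acc)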
Statistically controlling for all other variables, accuracy was significantly greater on face-attend trials
(proportion correct M = 0.98, SD = 0.15) compared
to gesture-attend trials (proportion correct M = 0.93,
SD = 0.26), b = 0.051, t(180.00) = 7.093, p < .001. Supporting Hypothesis 1, the main effect of Face-Hand
Congruency was also significant, with greater accuracy
on congruent (M = 0.96, SD = 0.20) compared to
incongruent (M = 0.94, SD = 0.23) trials, b = .015,
t(180.000) = 2.139, p = .034.3 Neither Participant
Gender nor any of the interaction terms were
significant.
In post hoc Supplementary Analyses, we checked
for emotion and gesture-specific effects and found
that the Face-Hand Congruency effect on accuracy
was stronger for trials involving positive facial
expressions and negative gestures, compared to
trials involving negative facial expressions and positive gestures. Given the small number of different
actors we included in our stimuli, we also checked
for moderating effects of Actor and found, among all
possible comparisons between actors, that only
Actors 2 and 4 differed in the size of their Congruency
effect, with a stronger effect of Face-Hand Congruency
for Actor 4 (see Supplementary Analyses, https://osf.
io/9xs48/).
Reaction times
In the RT analyses, we included only correct-response
trials. Since the stimuli were dynamic, some facial
expressions and gestures have slower onset times
than others, resulting in stimulus effects on RTs. To
remove some of this variance, we calculated each participant’s average RT for each hand or face stimulus
during the Baseline Phase. We then adjusted the RTs
on the Face-Hand Phase by subtracting the average
Baseline Trial RTs for the relevant participant and
stimulus. These adjusted RTs now indicate how much faster (negative value) or slower (positive value) a participant was in categorising a given gesture or facial expression when accompanied by a signal from the other channel, compared to when they categorised the gesture or expression in isolation. Note that the by-item random effects account for the variability in RTs across face-hand stimuli (some face-hand pairs may tend to be processed more quickly than others), whereas subtracting the Baseline RTs removes some unexplained variance due to the properties of the isolated target face or hand stimulus. These two steps, subtracting Baseline Trial RTs from Face-Hand RTs and including by-item random effects, account for unique sources of otherwise unexplained variance in our analyses.

Figure 2. Study 1 model estimates from the linear mixed-effects model in which adjusted RTs in the Face-Gesture Phase were regressed on the interaction between Attention Instructions, Face-Gesture Congruency, and Participant Gender. RTs were adjusted for participants' Baseline Phase RTs for each stimulus. Points are individual participants' average adjusted RTs. Participants were faster when categorising the valence of the face compared to the hand, and they were also faster when the valences of the face and hand were congruent compared to incongruent. The significant 3-way interaction indicates that female participants were particularly disrupted by incongruent facial expressions when categorising the valence of a hand gesture.
We regressed the adjusted Face-Hand Trial RTs (in
seconds) for correct-response trials on unit-weighted,
centred variables for Attention Instructions (attend
to the hand gesture = −.5, attend to the facial
expression = .5), Face-Hand Congruency (incongruent
stimuli = −.5, congruent stimuli = .5), Participant
Gender (male = −.5, female = .5), the three-way interaction between the variables, the three two-way interactions, and all main effects. We included all relevant
random effects: the by-participant random intercept,
the by-participant random slopes for the interaction
between Attention Instructions and Face-Hand Congruency and the two main effects, the by-item
random intercept, and the by-item random slopes
for the interaction between Attention Instructions
and Participant Gender and the two main effects.
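In lme4 syntax, this specification corresponds roughly to the call below (a sketch under the same placeholder naming assumptions as the earlier snippets; `rt_adj` is the baseline-adjusted RT in seconds and `correct` flags correct responses).

    library(lmerTest)
    m_rt <- lmer(
      rt_adj ~ attention * congruency * gender +
        (1 + attention * congruency | participant) +  # by-participant random effects
        (1 + attention * gender | stimulus),          # by-item (stimulus) random effects
      data = subset(facehand, correct == 1)
    )
    summary(m_rt)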
The intercept of the model revealed that participants were, on average, 258 ms slower in the Face-Hand Phase than in the Baseline Phase, controlling
for all predictor variables, b = 0.258, t(73.750) =
10.578, p < .001. The main effect of Attention Instructions was significant, controlling for all other variables,
such that participants responded more quickly on
face-attend trials (adjusted RT M = 136 ms, SD =
378 ms) compared to gesture-attend trials (adjusted
RT M = 383 ms, SD = 323 ms), b = –0.241, t(118.72) = –
9.942, p < .001. The main effect of Face-Hand Congruency was also significant, controlling for all other
variables, such that participants responded more
quickly when the face and hand valences were congruent (adjusted RT M = 226 ms, SD = 365 ms) compared to incongruent (adjusted RT M = 285 ms, SD =
380 ms), b = –0.059, t(106.74) = –3.354, p = 0.001.
Finally, the 3-way interaction between Attention
Instructions, Face-Hand Congruency, and Participant
Gender was significant, such that incongruent facial
expressions slowed the valence judgments of hand
gestures more for female participants than for males,
b = 0.073, t(89.03) = 2.210, p = 0.030 (see Figure 2).
As for accuracy, we conducted several post hoc
analyses to check for potential moderating effects of
Facial Expression and Gesture categories, as well as
actor identity, on RTs (see Supplementary Analyses,
https://osf.io/9xs48/). To summarise the additional
analyses, the effect of incongruent Face-Hand pairs
on RTs was stronger for some negative facial
expressions compared to positive facial expressions.
Specifically, the Face-Hand Congruency effect was
stronger for anger, fearful surprise, and sadness compared to positive surprise, and stronger for anger compared to happiness. The only moderating effect of
Gesture that emerged indicated that the Congruency
effect was weaker for the A-OK gesture compared to
the Thumbs-Down gesture. No moderating effects of
actor identity on the Face-Hand Congruency effect
emerged. We also examined whether the Congruency
effect might be due, at least in part, to "carry-over"
effects of task demands from one block to the next
(e.g. the possibility that incongruent gestures interfered with face categorisation because in the prior
block, participants were instructed to attend to the
gesture). There was no support for the existence of
such carry-over effects.
Study 1 summary
Despite the near-ceiling accuracy across trials, Study 1
participants were more accurate and faster on the
face-attend (compared to gesture-attend) trials. Supporting Hypothesis 1, participants were more accurate
and faster on trials on which the face and hand signals
were congruent compared to incongruent. Finally, the
unexpected significant 3-way interaction in the RT
model provides suggestive evidence that female participants are less able to ignore task-irrelevant facial
expressions than are male participants.
Study 2
Study 1 provided evidence in favour of Hypothesis 1,
that individuals are slower and less accurate in categorising the valence of faces and hands accompanied
by incongruent compared to congruent hands or
faces, respectively. Study 2 sought to replicate this
finding and added a between-subjects manipulation
of facial muscle mobility to test our other two hypotheses. We predicted that individuals would be slower
and less accurate in judging the valence of facial
expressions when their facial mobility is restricted
(Hypothesis 2), and that incongruent hand gestures
would have a greater influence on judgments of
facial expressions when sensorimotor simulation is
disrupted (Hypothesis 3).
Method
In the following section, we report how we determined our sample size, all data exclusions, all manipulations, and all measures. The experiment and stimuli
files are available online (https://osf.io/9xs48/). Study
2’s procedure was the same as Study 1, with the
addition of a between-subjects manipulation of
facial mobility, which allowed us to test Hypotheses
2 and 3.
Participants and procedure
We aimed for 60 participants in each of the two Facial
Mobility conditions, or 120 participants in total. We
ultimately recruited 128 participants from the University of Wisconsin–Madison undergraduate Introduction to Psychology subject pool to participate in
exchange for course credit (n_female = 88, n_male = 40).
One male participant did not complete the study
and was excluded from analyses. The study was conducted with approval of the Institutional Review
Board.
After participants heard instructions and gave their
informed consent, they were randomly assigned to
one of two Facial Mobility conditions: the face tape
condition or the control condition. Participants then
completed the 40 trials in the Baseline Trial phase,
followed by the 192 trials of the Face-Hand Trial
phase. See Study 1 Method for details.
Facial mobility manipulation
Participants were randomly assigned to a face tape
condition or control condition. The present facial
mobility manipulation is identical to a procedure
developed and employed in previous research
(Carpenter & Niedenthal, under review). In the face
tape condition, three kinds of stiff and inflexible
medical tape were applied across the width of participants' foreheads in three layers: the first layer was 1/2 inch wide, extending vertically from the bridge of the nose to the hairline; the second layer was 2 inches wide, extending horizontally across the forehead from one temple to the other; the third layer consisted of two 1-inch-wide pieces of tape
applied on top of the second layer, to maximally
disrupt upper facial mobility. In order to disrupt mobility of the lower part of their faces, participants in the
face tape condition received boil-and-bite mouth
guards, a manipulation shown to be effective in previous research (Rychlowska et al., 2014). Participants
prepared their own, single-use mouth guards by first
submerging their new mouth guard in boiling water
for 7 s. Then, participants fitted the now soft mouth
guard to their teeth for 10 s in order to conform to
the shape of their mouths. Finally, the mouth guard
was placed in cold water for 20 s to ensure that it
maintained the shape of the participant’s mouth.
In the control condition, participants also had tape
applied to their faces, but in a location and to an extent that minimally affected facial mobility. Specifically, small 1/2-inch-wide pieces of tape were placed on the participants' temples. By also having participants in the
control condition receive some form of facial taping,
we experimentally controlled for the potential confound of receiving face tape. This allows us to draw
stronger inferences regarding the experimental
effect of facial mobility restriction. Participants in the
control condition also made a boil-and-bite mouth
guard but were told to place it on a paper towel
beside the experimental computer for later use. In
fact, participants in the control condition never wore
their mouth guards during the experiment.
Results
We tested 3 hypotheses in Study 2: (1) participants are
slower when the valences of the face and hand signals
are incongruent compared to congruent (replicating
Study 1), (2) participants whose facial movements
are disrupted by the face tape are slower on face-attend trials compared to control condition participants, conceptually replicating past work (Ipser &
Cook, 2016; Wood, Lupyan, et al., 2016), and (3) the
effect of Face-Hand Congruency depends on facial
mobility, such that incongruent hand gestures are
even more disruptive for participants whose facial
mobility is restricted than for participants whose
facial mobility is not restricted. As in Study 1, we did
not have specific predictions about effects on categorisation accuracy, since we expected participants to
perform near ceiling.
Besides the addition of another moderator – Facial
Mobility – the models we report for Study 2 are identical to those used in Study 1. Here we also included
analyses of the Baseline Phase, since these trials now
pertain to Hypothesis 2. The complete code and
output for the following analyses can be found
online in the same R Markdown file as the Study 1 analyses (https://osf.io/9xs48/). As in Study 1, before conducting our analyses, we removed trials on which
participants had RTs that were 3 standard deviations
above the mean (2.23% of trials) or faster than
150 ms (.79% of trials).
Figure 3. Study 2 Baseline Phase. Model estimates from a linear mixed-effects model in which RTs in the Baseline trials were regressed on the interaction between Stimulus Type (facial expression vs. gesture), Participant Gender, and Facial Mobility. Points are individual participants' average RTs. The 3-way interaction term is significant, suggesting male participants were slower to categorise facial expressions compared to gestures when their facial mobility was disrupted, but this was not the case for female participants.

Accuracy in the baseline phase
Using a linear mixed-effects model, we first regressed
participants’ proportion correct scores for the Baseline
Phase on the 3-way interaction between Stimulus
Type (hand gesture = −.5, facial expression = .5),
Facial Mobility (control condition = −.5, face tape condition = .5), and Participant Gender (male = −.5,
female = .5), the three 2-way interactions, and all
main effects. We included the by-participant random
intercept and the random slope within participants
for Stimulus Type. The only variable to significantly
predict accuracy in the Baseline Phase was Stimulus
Type, such that participants were more accurate on
facial expression trials (M = 0.97, SD = 0.11) than on
hand gesture trials (M = 0.93, SD = 0.15), b = 0.042,
t(368.30) = 3.786, p < 0.001.
Reaction times in the baseline phase
We next ran an identical model with RT (in seconds)
on correct-response trials as the outcome variable.
The only difference in the specified model was the
inclusion of by-stimulus random effects: the
random intercept and random slopes for the 2-way
interaction between Facial Mobility and Participant Gender, and the two main effects. Stimulus
Type was again a significant predictor, with slower RTs on facial expression trials (M = 1213 ms, SD = 471 ms) than on gesture trials (M = 1144 ms, SD = 420 ms), b = 0.086, t(46.46) = 2.045, p = 0.047. The 3-way interaction between Stimulus Type, Facial Mobility, and Participant Gender was significant, b = –
0.169, t(77.63) = –2.411, p = 0.018 (see Figure 3). The
key test of Hypothesis 2, if it were not moderated
by Participant Gender, is the Stimulus Type * Facial
Mobility interaction term, which was not significant
here, b = 0.042, SE = 0.035, t(114.30) = 1.21, p = 0.229.
This null effect is unlikely to be due to a lack of
power. A power analysis using the simr package for
R (Green & MacLeod, 2015) suggested that we had
acceptable power (94.80%) to detect a Stimulus
Type * Facial Mobility interaction term, averaged
over males and females, of equivalent effect size to
the male-specific interaction effect (see below; b =
0.127). Given the significant 3-way interaction, we
next consider the Stimulus Type * Facial Mobility
interaction for males and females separately.
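A simr-based check of this kind can be sketched as follows. The model object, term name, effect size, and number of simulations are illustrative placeholders standing in for the fitted baseline RT model and the interaction of interest, not the authors' actual code.

    library(simr)
    # set the fixed effect of interest to the target size (here, b = 0.127)
    fixef(m_baseline_rt)["stimulus_type:facial_mobility"] <- 0.127
    # simulate new data from the model and refit to estimate power for that term
    power_check <- powerSim(m_baseline_rt,
                            test = fixed("stimulus_type:facial_mobility"),
                            nsim = 1000)
    print(power_check)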
To unpack the 3-way interaction and test the
hypothesis that the face tape specifically slowed
RTs on facial expression trials (Hypothesis 2), we
recentered the Stimulus Type variable (facial
expression = 0, gesture = 1) so that we could interpret the effects of Participant Gender and Facial
Mobility for facial expression trials specifically. In
this recentered model, the interaction between Participant Gender and Facial Mobility was significant
for facial expression trials, b = –0.296, t(94.45) =
–2.219, p = 0.029. The difference in RTs between the
face tape and control conditions for males (M =
1291 ms, SD = 516 ms vs. M = 1184 ms, SD = 436 ms)
was greater than the difference between the face
tape and control conditions for females (M =
1160 ms, SD = 409 ms vs. M = 1171 ms, SD = 475 ms). Recentering Stimulus Type over gesture
(gesture = 0, facial expression = 1) revealed that the
Gender * Facial Mobility interaction was not significant for gesture trials, meaning males’ and females’
RTs for gesture stimuli were not differentially
affected by the face tape condition, b = –0.127,
SE = 0.105, t(93.23) = –1.213, p = .228. Finally, we
recentered the Participant Gender variable over
males (males = 0, females = 1) and then over
females (females = 0, males = 1) to examine the 2-way interaction between Facial Mobility * Stimulus
Type for each gender. For male participants, the
face tape manipulation slowed RTs on facial
expression trials significantly more than on gesture
trials, b = –0.127, SE = 0.057, t(114.73) = –2.206, p =
0.029. The Facial Mobility * Stimulus Type interaction
was not significant for female participants, however,
b = 0.043, SE = 0.038, t(113.34) = 1.114, p = .268. To
summarise, in the Baseline Phase, the face tape
manipulation slowed RTs on facial expression trials
specifically for male participants.
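The recentring logic can be sketched as follows: the same model is refit with the predictor shifted so that its zero point sits at the level of interest, which makes the lower-order coefficients interpretable as simple effects at that level. Variable and column names are placeholders, as in the earlier snippets.

    library(lmerTest)
    # original coding: gesture = -.5, facial expression = .5
    # recentre so that facial expression = 0 and gesture = 1
    d$stim_face_ref <- 0.5 - d$stimulus_type
    m_face_ref <- lmer(
      rt ~ stim_face_ref * mobility * gender +
        (1 + stim_face_ref | participant) +
        (1 + mobility * gender | stimulus),
      data = d
    )
    # in this refit, the mobility:gender coefficient is the simple interaction
    # on facial expression trials
    summary(m_face_ref)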
Accuracy in the face-hand phase
Since the Face-Hand Phase included an additional
within-subjects variable – Face-Hand Congruency –
we ran the same model as for Baseline Trial accuracy
scores, but with the addition of Face-Hand Congruency as a moderator. Thus, the highest-order
term was a 4-way interaction between Face-Hand
Congruency, Attention Instructions, Facial Mobility,
and Participant Gender, and there were also by-participant random slopes for the 2-way interaction between
Attention Instructions and Face-Hand Congruency
and the two main effect terms.
Once again, participants were significantly more
accurate on face-attend trials (M = 0.96, SD = 0.06)
than on gesture-attend trials (M = 0.91, SD = 0.11),
b = 0.050, t(124.30) = 6.794, p < .001. In support of
Hypothesis 1, participants were significantly more
accurate when the valences of the face and hand
were congruent (M = 0.95, SD = 0.09) than when
they were incongruent (M = 0.92, SD = 0.09), b =
0.028, t(131.24) = 4.721, p < .001. Unexpectedly, the
Facial Mobility × Attention Instructions × Participant
Gender three-way interaction term was also significant, b = −.059, t(124.30) = –1.986, p = .049. To
unpack the three-way interaction, we recentered Participant Gender to look at the two-way interaction
between Facial Mobility and Attention Instructions
specifically for males, which was just above the
threshold for conventional statistical significance,
b = .047, t(124.47) = 1.928, p = .056. This interaction
can be interpreted as indicating that male participants were more accurate on face-attend than
gesture-attend trials, and this difference was even
stronger when they were in the face tape condition.
Since this two-way interaction was not moderated by
Face-Hand Congruency, was unexpected, and was
not below the threshold for significance, we do not
draw conclusions from it, but simply note it as
something to follow up on in future work. No other
variables significantly predicted accuracy in the Face-Hand Phase.
Reaction times in the face-hand phase
Recall that in Study 1 we adjusted RTs in the Face-Hand Phase using participants' RTs from the Baseline
Phase trials. In Study 2, adjusting the Face-Hand
Phase RTs using participants’ own Baseline Phase
RTs would essentially subtract out any between-participant differences due to the Facial Mobility
manipulation. For instance, if a participant were
slower due to the face tape, they would be slower
on both Baseline and Face-Hand Trials and subtracting the former from the latter would remove the
effect of face tape on their Face-Hand Trial RTs. We
therefore computed average RTs on correct trials
for each of the 96 stimuli using Study 1 participants’
Baseline Phase RTs, and then subtracted these by-stimulus RT averages from the Study 2 Face-Hand
Phase correct-trial RTs. Although an imperfect solution, this reduced some variance in RTs due to
differences in stimulus onset speed.
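Sketched in base R, this adjustment reads roughly as follows; `study1_baseline` and `study2_facehand` are placeholder data frames with one row per trial, and the column names are illustrative.

    # sketch: per-stimulus mean Baseline Phase RTs from Study 1 ...
    s1_means <- aggregate(rt ~ stimulus, data = study1_baseline, FUN = mean)
    names(s1_means)[2] <- "s1_baseline_rt"
    # ... subtracted from Study 2 Face-Hand Phase correct-trial RTs
    study2_facehand <- merge(study2_facehand, s1_means, by = "stimulus")
    study2_facehand$rt_adj <- study2_facehand$rt - study2_facehand$s1_baseline_rt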
We initially regressed the adjusted RTs on the
same 4-way interaction as in the Baseline Phase
analysis, with by-participant random intercept and
random slopes for the interaction between Attention
Instructions and Face-Hand Congruency, and the two
main effects. We also included the by-item random
intercept and random slopes for the 3-way interaction between Attention Instructions, Participant
Gender, and Facial Mobility, and all main effects.
As before, participants were significantly faster on
face-attend (M = 237 ms, SD = 433 ms) compared to
gesture-attend trials (M = 473 ms, SD = 357 ms),
b = –0.229, t(122.50) = –19.280, p < .001. Replicating
Study 1’s support of Hypothesis 1, they were also significantly faster on trials on which the valences of the
face and hand were congruent (M = 325 ms, SD =
406 ms) compared to incongruent (M = 379 ms,
SD = 422 ms), b = –0.051, t(95.90) = –2.309, p = 0.023.
The 2-way interaction between Attention Instructions
and Face-Hand Congruency was also significant, with
the Congruency effect being somewhat weaker on
face-attend trials (i.e. incongruent gestures did not
slow RTs for facial expressions as much as incongruent facial expressions slowed RTs for gestures), b =
0.028, t(752.90) = 2.903, p = 0.003. No other main
effects or interaction terms were significant;
thus, we did not find support for the hypothesis
that the face tape manipulation would exaggerate
the face-hand congruency effect (Hypothesis 3; see
Figure 4).
Study 2 summary
We replicated the finding from Study 1 that participants were less accurate and slower to correctly categorise the valence of a facial expression or hand
gesture when it was accompanied by an incongruent
signal from the other channel (Hypothesis 1). We
found mixed support for the prediction that, relative
to controls, participants would be slower and less
accurate in categorising facial expressions when
their facial movements were disrupted by the face
tape (Hypothesis 2). On Baseline Phase trials, on
which participants saw facial expressions or hand gestures in isolation, male participants’ performance was
slowed significantly more by the face tape manipulation compared to females, whose performance
appears to have been unaffected by the face tape.
However, this gender-specific effect was not present
on Face-Hand Phase trials. Thus, we partially replicated
previous work showing that disrupted facial mobility results in
poorer facial expression perception (Wood, Lupyan,
et al., 2016). Finally, we did not find evidence supporting the hypothesis that the face-hand congruency
effect would be exaggerated when participants’
facial mobility was disrupted (Hypothesis 3).
Discussion
In two studies, we asked how participants perceive
actors producing dynamic facial expressions (sad,
angry, disgusted, afraid, happy, positive surprise)
along with congruent or incongruent conventional
hand gestures (thumbs-up, thumbs-down, A-OK, and
a fist). When instructed to categorise the valence of
either the face or hand and ignore the other expressive channel, participants’ judgment accuracy and
reaction time were nonetheless influenced by the
task-irrelevant channel. Such effects had previously
been observed in face-body combinations involving
functional, as opposed to symbolic and conventional,
body postures (e.g. cowering in fear; Aviezer et al.,
2012b). The current work suggests that, like body postures, learned symbolic hand gestures interfere with
facial expression processing, and vice-versa. The
current work also increases the ecological validity of
prior work on face-body expression processing by
using video stimuli rather than still photos (cf. Martinez et al., 2016).
Figure 4. Study 2 Face-Hand Phase. Model estimates from a linear mixed-effects model in which Face-Hand Phase RTs (adjusted using Study 1
Baseline Phase RTs) were regressed on the 4-way interaction between Attention Instructions (face-attend vs. gesture-attend), Face-Hand Congruency (incongruent vs. congruent), Participant Gender, and Face Tape Manipulation (face tape vs. control). Points represent individual participants’ average adjusted RTs, which were negative in the event that a participant’s responses were faster in the Face-Hand Phase than the Baseline
Phase (presumably due to practice effects).
The observed face-hand congruency effect, which
was replicated in the second study, has implications
for how observers process a whole expressive
person. Since conventional hand gestures are presumably more volitional and controllable than facial, vocal,
and bodily expressions of emotion, they may more
regularly contradict the other, otherwise cohesive,
expressive channels in daily life. If your friend
notices you seem sad, you may not be able to suppress the sadness conveyed by your face or voice, so
you may give them a thumbs-up to indicate that
they do not need to take care of you. The current
work suggests your friend will not ignore or prioritise
one of the communicative channels but will instead
combine the channels to infer your nuanced
affective and intentional state.
Study 2 was identical to Study 1, with the addition
of a facial mobility manipulation: the facial movements of half of the participants were restricted with
the application of medical face tape and a mouth guard, which previous work has used successfully to
reduce facial movements and produce disruptive
somatosensory feedback (Carpenter & Niedenthal,
under review; Rychlowska et al., 2014). This manipulation allowed us to replicate previous studies in
which this and similar manipulations reduced accuracy and slowed response times in facial expression perception tasks (e.g. Wood, Lupyan, et al., 2016). Here we
partially replicated the effect, specifically in male but
not female participants, and the effect appeared sensitive to repeated exposure to the stimuli, as it was
statistically significant in the Baseline Phase but not
the Face-Hand Trials. The exposure effect is not
surprising, as the task (judge whether an expressive
face is positive or negative) is already easy to begin
with, and should only get easier with practice, reducing the need to recruit iterative, cross-modal perceptual processes like sensorimotor simulation.
The interaction between the facial mobility
manipulation and participant gender, however, was
unexpected. A cautious, post hoc interpretation of
the current finding is that males, who generally
perform worse on emotion recognition tasks (Donges
et al., 2012; Korb et al., 2015; Montagne et al., 2005),
experience cumulatively less emotion-related socialisation and their emotion recognition is therefore
more vulnerable to disturbances, such as disruptions
to the sensorimotor simulation process. Indeed, previous work has shown that extended pacifier use –
which inhibits facial mobility – negatively predicts
future emotional intelligence in boys but not girls
(Niedenthal et al., 2012). The present male-specific
effect should be interpreted with extreme caution
and will be examined in future work.
Finally, we did not find support for the prediction
that incongruent hand gestures interfere with facial
expression judgments even more strongly when
facial mobility is restricted. While the context of the
expresser’s body and the perceiver’s sensorimotor representation contribute separately to facial expression
processing, perceivers do not increase reliance on the
former when the latter is disrupted. Whether this null
result is due to the generally weak effect of the facial
mobility manipulation or to the hypothesis being incorrect is unclear. We have suggested previously that sensorimotor simulation will contribute more to emotion
perception when facial expressions are subtle or
difficult to interpret (Wood, Rychlowska, et al., 2016).
Future work should therefore employ subtler facial displays to create a more difficult perceptual task that is
more vulnerable to disruption.
Future directions
The current work involved novel, information-rich
videos of actors producing all possible combinations
of hand gestures and facial expressions. These stimuli
could be used to address many research questions
besides the ones we addressed here. For instance,
how does ambiguous social feedback (a sad thumbs-up) influence behaviour in a decision-making context?
We have made the stimuli openly available (https://
osf.io/9xs48/) and we hope that other researchers will
use them to address other research questions.
In Study 2, we inhibited people’s ability to move
their facial muscles, and we examined how this
influenced their ability to process facial expressions.
A parallel question is whether restricting hand movements would restrict participants’ ability to process
gestures. We did not include a manipulation to restrict
hand movements, because participants used their
hands to respond with keypresses in the current
work. A future study could use a vocal response
format and could test whether restricting hand movements influences processing of hand gestures that
convey valence, as in this study, or gestures that
convey semantic information.
Male and female participants in Study 1 differed in
how strongly incongruent facial expressions influenced
their processing of gestures. This leads us to ask
whether other individual difference factors might
also matter. Some evidence suggests that people
high in social anxiety fixate more on the hands of
expressive bodies than people low on social anxiety,
perhaps as a way of avoiding eye contact but nevertheless extracting information from an expressive
body (Kret, Stekelenburg, de Gelder, & Roelofs,
2017). For socially anxious people, hand gestures
may be particularly important sources of social feedback, an idea that could be explored using the
current paradigm.
Future work could also study users of signed
languages, such as ASL, in which facial expressions
can modulate the information conveyed by hands.
Whereas participants in this study showed difficulty
integrating “incongruent” signals, fluent signers use
the face and body as interactive channels of communication (Wilbur, 2000). For instance, an inner
eyebrow raise – usually associated with negative
emotions like sadness – acts as an adverb that intensifies the meaning of a sign, even if the sign’s
meaning itself is not negatively valenced. Future
work should explore whether ASL users process the
meanings of seemingly contradictory hand and face
signals differently than non-ASL users. Future work
should also explore whether valence-incongruent
face and body signals sometimes combine to
produce a new emergent signal in non-sign communication, as they can in ASL – for instance, a sad facial
expression with a thumbs-up might convey a
nuanced message not communicable with just a
single expressive channel.
In conclusion, the present work explores the complementary questions of how perceivers extract
meaning when faces and bodies communicate divergent messages, and whether the process of meaning
extraction is affected when simulation processes are disrupted. Although research suggests that communicative
information from the face and body is processed holistically, only limited research in facial expression perception has considered the effect of bodily signals that are
inarguably learned (i.e. conventional gestures). Extending previous findings on combined face-body signals,
the present work suggests the affective meaning of
conventional gestures is processed holistically along
with information from the face.
Notes
1. What constitutes a “congruent” face-hand pairing depends
on the salient feature dimension; in the current study participants focused on the valence of the facial expressions
and hand gestures. The facial expressions and gestures
might also be re-categorised according to other dimensions, such as approach-avoidance, resulting in a different set of face-hand stimulus pairs being considered "congruent"
vs. “incongruent.” We operationalised congruency by
valence, rather than by specific emotions (e.g. angry faces
and angry gestures), because it is unclear what conventional gestures, if any, would convey the same specific
meanings as discrete facial expressions.
2. A coding error that was not discovered until both studies
were completed resulted in all participants repeating a
block of gesture-attend trials at the end of the session.
Since this occurred at the end of the session, we simply
excluded these redundant trials from our analyses. Including
them in the analyses did not change any of our conclusions.
3. The means and standard deviations reported are averaged across participants and are therefore not sensitive
to within-subject changes in reaction times. This is why
the effect can be statistically significant despite the
means being so close together.
Acknowledgments
We thank the individuals who participated in this study, and we
thank Magdalena Rychlowska, Nolan Lendved, Leah Schultz,
Nathanael Smith, Crystal Hanson, Olivia Zhao, Emma Phillips,
Holden Wegner, Alicia Waletzki, Mathias Hibbard, and Jay Graiziger for their help with stimuli creation and validation and data collection. We also thank John Lendved for filming and editing the
video stimuli, and Mark Koranda for his useful insights on ASL.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
A.W. and J.D.M. were supported by NIH Emotion Research Training Grant (T32MH018931-24). Further support for this research
was provided by the Office of the Vice Chancellor for Research
and Graduate Education at the University of Wisconsin–
Madison with funding from the Wisconsin Alumni Research
Foundation; National Institute of Mental Health.
ORCID
Adrienne Wood
http://orcid.org/0000-0003-4773-4493
References
Archer, D. (1997). Unspoken diversity: Cultural differences in gestures. Qualitative Sociology, 20(1), 79–105. doi:10.1023/
A:1024716331692
Atkinson, A. P., Tunstall, M. L., & Dittrich, W. H. (2007). Evidence for
distinct contributions of form and motion information to the
recognition of emotions from body gestures. Cognition, 104
(1), 59–72.
Aviezer, H., Trope, Y., & Todorov, A. (2012a). Body cues, not facial
expressions, discriminate between intense positive and negative emotions. Science, 338(6111), 1225–1229.
Aviezer, H., Trope, Y., & Todorov, A. (2012b). Holistic person processing: Faces with bodies tell the whole story. Journal of
Personality and Social Psychology, 103(1), 20–37.
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random
effects structure for confirmatory hypothesis testing: Keep it
maximal. Journal of Memory and Language, 68(3), 255–278.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear
mixed-effects models using lme4. Journal of Statistical
Software, 67(1), 1–48. doi:10.18637/jss.v067.i01
Blakemore, S.-J., & Decety, J. (2001). From the perception of
action to the understanding of intention. Nature Reviews
Neuroscience, 2(8), 561–567. doi:10.1038/35086023
Brauer, M., & Curtin, J. J. (2018). Linear mixed-effects models and
the analysis of nonindependent data: A unified framework to
analyze categorical and continuous independent variables
that vary within-subjects and/or within-items. Psychological
Methods, 23(3), 389–411. doi:10.1037/met0000159
Burgoon, J. K., Buller, D. B., Hale, J. L., & de Turck, M. (1984).
Relational messages associated with nonverbal behaviors.
Human Communication Research, 10(3), 351–378. doi:10.
1111/j.1468-2958.1984.tb00023.x
Carpenter, S. M., & Niedenthal, P. M. (under review). Inhibiting
facial action increases risk taking.
De Gelder, B. (2006). Towards the neurobiology of emotional
body language. Nature Reviews Neuroscience, 7(3), 242–249.
de Gelder, B., de Borst, A. W., & Watson, R. (2014). The perception
of emotion in body expressions. Wiley Interdisciplinary Reviews:
Cognitive Science, 6(2), 149–158.
De Gelder, B., & Hadjikhani, N. (2006). Non-conscious recognition
of emotional body language. Neuroreport, 17(6), 583–586.
de Gelder, B., & Hortensius, R. (2014). The many faces of the
emotional body. In J. Decety, & Y. Christen (Eds.), New frontiers
in social neuroscience (pp. 153–164). Cham: Springer
International Publishing. doi:10.1007/978-3-319-02904-7_9
Donges, U.-S., Kersting, A., & Suslow, T. (2012). Women’s greater
ability to perceive happy facial emotion automatically: Gender
differences in affective priming. PLoS ONE, 7(7), e41745.
Eimer, M., & Holmes, A. (2002). An ERP study on the time course
of emotional face processing. NeuroReport, 13(4), 427–431.
Flaisch, T., Häcker, F., Renner, B., & Schupp, H. T. (2011). Emotion
and the processing of symbolic gestures: An event-related
brain potential study. Social Cognitive and Affective
Neuroscience, 6(1), 109–118. doi:10.1093/scan/nsq022
Flaisch, T., Schupp, H. T., Renner, B., & Junghöfer, M. (2009).
Neural systems of visual attention responding to emotional
gestures. NeuroImage, 45(4), 1339–1346. doi:10.1016/j.
neuroimage.2008.12.073
Green, P., & MacLeod, C. J. (2015). SIMR: An R package for power
analysis of generalized linear mixed models by simulation.
Methods in Ecology and Evolution, 7(4), 493–498. doi:10.1111/
2041-210X.12504
Ipser, A., & Cook, R. (2016). Inducing a concurrent motor load
reduces categorization precision for facial expressions.
Journal of Experimental Psychology: Human Perception and
Performance, 42(5), 706–718.
Korb, S., Malsert, J., Rochas, V., Rihs, T. A., Rieger, S. W., Schwab, S.,
… Grandjean, D. (2015). Gender differences in the neural
network of facial mimicry of smiles – an rTMS study. Cortex,
70, 101–114. doi:10.1016/j.cortex.2015.06.025
Kret, M. E., Roelofs, K., Stekelenburg, J., & de Gelder, B. (2013).
Emotional signals from faces, bodies and scenes influence
observers’ face expressions, fixations and pupil-size. Frontiers
in Human Neuroscience, 7. doi:10.3389/fnhum.2013.00810
Kret, M. E., Stekelenburg, J. J., de Gelder, B., & Roelofs, K. (2017).
From face to hand: Attentional bias towards expressive
hands in social anxiety. Biological Psychology, 122, 42–50.
doi:10.1016/j.biopsycho.2015.11.016
Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017).
lmerTest package: Tests in linear mixed effects models.
Journal of Statistical Software, 82(13), 1–26. doi:10.18637/jss.v082.i13
Martinez, L., Falvello, V. B., Aviezer, H., & Todorov, A. (2016).
Contributions of facial expressions and body language to
the rapid perception of dynamic emotions. Cognition and
Emotion, 30(5), 939–952. doi:10.1080/02699931.2015.1035229
Meeren, H. K., van Heijnsbergen, C. C., & De Gelder, B. (2005).
Rapid perceptual integration of facial expression and
emotional body language. Proceedings of the National
Academy of Sciences, 102(45), 16518–16523.
Montagne, B., Kessels, R. P. C., Frigerio, E., de Haan, E. H. F., &
Perrett, D. I. (2005). Sex differences in the perception of
affective facial expressions: Do men really lack emotional sensitivity? Cognitive Processing, 6(2), 136–141. doi:10.1007/
s10339-005-0050-6
Moratti, S., Keil, A., & Miller, G. A. (2006). Fear but not awareness
predicts enhanced sensory processing in fear conditioning.
Psychophysiology, 43(2), 216–226. doi:10.1111/j.1469-8986.
2006.00386.x
Niedenthal, P. M., Augustinova, M., Rychlowska, M., Droit-Volet, S.,
Zinner, L., Knafo, A., & Brauer, M. (2012). Negative relations
between pacifier use and emotional competence. Basic and
Applied Social Psychology, 34(5), 387–394.
Niedenthal, P. M., Mermillod, M., Maringer, M., & Hess, U. (2010).
The simulation of smiles (SIMS) model: Embodied simulation
and the meaning of facial expression. Behavioral and Brain
Sciences, 33(6), 417–433.
Perry, A., Aviezer, H., Goldstein, P., Palgi, S., Klein, E., & Shamay-Tsoory, S. G. (2013). Face or body? Oxytocin improves perception of emotions from facial expressions in incongruent
emotional body context. Psychoneuroendocrinology, 38(11),
2820–2825. doi:10.1016/j.psyneuen.2013.07.001
Poyo Solanas, M., Zhan, M., Vaessen, M., Hortensius, R., Engelen,
T., & de Gelder, B. (2018). Looking at the face and seeing
the whole body. Neural basis of combined face and body
expressions. Social Cognitive and Affective Neuroscience, 13
(1), 135–144. doi:10.1093/scan/nsx130
Redcay, E., & Carlson, T. A. (2015). Rapid neural discrimination of
communicative gestures. Social Cognitive and Affective
Neuroscience, 10(4), 545–551. doi:10.1093/scan/nsu089
RStudio, Inc. (2014). rmarkdown: R Markdown Document
Conversion, R package (Version 0.1.90). Retrieved from
github.com/rstudio/rmarkdown.
Rychlowska, M., Canadas, E., Wood, A., Krumhuber, E. G., Fischer,
A., & Niedenthal, P. M. (2014). Blocking mimicry makes true
and false smiles look the same. PLoS ONE, 9(3), e90876.
Scherer, K. R., & Scherer, U. (2011). Assessing the ability to recognize facial and vocal expressions of emotion: Construction
and validation of the emotion recognition index. Journal of
Nonverbal Behavior, 35(4), 305–326. doi:10.1007/s10919-011-0115-4
Tamietto, M., Castelli, L., Vighetti, S., Perozzo, P., Geminiani, G.,
Weiskrantz, L., & De Gelder, B. (2009). Unseen facial and
bodily expressions trigger fast emotional reactions.
Proceedings of the National Academy of Sciences, 106(42),
17661–17666.
Van den Stock, J., & de Gelder, B. (2012). Emotional information in
body and background hampers recognition memory for
faces. Neurobiology of Learning and Memory, 97(3), 321–325.
doi:10.1016/j.nlm.2012.01.007
Van Den Stock, J., & de Gelder, B. (2014). Face identity matching is
influenced by emotions conveyed by face and body. Frontiers
in Human Neuroscience, 8. doi:10.3389/fnhum.2014.00053
Wieser, M. J., Flaisch, T., & Pauli, P. (2014). Raised middle-finger:
Electrocortical correlates of social conditioning with nonverbal affective gestures. PLoS ONE, 9(7), e102937.
Wilbur, R. (2000). Phonological and prosodic layering of nonmanuals in American Sign Language. In K. Emmorey, & H. Lane
(Eds.), The signs of language revisited: An anthology to honor
Ursula Bellugi and Edward Klima (pp. 215–244). Mahwah, NJ:
Lawrence Erlbaum Associates.
Wood, A., Lupyan, G., Sherrin, S., & Niedenthal, P. (2016). Altering
sensorimotor feedback disrupts visual discrimination of facial
expressions. Psychonomic Bulletin & Review, 23(4), 1150–1156.
doi:10.3758/s13423-015-0974-5
Wood, A., Rychlowska, M., Korb, S., & Niedenthal, P. (2016).
Fashioning the face: Sensorimotor simulation contributes to
facial expression recognition. Trends in Cognitive Sciences, 20
(3), 227–240. doi:10.1016/j.tics.2015.12.010