Testing Category-Specific Spatial Frequency Adaptation

Kara B. Grubb

Honors Thesis completed in fulfillment of the requirements of the Honors Program in Psychological Sciences

Under the direction of Dr. Isabel Gauthier and Olivia Cheung, Graduate Student

Vanderbilt University

April 2009

Abstract

This study examined representational overlap between faces and scenes by means of spatial frequency adaptation. The results show that adaptation to faces and scenes in either low or high spatial frequencies affects the subsequent perception of a face hybrid or a scene hybrid (each consisting of low and high spatial frequency components) in different ways: for face hybrids, the spatial frequency of the adaptor is more important, whereas for scene hybrids, adaptation to low spatial frequency faces and to high spatial frequency scenes produces more perception of low spatial frequency scenes. This research provides insight into using spatial frequency adaptation to explore the overlap between face processing and other object processing, specifically processing for objects of expertise.

For the last few decades, face processing has been a major area of interest in psychological research. In the beginning, researchers focused on whether or not faces are processed differently than other objects and sought behavioral measures to quantify the differences; through these efforts, the face inversion effect and holistic processing for faces were discovered (Davidoff & Donnelly, 1990; Tanaka & Farah, 1993; Valentine, 1988; Yin, 1969; Young, Hellawell, & Hay, 1987; Yovel, Paller, & Levy, 2005). Further research and subsequent advances in brain imaging technology led to the discovery of the fusiform face area (FFA), an area on the fusiform gyrus that shows significantly greater activation for faces than for non-face objects such as houses, hands, and schematic faces (Kanwisher, McDermott, & Chun, 1997; Tong, Nakayama, Moscovitch, Weinrib, & Kanwisher, 2000). The mounting evidence that faces are processed differently than most other objects led researchers to hypothesize about why faces might be special in the human brain. Two main hypotheses rule the field: the domain-specific hypothesis (McKone, Kanwisher, & Duchaine, 2006; Kanwisher, 2000) and the expertise hypothesis (Diamond & Carey, 1986; Gauthier & Logothetis, 2000; Gauthier, Skudlarski, Gore, & Anderson, 2000). Because of the disagreement between proponents of each hypothesis (McKone et al., 2006; Gauthier & Bukach, 2007), researchers are searching for novel ways to test the validity of the hypotheses. I will first review the literature suggesting that faces are processed differently than other objects and then outline the current standing of the domain-specific versus expertise debate.

One of the major differences noted between the processing of faces and other objects is the effect of inversion. When participants are asked to recognize previously studied faces and objects, accuracy is higher and reaction time is faster for faces than for objects (Yin, 1969; Valentine, 1988). However, when the previously studied faces and objects are presented upside-down during the test phase, participants show much lower accuracy and much longer response times for inverted faces than for inverted objects (Yin, 1969; Kanwisher, Tong, & Nakayama, 1998; Valentine, 1988).
Yin proposed that this difference in inversion effects between faces and objects might arise because humans use a special mechanism for processing faces that is strongly disrupted by inversion; the mechanisms for processing other objects are not so affected by inversion, explaining why participants were worse at recognizing inverted faces than inverted objects. The face inversion effect has been found in numerous studies using many different procedures and materials (Valentine, 1988), suggesting that humans have some different or special mechanism for processing faces compared with other objects.

Another difference that has been found between face processing and object processing is holistic processing, which means that an object is processed as a whole rather than piece by piece. Faces are processed holistically, meaning that people have trouble attending to and processing just one portion of a face while ignoring the rest; this phenomenon is not as robust for other objects as it is for faces (Tanaka & Farah, 1993; Davidoff & Donnelly, 1990; Young et al., 1987; Yovel et al., 2005).

In the early 1990s, when imaging became more widely used in research, researchers began to try to locate the specific neural substrates that are responsible for, or that show a preference for, face processing. Kanwisher et al. (1997) found an area on the fusiform gyrus that showed significant activation for faces and named it the fusiform face area, or FFA. This area showed selective activation for upright faces but not for other objects, inverted faces, or scrambled faces. Evidence from prosopagnosic patients, who cannot recognize faces because of damage to the FFA and surrounding areas, supports these neuroimaging and behavioral data (Farah, Wilson, Drain, & Tanaka, 1995). Together, fMRI and behavioral studies show that faces may be processed in a different way than most other objects (Davidoff & Donnelly, 1990; Farah et al., 1995; Kanwisher et al., 1997; Kanwisher et al., 1998; Tanaka & Farah, 1993; Tong et al., 2000; Yin, 1969; Young et al., 1987; Yovel et al., 2005).

The two prominent hypotheses to explain why human brains might process faces differently than most other objects are the expertise hypothesis and the domain-specific hypothesis. The expertise hypothesis states that faces are processed in a different way because humans are face "experts": humans are exposed to faces many times a day, and much of human life depends on recognizing and interpreting them. The domain-specific hypothesis postulates that a dedicated neural substrate processes human faces in a different area of the brain, and in a different manner, than other objects. Proponents of the domain-specific hypothesis claim that if the expertise hypothesis were true, experts in some category would show similar processing effects (FFA activation, holistic processing, inversion effects, etc.) for their objects of expertise and for faces. Kanwisher (2000) and McKone et al. (2006) both claim that no neuroimaging or behavioral effects have been found that demonstrate such an expertise effect. A main critique of studies supporting the domain-specific hypothesis is that they fail to fully differentiate between the basic and subordinate levels of categorization (see Gauthier et al., 1997, for a review).
Novices exhibit a basic level of categorization (i.e., to a novice, a bird is a bird), whereas experts exhibit subordinate levels of categorization (i.e., to an expert, a bird is an eagle or a sparrow). Therefore, simply having hours, days, or even years of exposure to a certain category does not make someone an expert; in addition to repeated exposure, a person needs to exhibit subordinate-level processing as quickly as basic-level processing in that category in order to be classified as an expert (Gauthier et al., 2000b; Gauthier et al., 1998; Tanaka & Taylor, 1991). According to this criterion, humans process faces differently than other objects because humans are face experts, but with repeated exposure and the acquired ability to categorize at the subordinate level, humans can become experts with other objects as well (Gauthier & Tarr, 1997; Gauthier et al., 2000b; Tanaka & Taylor, 1991); similar processing effects would then be acquired for faces and for objects of expertise.

In accordance with the expertise hypothesis, Gauthier et al. (2000a) found that bird and car experts show significant activation in the FFA and other known face areas when looking at their objects of expertise. Also, when subjects were trained to become experts with lab-created novel objects called Greebles (Gauthier & Tarr, 1997; Gauthier et al., 1998), they showed a significant increase in FFA activation relative to novices (Gauthier, Tarr, Anderson, Skudlarski, & Gore, 1999). Behaviorally, many processing effects of expertise have been found that are similar to processing effects for faces (Diamond & Carey, 1986; Gauthier & Tarr, 1997; Gauthier et al., 1999; Gauthier et al., 1998). Diamond and Carey were among the first to demonstrate these effects; they found that dog experts (judges in dog shows) showed significant inversion effects compared with novices when shown inverted pictures of dogs. Similarly, when Gauthier et al. (1998) and Gauthier and Tarr (1997) trained people to become Greeble experts, the experts subsequently showed inversion effects when shown Greebles. The Greeble experts also showed signs of holistic processing for the Greebles (Gauthier et al., 1998; Gauthier & Tarr, 1997).

Because of the lack of conclusive evidence for either hypothesis (McKone, Kanwisher, & Duchaine, 2006; Gauthier & Bukach, 2007), researchers are looking for ways beyond traditional behavioral and fMRI studies to investigate why faces are special and to examine the overlap between face processing and other object processing, specifically the processing of objects of expertise. The current study examined the functional overlap between face processing and scene processing by using spatial frequency adaptation. The aim of this study is to demonstrate that adaptation to one category of stimuli, such as faces, should not affect perception of another category, such as scenes, because the two categories have been shown to rely on dissociated processing systems. If we can show this to be true, subsequent research can be done using faces and objects of expertise, such as cars. In such future studies, any cross-adaptation effects would suggest overlap between face and car processing, which would provide support for the expertise hypothesis.

Adaptation can be used whenever a category of visual information has distinct components that are represented separately in the visual system. Many low-level adaptation effects have been shown, such as the red-green color aftereffect, the tilt aftereffect, and most other aftereffects in which adapting to one stimulus repels or biases subsequent perception away from the adapting stimulus.
Adaptation effects like this can occur when there are neurons in the visual system that respond to the same category of stimuli but respond to different aspects of that category. For example, the fairly well-known red-green phenomenon involves a subject looking at a red screen for a certain amount of time and then looking at a white screen and seeing a green tint. This happens because neurons that respond to green are closely linked to neurons that respond to red in the visual system, and fatiguing the red-responsive neurons by looking at the red screen results in a perception of green (Thompson & Burr, 2009). Higher-level visual adaptation has been demonstrated for properties such as face identity or category judgment (Kovacs et al., 2006; Sigala & Rainer, 2006), implying that adaptation can be used to study the nature of the higher-level visual system. Adaptation studies using faces have already been done for direction of gaze (Calder et al., 2007), gender, race, emotion (Webster, Kaping, Mizokami, & Duhamel, 2004), and attractiveness (Rhodes et al., 2003). While these studies provide important information about the visual system, such traditional adaptation studies cannot be used to compare face processing with other object processing, because faces and other objects have very different shapes and characteristics; no studies like those mentioned above can therefore be done for faces and cars, for instance. For this reason, other facets of the visual information that comprises both faces and other objects need to be used in adaptation studies.

One such facet that could be used in higher-level adaptation studies is the separation of high and low spatial frequencies, which is preserved throughout the visual system up to the FFA. High spatial frequencies look like outlines and contain fine-scale information useful for judging details of the face such as age and wrinkles; low spatial frequencies look like blurred images and contain the coarse information that makes up the general shape of the face [Figure 3] (Schyns & Oliva, 1999; Gauthier, Curby, Skudlarski, & Epstein, 2005). Spatial frequency studies have explored how these different spatial frequencies are used in visual processing and have shown that different ranges of spatial frequencies are important for the visual processing of faces during different categorization judgments (e.g., gender, expression identification). Schyns and Oliva (1999) examined which range of spatial frequencies is used for gender discriminations versus expression discriminations, and they also studied how spatial frequencies are used in scene processing (Schyns & Oliva, 1994); their research shows that spatial frequency, a low-level property, can be used to study high-level categorical effects for faces and scenes.

The current experiment uses spatial frequency adaptation to examine whether adaptation effects can be obtained between different objects within a category but not between objects from different categories. In particular, we used faces and scenes, two types of stimuli that recruit distinct visual areas (the FFA for faces and the parahippocampal place area, or PPA, for scenes; Epstein, Harris, Stanley, & Kanwisher, 1999). Gauthier, Curby, Skudlarski, and Epstein (2005) showed that, in car experts, faces and cars in high versus low spatial frequency ranges independently activate neurons in the FFA.
Because of the separation of high and low spatial frequencies in the FFA, we predicted that adaptation to faces in a particular spatial frequency range would produce an aftereffect in which subsequent faces are more easily perceived in the other range of spatial frequencies. This experiment explores the effects of spatial frequency adaptation to faces and scenes with the presumption that adaptation to a particular spatial frequency of faces should not affect scene perception, and that adaptation to a particular spatial frequency of scenes should not affect face perception. That is, if a person is adapted to a certain spatial frequency for faces (e.g., low spatial frequency faces), then that person's perception of scenes should not be affected (and vice versa). This is because any fatigue of the neural network that processes faces should not fatigue, or even recruit, the neural network that processes scenes, as the two substrates have been dissociated (Epstein et al., 1999). Thus, spatial frequency adaptation effects may be used to reveal the functional differences between face and scene processing and, eventually, the functional overlap between face processing and the processing of objects of expertise. This research is important not only because it could lead to a way to adjudicate between the expertise and domain-specific hypotheses, but also because it can help illuminate how faces are processed in general and, specifically, how spatial frequencies are used in visual processing.

After this initial research, subsequent research on spatial frequency adaptation to faces and objects of expertise, such as cars, can be done with the prediction that cross-category adaptation effects would be found in car experts. That is, a car expert adapted to a particular spatial frequency for cars should show similar adaptation effects for both cars and faces, while a car novice adapted to the same spatial frequency for cars should not show any cross-adaptation effects for faces. If the data support these predictions, the expertise hypothesis will also be supported.

In this experiment, the participants were adapted to either faces or scenes in either low or high spatial frequencies. Because no research has been done on spatial frequencies in the PPA, and because this is the first study of its kind, specific predictions are hard to make. I did posit, however, that ideal results would show that perception of one category of stimuli is not influenced by adaptation to the other category. That is, the ideal expected results would show evidence of adaptation effects with no cross-category effects.

Methods

Participants. Fifty students at Vanderbilt University with normal or corrected-to-normal vision were recruited for this experiment; because of data problems (subjects using the wrong keys or incomplete data), 4 participants' data were excluded, leaving 46 participants. Participants were given credit for psychology courses that require experiment participation or were paid for their participation. The participants were divided into 4 groups of 11 or 12, and each group was adapted to a different combination of spatial frequency and stimulus category, resulting in a 2 x 2 between-subjects design. The groups were as follows: Group 1 - adapted to high spatial frequency faces; Group 2 - adapted to low spatial frequency faces; Group 3 - adapted to high spatial frequency scenes; Group 4 - adapted to low spatial frequency scenes.
Stimuli. The stimuli were taken from 80 color photographs of adult male and female faces and 80 photographs of scenes. Forty faces and forty scenes were used to create the adapting stimuli, and the remaining forty faces and forty scenes were used to create the test stimuli. The faces were front views with happy and disgusted expressions (40 of each), taken from the Karolinska face database. The particular faces were chosen using a validation study that tested how effectively the faces in the Karolinska database conveyed the emotion they were meant to represent (Goeleven, De Raedt, Leyman, & Verschuere, 2008). The scenes were black-and-white photographs from two categories: houses and highways. The faces and scenes were converted to grayscale, and the canvases were trimmed to 160 x 160 pixels, using Adobe Photoshop CS2.

Adapting Stimuli. The face adapting stimuli came from 20 happy and 20 disgusted photographs of different individuals. The scene adapting stimuli came from 20 house photographs and 20 highway photographs. The 40 adapting faces and 40 adapting scenes were each filtered into both low and high spatial frequency versions, and both versions were used as adapting stimuli. For example, a picture of a happy face was filtered into both low and high spatial frequencies, resulting in 2 different stimuli. Low spatial frequencies were defined as <8 cycles per image and high spatial frequencies as >32 cycles per image, consistent with previous studies using spatial frequency filtering (Gauthier, Curby, Skudlarski, & Epstein, 2005; Schyns & Oliva, 1999). After filtering, there were 80 face stimuli (40 high spatial frequency and 40 low spatial frequency) and 80 scene stimuli (40 high spatial frequency and 40 low spatial frequency). Each participant was adapted only to the stimuli denoted by his or her group assignment, and each participant saw all of the adapting stimuli for that group, so that each participant saw 40 adapting stimuli during the adaptation phase.

Test Stimuli. The face test stimuli came from the happy and disgusted photographs of 20 individuals. The happy and disgusted faces were filtered into both high and low spatial frequencies (using the same criteria as above), and each person's happy and disgusted faces were combined into hybrids pairing one expression at one spatial frequency with the opposite expression at the opposite spatial frequency. For example, there was a photograph of Female 1 with a disgusted face and another photograph of Female 1 with a happy face. Both the happy and the disgusted face were filtered into high and low spatial frequencies, and then the low spatial frequency version of one expression was combined with the high spatial frequency version of the other expression to form a hybrid. This resulted in 40 face hybrids to be used during the testing phase. Similarly, the scene hybrids were composed of a house scene at either high or low spatial frequency paired with a highway scene at the opposite spatial frequency. Each test scene picture was filtered into both high and low spatial frequencies (using the same criteria as above) so that each test scene appeared in 2 different hybrids: one in which it contributed the high spatial frequency information and one in which it contributed the low spatial frequency information. This resulted in 40 scene hybrids that were used during the testing phase.
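For concreteness, the sketch below illustrates one way the spatial frequency filtering and hybrid construction described above could be implemented. It is not the stimulus-generation code used in the study; the ideal (hard-cutoff) filter, the function names, and the file names are assumptions for illustration only.

    import numpy as np

    def sf_filter(img, low_cutoff=8, high_cutoff=32):
        """Split a square grayscale image (2D float array) into low- and high-
        spatial-frequency versions. Cutoffs are in cycles per image; a hard
        (ideal) cutoff is assumed here, though other filters could be used."""
        n = img.shape[0]
        spectrum = np.fft.fftshift(np.fft.fft2(img))
        fy, fx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
        radius = np.sqrt(fx ** 2 + fy ** 2)  # cycles per image for each coefficient
        low = np.fft.ifft2(np.fft.ifftshift(spectrum * (radius < low_cutoff))).real
        high = np.fft.ifft2(np.fft.ifftshift(spectrum * (radius > high_cutoff))).real
        return low, high

    def make_hybrid(low_source, high_source):
        """Combine the low-SF content of one image with the high-SF content of another."""
        low, _ = sf_filter(low_source)
        _, high = sf_filter(high_source)
        return low + high

    # Hypothetical usage: pair Female 1's happy face (low SF) with her disgusted
    # face (high SF) to create one of the two hybrids for that individual.
    # happy, disgust = load_gray("f01_happy.png"), load_gray("f01_disgust.png")
    # hybrid = make_hybrid(happy, disgust)

Under this scheme, each of the 20 test individuals would yield two hybrids (low spatial frequency happy with high spatial frequency disgusted, and vice versa), giving the 40 face hybrids described above.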
None of the photographs used to make the adapting stimuli were used to make the test stimuli, and vice versa.

Procedures. Participants were asked to judge which category of stimulus they perceived when shown the hybrids. For faces, the responses were either "positive" or "negative", and for scenes, the responses were either "house" or "highway". The study included a pre-adaptation and a post-adaptation test phase, in order to demonstrate the effects of adaptation to different spatial frequencies for scenes and faces, as well as an originals phase and a practice phase to give the subjects some exposure to the stimuli.

Originals phase. In the originals phase, the participants were shown 10 examples of each type of stimulus to familiarize them with the kinds of pictures they would be looking at. They were shown positive faces (happy), negative faces (disgusted), houses, and highways [Figure 1]. The pictures shown in this part of the experiment had not been filtered into different spatial frequencies. The participants were not asked to make judgments during this phase, only to pay attention, because they would be asked to categorize the faces and scenes in the same way later in the experiment.

Practice phase. In the practice phase, the participants were given a chance to practice classifying the faces and scenes as positive, negative, house, or highway. During this phase, the participants viewed 2 examples of the adapting stimuli from each category and were asked to classify the pictures according to what they had seen in the originals phase. The purpose of this phase was to familiarize the participants with the stimuli, to make sure that they could differentiate between the different types of stimuli, and to give them practice switching keys between the face and scene responses.

Pre-adaptation phase. In the pre-adaptation phase, participants were shown the face and scene hybrids [Figure 2] and were asked to judge whether they saw a positive or negative emotion for faces, or a house or a highway for scenes; this was done to obtain a pre-adaptation baseline for each participant. All participants were shown all of the test stimuli regardless of which group they were in. A fixation point was shown on the screen at the beginning of each trial, followed by a test hybrid, either a face hybrid or a scene hybrid. The test hybrid was shown for 50 ms, and then participants were given an unlimited amount of time to respond. Participants were shown all 40 face hybrids and all 40 scene hybrids. After each test hybrid, the participants pressed keys to indicate what they saw.

Adaptation phase. The adaptation phase differed across groups in that each participant was adapted to the stimuli assigned to his or her group. The participants were shown all adapting stimuli for their group [Figure 3]. The adapting stimuli were presented in a random order for 2 s each, with a 200 ms interval between stimuli. Participants were not asked to make judgments or press any keys during this time, but were asked to pay careful attention to the stimuli.
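As an illustration of the adaptation phase timing just described (2 s per adaptor with a 200 ms blank interval, in random order), a minimal sketch using the PsychoPy library is given below. The use of PsychoPy, the window settings, the image size, and the file naming are assumptions for illustration; this is not the presentation script used in the study.

    import random
    from psychopy import visual, core

    # Assumed window parameters (not specified in the thesis).
    win = visual.Window(size=(800, 600), color="grey", units="pix")

    def run_adaptation_phase(adaptor_paths, duration=2.0, isi=0.2):
        """Present each adapting image once, in random order, for `duration`
        seconds, with an `isi`-second blank screen between images."""
        order = random.sample(adaptor_paths, len(adaptor_paths))
        for path in order:
            stim = visual.ImageStim(win, image=path, size=(160, 160))
            stim.draw()
            win.flip()            # image on
            core.wait(duration)
            win.flip()            # blank screen between stimuli
            core.wait(isi)

    # Hypothetical usage for a participant in Group 2 (low spatial frequency faces):
    # run_adaptation_phase(["lsf_face_%02d.png" % i for i in range(1, 41)])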
Test and re-adaptation phase. The test and re-adaptation phase was similar to the pre-adaptation phase but also included 6 seconds of re-adaptation before each trial. Participants were asked to initiate each trial by pressing a key. At the beginning of each trial, the participants were shown 4 re-adapting stimuli (presented for the same duration and with the same inter-stimulus interval as during the adaptation phase); the re-adapting stimuli were chosen randomly according to the participant's group. After the re-adapting stimuli were presented, the participants saw a fixation point followed by a test hybrid. The participants were then asked to judge what they saw, using the same keys as in the pre-adaptation phase. Each trial contained one cycle of re-adapting stimuli and a test stimulus. The test stimuli were chosen randomly from the test hybrids, and each was presented for 50 ms. Participants were shown all 40 face hybrids and all 40 scene hybrids during the test and re-adaptation phase.

Results

Data Analysis. Each participant's post-adaptation trials were compared to his or her pre-adaptation trials to obtain the amount of change due to adaptation. Each participant's data were analyzed in terms of the change in the percentage of low spatial frequency responses from pre-test to post-test, both for the category that the participant was adapted to and for the category that the participant was not adapted to. The ideal pre-test values would have been for participants to see 50% high spatial frequency and 50% low spatial frequency for both faces and scenes, and to see 20 positive faces, 20 negative faces, 20 houses, and 20 highways. These values would be ideal because they would help ensure that any changes from pre-test to post-test were actual effects of adaptation and not just regression to the mean.

Results. The average pre-test values are close to the 50-50% ideal range: 57.826% for low spatial frequency faces (values ranged from 72.5% to 40%), 42.174% for high spatial frequency faces (values ranged from 60% to 27.5%), 48.533% for low spatial frequency scenes (values ranged from 67.5% to 30%), and 51.467% for high spatial frequency scenes (values ranged from 65% to 22.5%). The average pre-test values for the number of positive faces seen and the number of houses seen are 24 and 26, respectively, which is also close to the ideal values. The average changes in low spatial frequency responses from pre-test to post-test are graphed in Figures 4 and 5 and range from -5.25% to 7.5%.

The data were analyzed using two ANOVAs, one for each category of hybrid, with adapting spatial frequency and category of adaptor as between-subjects factors. For face hybrids, the main effect of adapting spatial frequency was significant, F(1,42) = 6.906, p = .012. The main effect of category of adaptor was not significant, F(1,42) = .002, p = .965. The interaction of adapting spatial frequency x category of adaptor was not significant, F(1,42) = .083, p = .775. When adapted to low spatial frequency faces or low spatial frequency scenes, participants were more likely to see high spatial frequency face hybrids (-5.255 and -4.258, respectively), and when adapted to high spatial frequency faces or high spatial frequency scenes, participants were more likely to see low spatial frequency face hybrids (3.542 and 2.8, respectively). For scene hybrids, the main effect of adapting spatial frequency was not significant, F(1,42) = .666, p = .419. The main effect of category of adaptor was not significant, F(1,42) = .016, p = .8998. The interaction of adapting spatial frequency x category of adaptor was significant, F(1,42) = 3.914, p = .054.
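For reference, a minimal sketch of this analysis (the per-participant change scores and the 2 x 2 between-subjects ANOVA) is given below. It assumes a long-format data file and column names that are not specified in the thesis; it is an illustration, not the analysis script that was actually used.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical long-format data: one row per participant per test phase, with
    # the percentage of low-SF responses to face hybrids.  Assumed columns:
    # subject, adapt_sf ('low'/'high'), adapt_category ('face'/'scene'),
    # phase ('pre'/'post'), pct_low_sf.
    df = pd.read_csv("face_hybrid_responses.csv")

    # Change in the percentage of low-SF responses from pre-test to post-test.
    wide = df.pivot_table(index=["subject", "adapt_sf", "adapt_category"],
                          columns="phase", values="pct_low_sf").reset_index()
    wide["change_low_sf"] = wide["post"] - wide["pre"]

    # 2 x 2 between-subjects ANOVA: adapting spatial frequency x adaptor category.
    model = ols("change_low_sf ~ C(adapt_sf) * C(adapt_category)", data=wide).fit()
    print(sm.stats.anova_lm(model, typ=2))

The same steps would be repeated with the scene hybrid responses for the second ANOVA.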
Descriptively, when adapted to high spatial frequency scenes or low spatial frequency faces, participants were significantly more likely to see low spatial frequency scenes (5.264 and 7.5, respectively), and they were slightly more likely to see low spatial frequency scenes when adapted to low spatial frequency scenes (1.492). When adapted to high spatial frequency faces, participants were more likely to see high spatial frequency scenes (-1.567).

Discussion

I hypothesized that the ideal results would show adaptation effects within each category with no cross-adaptation effects. That is, I predicted that adaptation to a particular spatial frequency of scenes would cause participants to see more of the opposite spatial frequency in the scene hybrids but would not change face hybrid perception. The results, however, suggest a cross-over effect for face hybrids: the data show the predicted pattern for face adaptors, but this pattern is also found for scene adaptors, which was not expected. For scene hybrids, the only category of adaptor that showed the predicted pattern was high spatial frequency scenes, and, to some extent, high spatial frequency faces, because the change they produced was very small. These results suggest that some effects of adaptation to spatial frequencies between faces and scenes may not be limited to within-category effects. However, the very different patterns for face and scene hybrids support the idea that the adaptation is sensitive to the type of stimulus.

Although the data do not show the predicted pattern of distinct adaptation effects for faces and scenes, further research is needed before my hypotheses should be discarded. One possible problem with this experiment is that the face stimuli included a lot of "background information" [Figures 1-3], such as ears, hair, and even the photograph background, none of which are part of the core facial features that activate the FFA. These background features could have been encoded as "scene-like" features, leading to more general adaptation effects. One way to fix this is to crop the faces so that only the distinguishing facial features are shown [Figure 6]. Another possible problem is that the task itself may not have sufficiently engaged the FFA and might instead have activated other areas where the role of spatial frequencies has not been explored, making the outcomes unpredictable. The task, an emotion discrimination task, does not typically activate the FFA the way a gender or identity discrimination task would (Winston, Henson, Fine-Goulden, & Dolan, 2004). Subsequent research should address these problems using a different task; for instance, a gender discrimination task similar to the one in Schyns and Oliva (1999) could be run with the same stimuli and the same experimental set-up as this experiment.

The current research has important implications. Because no studies have explored the effects of spatial frequency adaptation across categories, the results are still informative and show researchers how to improve upon and change the experiment in order to tease apart the relationship between visual processing of faces, scenes, and other objects. This research also has implications for spatial frequency adaptation studies that will test the relationship between face processing and objects of expertise.
If a clear distinction can be made between adaptation effects for faces and for other (non-expertise) objects, then any cross-adaptation effects found between faces and objects of expertise can be attributed to overlaps in processing between faces and objects of expertise.

Figure 1. An example of an original positive face, negative face, house, and highway.

Figure 2. Examples of a face hybrid and a scene hybrid.

Figure 3. An example of a high spatial frequency face, a low spatial frequency face, a high spatial frequency scene, and a low spatial frequency scene.

Figure 4. The results graph for face hybrids, showing the percentage of change in low spatial frequency responses for each category of adaptor. The error bars show a 95% confidence interval of the interaction.

Figure 5. The results graph for scene hybrids, showing the percentage of change in low spatial frequency responses for each category of adaptor. The error bars show a 95% confidence interval of the interaction.

Figure 6. An original face with an oval crop.

References

Calder, A.J., Beaver, J.D., Winston, J.S., Dolan, R.J., Jenkins, R., Eger, E., & Henson, R. (2007). Separate coding of different gaze directions in superior temporal and inferior parietal cortex. Current Biology, 17, 20-25.

Davidoff, J., & Donnelly, N. (1990). Object superiority: A comparison of complete and part probes. Acta Psychologica, 73, 225-243.

Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107-117.

Epstein, R., Harris, A., Stanley, D., & Kanwisher, N. (1999). The parahippocampal place area: Recognition, navigation, or encoding? Neuron, 23, 115-125.

Farah, M.J., Wilson, K.D., Drain, H.M., & Tanaka, J.R. (1995). The inverted face inversion effect in prosopagnosia: Evidence for mandatory face-specific perceptual mechanisms. Vision Research, 35, 2089-2093.

Gauthier, I., Anderson, A.W., Tarr, M.J., Skudlarski, P., & Gore, J.C. (1997). Levels of categorization in visual recognition studied using functional magnetic resonance imaging. Current Biology, 7, 645-651.

Gauthier, I., & Bukach, C. (2007). Should we reject the expertise hypothesis? Cognition, 103, 322-330.

Gauthier, I., Curby, K.M., Skudlarski, P., & Epstein, R.A. (2005). Individual differences in FFA activity suggest independent processing at different spatial scales. Cognitive, Affective, & Behavioral Neuroscience, 5, 222-234.

Gauthier, I., Skudlarski, P., Gore, J.C., & Anderson, A.W. (2000a). Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3, 191-197.

Gauthier, I., & Tarr, M.J. (1997). Becoming a 'Greeble' expert: Exploring the face recognition mechanisms. Vision Research, 37, 1673-1682.

Gauthier, I., Tarr, M.J., Anderson, A.W., Skudlarski, P., & Gore, J.C. (1999). Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects. Nature Neuroscience, 2, 568-573.

Gauthier, I., Tarr, M.J., Moylan, J., Anderson, A.W., Skudlarski, P., & Gore, J.C. (2000b). Does visual subordinate-level categorization engage the functionally defined fusiform face area? Cognitive Neuropsychology, 17, 143-163.

Gauthier, I., Williams, P., Tarr, M.J., & Tanaka, J.W. (1998). Training 'Greeble' experts: A framework for studying expert object recognition processes. Vision Research, 38, 2401-2428.

Goeleven, E., De Raedt, R., Leyman, L., & Verschuere, B. (2008). The Karolinska directed emotional faces: A validation study. Cognition and Emotion, 22, 1094-1118.
Kanwisher, N. (2000). Domain specificity in face perception. Nature Neuroscience, 3, 759-763.

Kanwisher, N., McDermott, J., & Chun, M.M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience, 17, 4302-4311.

Kanwisher, N., Tong, F., & Nakayama, K. (1998). The effect of face inversion on the human fusiform face area. Cognition, 68, 1-11.

Kovacs, G., Zimmer, M., Banko, E., Harza, I., Antal, A., & Vidnyanszky, Z. (2006). Electrophysiological correlates of visual adaptation to faces and body parts in humans. Cerebral Cortex, 16, 742-753.

McKone, E., Kanwisher, N., & Duchaine, B.C. (2006). Can generic expertise explain special processing for faces? Trends in Cognitive Sciences, 11, 8-15.

Rhodes, G., Jeffery, L., Watson, T.L., Clifford, C.W.G., & Nakayama, K. (2003). Fitting the mind to the world: Face adaptation and attractiveness aftereffects. Psychological Science, 14, 558-566.

Schyns, P.G., & Oliva, A. (1999). Dr. Angry and Mr. Smile: When categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition, 69, 243-265.

Sigala, R., & Rainer, G. (2006). Visual neuroscience: Face-encoding mechanisms revealed by adaptation. Current Biology, 17, 20-22.

Tanaka, J.W., & Farah, M.J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology, 46, 225-245.

Tanaka, J.W., & Taylor, M. (1991). Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23, 457-482.

Thompson, P., & Burr, D. (2009). Visual aftereffects. Current Biology, 19, R11-R14.

Tong, F., Nakayama, K., Moscovitch, M., Weinrib, O., & Kanwisher, N. (2000). Response properties of the human fusiform face area. Cognitive Neuropsychology, 17, 257-279.

Valentine, T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79, 471-491.

Webster, M.A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004). Adaptation to natural face categories. Nature, 428, 557-561.

Winston, J.S., Henson, R.A., Fine-Goulden, M.R., & Dolan, R.J. (2004). fMRI-adaptation reveals dissociable neural representations of identity and expression in face perception. Journal of Neurophysiology, 92, 1830-1839.

Yin, R.K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141-145.

Young, A.W., Hellawell, D., & Hay, D.C. (1987). Configurational information in face perception. Perception, 16, 747-759.

Yovel, G., Paller, K.A., & Levy, J. (2005). A whole face is more than the sum of its halves: Interactive processing in face perception. Visual Cognition, 12, 337-352.