OBJECT RECOGNITION AND COMPARISON IN HUMANS: SOURCE ANALYSIS OF VISUAL EVOKED ACTIVITY

Elena V. Mnatsakanian, Institute of Higher Nervous Activity and Neurophysiology, RAS, Russia, mnazak@aha.ru
Ina M. Tarkka, Brain Research and Rehabilitation Center Neuron, Finland, ina.tarkka@neuron.fi

The neurophysiological mechanisms of visual recognition are of great theoretical and practical interest, and face recognition also has high social relevance. Cerebral responses elicited by faces differ from those elicited by other kinds of visual objects [1, 2]. The purpose of this study was to locate cerebral sources that explain the evoked electrical activity during the recognition and comparison of pairs of familiar faces and abstract patterns. The Face task in our experiments required categorical comparison of faces and presumably elicited processes related to both face familiarity and semantic priming. The Pattern task was an alternative task in which the pictures were physically identical to those in the Face task, but the targets were non-face objects. Spatio-temporal electrical dipole source models were created to explain the scalp-recorded event-related potentials (ERPs).

Eighteen healthy volunteers participated. Each trial began with one of two cues (S1), followed by two consecutive pictures (S2 and S3). Each picture was a photograph of a familiar face (a colleague) with an abstract dot pattern superimposed on it. One cue directed attention to comparing the faces (Face task), the other to comparing the patterns (Pattern task). Target pairs were the same or different with equal probability (for details, see [7]). Data acquisition and preprocessing were performed with a system by Electrical Geodesics, Inc. (Eugene, Oregon, USA). Continuous EEG was recorded with a 128-electrode net at a sampling rate of 250 Hz, with filters of 0.01-100 Hz. The EEG was segmented and averaged off-line according to the conditions: Same Face (SF), Different Face (DF), Same Pattern (SP), and Different Pattern (DP). The averaged ERPs were digitally filtered off-line at 0.3-15 Hz.

Spatio-temporal multiple dipole source models [3] were created in BESA2000 (Megis Software GmbH) for the window of 80-600 ms from S3 onset. In a spatio-temporal model, a dipolar source has a stationary location and orientation, while its moment (i.e. its strength) changes with time. The source waveform (SWF) describes the temporal changes of the dipole moment. The residual variance (RV) describes the proportion of the recorded data that is not explained by the model. An ellipsoidal head model with four shells was used. The dipole model was first developed for the grand-averaged ERPs, using an 88 mm head radius. Details of the modeling can be found elsewhere [8]. The model was then applied to the data of each individual, and the SWFs of the individual models were used for further analysis. The SWFs for pairs of conditions were compared using the non-parametric Wilcoxon matched-pairs test.

The major components of the waveforms appeared around 120-150 ms, 200-250 ms, 270-300 ms, 350-400 ms, and 500-600 ms [7]. The models were first developed for the ERPs of the Face task (base model) and then applied to the Pattern task [8]. The final models for each condition consisted of eight dipoles and differed only in their SWFs (Fig. 1). Sources were located in the occipital cortex, bilaterally in medial temporal and inferotemporal (IT) regions, and in frontal areas close to the midline.
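For readers less familiar with the RV measure reported below, a minimal sketch (in Python) of how residual variance can be computed is given here, assuming the recorded ERPs and the model-predicted scalp data are available as arrays; the variable names are illustrative, and the forward modeling itself (performed in BESA2000 in this study) is not reproduced.

    import numpy as np

    def residual_variance(recorded, modelled):
        # Proportion of the recorded ERP variance not explained by the dipole model.
        # recorded, modelled: arrays of shape (n_channels, n_samples) covering
        # the analysis window (here 80-600 ms after S3 onset).
        residual = recorded - modelled
        return np.sum(residual ** 2) / np.sum(recorded ** 2)

    # Hypothetical use with data exported from the source analysis:
    # rv = residual_variance(erp_data, model_prediction)  # about 0.03 for the grand average

With this definition, an RV of about 0.03 corresponds to the roughly 3% of unexplained variance reported below for the grand-average models.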
For all conditions, the RVs of the grand-average data in the analysis window were about 3%; for the individual models, the RVs ranged from 11% to 13%. Our model for the Face task was compatible with well-known models of face processing based on psychological [4] or fMRI and PET [5] data. Similar dipole locations and orientations explained the data for the Pattern task. The two models dissociated the SWFs in the occipital cortex (dipoles 5 and 6, explaining components at 120 and 180 ms) and in the IT cortex (dipoles 7 and 8, explaining components at 150-160 ms). The IT area is considered to be the source of the face-specific ERP components [1, 2]. The general similarity of the SWFs of dipoles 5-8 at early latencies is explained by the fact that our stimuli always contained both a face and a pattern, so the physical feature analysis was similar.

Fig. 1. Left: dipole locations plotted on slices of the Talairach atlas. The sagittal plane (x = 8) shows the location of dipole 2 (large circle); dipoles 1, 5, and 6 (small circles) are also visible because of their proximity to the midline. The transverse plane (z = 4) shows the locations of dipoles 4-6 (large circles); dipoles 2 and 3 (small circles), located in neighboring slices, are also visible. The coronal plane shows the locations of dipoles 7 (y = -32) and 8 (y = -35). Right: grand-average SWFs for paired conditions; numbers correspond to the dipole numbers (for both tasks: D, different pairs; S, same pairs). Statistically significant (p < 0.05) group differences are marked in gray.

The activity after 250 ms was explained mainly by dipoles 1-4. Task specificity was clearly seen in dipoles 3 and 4. Larger moments of these sources in the Face task may be related to networks implicated by metabolic studies as sources of personal identity-related brain activity [5, 6]. The activity of dipolar sources 1 and 2 in the anterior brain areas showed both match-mismatch and task-specific differences in our experiments. Dipole 1 was stronger in the DF than in the SF condition. Dipole 2 had a prominent peak at 300 ms in all conditions and an additional peak at 400 ms in the DF condition. The differences observed at 300-500 ms in the scalp-recorded ERPs [7] thus appear to be explained by multiple sources, mainly in anterior regions [8].
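The condition differences summarized above (and marked in gray in Fig. 1) were assessed with the Wilcoxon matched-pairs test on the SWFs of the individual models, as described in the Methods. The following minimal sketch illustrates such a paired, sample-by-sample comparison for one dipole; the arrays and the uncorrected per-sample testing are illustrative assumptions, not the authors' actual analysis pipeline.

    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical example: individual SWFs for one dipole in two conditions,
    # stored as (n_subjects, n_samples) arrays. At 250 Hz, 130 samples cover
    # roughly the 80-600 ms analysis window.
    rng = np.random.default_rng(0)
    swf_df = rng.normal(size=(18, 130))  # Different Face condition (placeholder values)
    swf_sf = rng.normal(size=(18, 130))  # Same Face condition (placeholder values)

    # Paired non-parametric comparison of the two conditions at each time sample.
    p_values = np.array([wilcoxon(swf_df[:, t], swf_sf[:, t]).pvalue
                         for t in range(swf_df.shape[1])])
    significant = p_values < 0.05  # samples where the conditions differ at p < 0.05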
Our models explain the grand-averaged data (97%) and the individual data (about 90%) recorded during visual recognition and target comparison. The sources in our models estimate the activity common to all conditions, and we also identify differential activity related to match-mismatch item processing (priming effects). The models also suggest the brain areas associated with task specificity.

References:
[1] Allison T, Ginter H, McCarthy G, Nobre AC, Puce A, Luby M, Spencer DD. Face recognition in human extrastriate cortex. J. Neurophysiol. 1994, 71: 821-825.
[2] Bentin S, Allison T, Puce A, McCarthy G. Electrophysiological studies of face perception in humans. J. Cogn. Neurosci. 1996, 8: 551-565.
[3] Berg P, Scherg M. A fast method for forward computation of multiple-shell spherical head models. Electroencephalogr. Clin. Neurophysiol. 1994, 90: 58-64.
[4] Bruce V, Young A. Understanding face recognition. Br. J. Psychol. 1986, 77: 305-327.
[5] Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends Cogn. Sci. 2000, 4: 223-232.
[6] Leveroni CL, Seidenberg M, Mayer AR, Mead LA, Binder JR, Rao SM. Neural systems underlying the recognition of familiar and newly learned faces. J. Neurosci. 2000, 20: 878-886.
[7] Mnatsakanian EV, Tarkka IM. Matching of familiar faces and abstract patterns: behavioral and high-resolution ERP study. Int. J. Psychophysiol. 2003, 47: 217-227.
[8] Mnatsakanian EV, Tarkka IM. Familiar face recognition and comparison: source analysis of scalp-recorded ERPs. Clin. Neurophysiol. 2004, 115: 880-886.