Name that tune. Song title? Performer(s)?

Psycholinguistics: “The Knower”
R.G. Bias | rbias@ischool.utexas.edu
1/31/10

Objectives
After this class you will be able to (it is my hope!):
Describe some reasons why speech perception is hard.
Describe how we still do it, seemingly easily.
Understand the Stroop Effect.
Understand much about how we read.
Know what “phonological recoding” is.
Demonstrate an even DEEPER appreciation for how we can use behavioral data to make inferences about brain function.

Psycholinguistics
The psychology of language: what goes on from the time I get an idea until you have the same idea,
– whether I speak my idea (speech production, auditory science, speech perception)
– or write my idea (motor movements, visual system, reading).

Last week . . .
We talked of selective attention.
Get this – the Stroop Effect. I need a volunteer.
http://www.david.tam.name/SelfTests/StroopEffects.html

Speech perception . . .
. . . is EASY!! I need three volunteers to each read just a few words out loud.

Did anyone NOT hear . . .
DOG
FISHING ROD
LADDER
So, speech perception must be easy, eh?

Yeah, consider this:
To understand ONE word, you need to:
– Transform sound waves into neural signals (remember – tympanic membrane, middle ear bones [auditory ossicles], cochlea, basilar membrane, hair cells).
– Apply “pattern recognition routines” to figure out what letter a particular pattern of sounds represents.
– Hold early letters in working memory to put together one word’s worth of letters.
– Look up the meaning of that word in your mental lexicon.

Furthermore . . .
. . . adults speaking English produce about 15 speech sounds (letters, but when spoken, “phonemes”) per second, thus about 900 per minute.
You gotta pick out those sounds (from the speaker) from other ambient noise.
Plus, there are NOT blank spots between words in the acoustical stimulus. (Notice how hard it is to parse individual words in a language you don’t speak?!)
PLUS . . .

Variability in Phoneme Pronunciation!!
Differences in pitch and tone of different speakers. (Men vs. women. Adults vs. kids. Tall people vs. short people. Native English speakers vs. not.)
Even an individual doesn’t always pronounce a phoneme the same way.
Plus, coarticulation effects – your articulators (lips, tongue, jaw) are in different positions when you START to utter a particular phoneme, depending upon the context and on the next phoneme you’re preparing to pronounce.
And soooo . . .
– The phoneme sounds different from word to word!
– So all those /d/ phonemes you perceived as the same are actually QUITE different, acoustically (physically) – they’re firin’ different patterns of hair cells!
Speech perception is a miracle! How do we do it?

How do we do it?
Word boundaries – we use our knowledge of our language to insert those word boundaries that aren’t really there. (See the sketch after this slide.)
Visual cues.
Context.
We are “active listeners.” More o’ this “top-down” perception – we use our knowledge of language, and the context of the utterance, to perceive “ambiguous stimuli.”
– For example – Warren and Warren (1970) study on “phonemic restoration”: a cough replaced the asterisk below, but people had no problem getting the “right” word.
• “It was found that the *eel was on the axle.”
• “It was found that the *eel was on the shoe.”
• “It was found that the *eel was on the orange.”
– So phonemic restoration is a sort of illusion.
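A quick illustration of that “word boundaries” point. The sketch below is only a toy, in Python: it assumes a tiny invented lexicon (LEXICON) and a made-up, boundary-free letter stream, and it recovers boundaries purely from knowledge of which words exist – roughly the kind of knowledge a listener brings to the unbroken acoustic stream. It is not a model of how people actually do it.

```python
# Toy sketch: recovering word boundaries from a boundary-free stream using only
# "knowledge of the language" (here, a tiny made-up lexicon).
from functools import lru_cache

LEXICON = {"the", "dog", "ate", "bone"}  # hypothetical mini-lexicon for the demo

def segment(stream: str):
    """Return one way to split `stream` into lexicon words, or None if impossible."""
    @lru_cache(maxsize=None)
    def helper(i: int):
        if i == len(stream):
            return []                     # reached the end: nothing left to segment
        for j in range(i + 1, len(stream) + 1):
            word = stream[i:j]
            if word in LEXICON:           # a candidate word the "listener" knows
                rest = helper(j)
                if rest is not None:
                    return [word] + rest
        return None                       # no known word starts here
    return helper(0)

if __name__ == "__main__":
    # Like speech, the stream has no blank spots between words.
    print(segment("thedogatethebone"))    # -> ['the', 'dog', 'ate', 'the', 'bone']
```

With a richer lexicon the same stream can often be split more than one way, which is exactly why the visual cues and context mentioned above matter too.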
So, in speech perception . . .
. . . we receive a less-than-crisp acoustic signal, this non-unique speech stimulus, and IMPOSE meaning on it by . . .
– Building in those word boundaries,
– Utilizing visual cues, when available, and
– Attending to context.

The Psychology of Reading
Except for fairly rare cases of “phonetic symbolism” (onomatopoeia), words have no inherent meaning.
– (And rarer cases of “orthographic symbolism”!!)
So, READING is the interpreting of words – the acts that go on to impose meaning, from within, on external visual stimuli.

Some facts about reading
The eyes of the mature reader move rhythmically across the page (from left to right).
Eye movement consists of fixations, saccades, regressions, and return sweeps.
No information is taken in during saccades (10-25 msec), regressions (same duration), or return sweeps (40 msec).
During a fixation (250 msec) a visual pattern is reflected onto the retina.
Span of perception = the amount of print seen during a single fixation: about 12 letter spaces for good readers, 6 for poor readers.

More facts
Span of recognition: 1.21 words for senior-high readers, 1.33 words for college readers. So, 7 to 8 fixations per line of print.
As content gets tougher, the duration of fixations, not their number, increases.
Regressive movements aren’t systematic; they’re used when attention is faltering.
College readers have 1 regressive movement per 3 or 4 lines of print. Immature readers have 3 or 4 regressions per line.

Iconic Memory
Remember in Week 1 I mentioned a two-stage memory process – STM and LTM. A third stage, Iconic Memory:
The unidentified, “pre-categorical” pattern of lines, curves, and angles; formed in about 100 msec.
Lasts just about 500 msec.
– Echoic Memory lasts about 2 sec.
The icon can hold up to 20 letter spaces.
Pattern recognition routines are applied to the lines and curves.
It takes about 10-20 msec to read each letter out of iconic memory.
The neural signal takes about 30 msec to go from the retina to the visual cortex.

Iconic Memory (cont’d.)
At some point, thanks to pattern recognition routines, letters are read out.
Letters are transformed into abstract phonemic representations.
The abstract phonemes are used to search the mental lexicon.
About 300 msec after the eye has fallen upon the page, the first word is “understood,” i.e., placed in Primary Memory (STM, Working Memory).
Syntactic and semantic rules are applied to gain the meaning of the sentence.

How do you know, Randolph?
Psycholinguists employ a variety of methods to acquire this data about human behavior.
One question: Why do we think readers routinely transform the visual representation into a phonological representation?
– Cognitive economy – all (healthy) new readers come to the task as skilled hearers.
– “I thought you said something about data?”
(One such method, the lexical decision task, is sketched below; the findings follow.)
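Before the findings, a rough picture of how a lexical decision task gathers its data. The sketch below is hypothetical, in Python: the stimuli reuse burd, rolt, and meat from the next slide plus an invented control word, and timing a console prompt is only a stand-in for the controlled presentation and millisecond timing that real experiments (e.g., Rubenstein et al., 1971) rely on.

```python
# Hypothetical sketch of a lexical decision task: show a letter string, record
# whether the participant judges it a word, and how long the decision took.
import random
import time

# Each stimulus: (letter string, is it a real word?, condition label). Illustrative items only.
STIMULI = [
    ("burd", False, "homophonous nonword"),       # sounds like "bird"
    ("rolt", False, "nonhomophonous nonword"),
    ("meat", True,  "homophone word"),            # sounds like "meet"
    ("lamp", True,  "control word"),              # invented control item
]

def run_trial(item):
    string, is_word, condition = item
    input("Press Enter when ready for the next item...")
    start = time.perf_counter()
    response = input(f"  {string.upper()}  word or nonword? (w/n): ").strip().lower()
    latency_ms = round((time.perf_counter() - start) * 1000)
    correct = (response == "w") == is_word
    return {"item": string, "condition": condition,
            "correct": correct, "latency_ms": latency_ms}

if __name__ == "__main__":
    random.shuffle(STIMULI)
    for result in [run_trial(item) for item in STIMULI]:
        print(result)
    # Phonological-recoding prediction: longer latencies for items like "burd" and
    # "meat", whose sound forms trigger "false matches" in the mental lexicon.
```

Averaging latency_ms within each condition gives the sort of comparison the next two slides report.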
Rubenstein et al. (1971)
Used a lexical decision task (word/nonword?).
Two types of nonwords: homophonous (with real words), like burd, and nonhomophonous, like rolt. Equally “wordlike.”
Longer latencies for burd. Similarly, longer for real homophones like meat.
Pointed to “false matches” in the mental lexicon.

More Data
McCusker et al. (1977) proofreading experiment:
– Homophonous typos (e.g., furst) went undetected more often than nonhomophonous typos (e.g., farst).
Gough and Cosky (1977) used the Stroop task:
– Nonwords homophonous with color words (e.g., bloo) led to more interference than control words (e.g., blot) or nonwords nonhomophonous with color words (e.g., blop).
I found readers took longer to process words with irregular “spelling-to-sound rules” (e.g., pint) than words with regular rules (e.g., hint) (Bias, 1978).

The Point
The reasons for this somewhat esoteric discourse on the psychology of reading are:
– To communicate the complexity that is human information processing,
– To illustrate the ways scientists go about answering questions about info processing, and
– To sensitize you to the sorts of things known about human behavior.

SO WHAT?
Given that we’re so all-fired complex, what does this have to say about how we design computer interfaces?
– Depth cues.
– Color perception.
– Effects of context on perception.
– What’s easy to read?
– Recognition vs. recall.

Resources
Matlin, M. W. (2009). Cognition. Hoboken, NJ: John Wiley and Sons, Inc.
Bias, R. G. (1978). Phonological recoding of words in isolation and prose. Unpublished doctoral dissertation, The University of Texas at Austin, Austin, TX. (Wink.)
http://www.david.tam.name/SelfTests/StroopEffects.html

Today’s song was “Ten Thousand Words” by The Avett Brothers. Why do you suppose we chose to play it before THIS class?