201 revisions
Lecture 1
Donders and the subtractive method
- Used a frog leg in his experiment
- Put electrodes at each end of the leg to measure how fast a neural response travelled from one electrode to the other
- If transmission were instantaneous, there would be no measurable time for neuronal transmission
- More neuronal distance = more time
- So we can calculate the speed of neuronal transmission
Donders' subtraction technique
- Can't measure the speed of thought directly, so we have to measure the time to respond instead. This is an indirect measure
Donders developed 3 tasks that he thought would demonstrate that we can measure thinking
- A: Simple detection – detect, then respond
- B: Choice response – detect, discriminate, select response, then respond
- C: Go/No-Go – detect, discriminate, then respond
Simple detection is one button, one light – respond always
Choice is two buttons, two lights
Go/No-Go is one button and two lights (e.g. red and green) – respond to only one of them
Donders concluded that if we subtract the simple detection response time (RT) from the Go/No-Go RT, this would reflect the amount of time taken to discriminate the stimuli (in this case, red from green)
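A minimal arithmetic sketch of this subtractive logic (the RT values below are invented for illustration, not data from the lecture):

```python
# Hypothetical response times (seconds) for Donders' three tasks
simple_rt = 0.220    # A: detect + respond
go_no_go_rt = 0.300  # C: detect + discriminate + respond
choice_rt = 0.360    # B: detect + discriminate + select response + respond

# Subtracting task A from task C isolates the discrimination stage
discrimination_time = go_no_go_rt - simple_rt
# Subtracting task C from task B isolates response selection
response_selection_time = choice_rt - go_no_go_rt

print(f"Discrimination: {discrimination_time * 1000:.0f} ms")
print(f"Response selection: {response_selection_time * 1000:.0f} ms")
```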
Assumptions behind Donders subtractive method
- Cognitive processes (stimulus detection, stimulus identification, response selection, response execution) occur in discrete stages such that the output of one stage forms the input of the next
- Only one stage/process may be operating at a time (so that means no parallel processing)
- The addition of a stage in no way changes the other stages (this assumption is often referred to as the assumption of pure insertion)
Lecture 2
Additive factors logic
- Driving from A to B
- There are two stages to this trip
- Therefore you can have things that can affect each stage
- We can apply neither, one, or both effects, but no two effects can be placed in the same stage
- So you can have 4 combinations like this
The two effects in this case are bad weather and bad road conditions, which produce bad driving conditions. Each effect adds a one-hour delay to its stage.
Our effects
If you had a bad trip and both conditions happened on the same trip, the total time would come to four hours, as each condition adds an extra hour
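A small sketch of the 2 x 2 driving example (assuming a one-hour baseline for each stage, which the notes imply rather than state):

```python
# Each effect slows a different stage by one hour; baseline is one hour per stage.
def trip_time(bad_weather: bool, bad_road: bool) -> float:
    stage1 = 1.0 + (1.0 if bad_weather else 0.0)  # weather affects stage 1 only
    stage2 = 1.0 + (1.0 if bad_road else 0.0)     # road conditions affect stage 2 only
    return stage1 + stage2

for weather in (False, True):
    for road in (False, True):
        print(f"bad_weather={weather}, bad_road={road}: {trip_time(weather, road)} h")
# Output: 2, 3, 3, 4 hours - the effects are additive (no interaction),
# which is what you expect when the two manipulations affect different stages.
```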
An interaction is when one effect depends on the other
- If there is no statistical interaction – only additive effects – this tells you that the two manipulations are influencing different stages of processing
Stages
This is similar to Donders' thinking of tasks that have to be completed in a sequential order
Differences between Donders and Sternberg
- Donders' subtractive method is based upon the idea of designing different tasks that differ by a single stage, e.g. choice vs Go/No-Go tasks (i.e. comparing driving A to B with driving A to C)
- Whereas Sternberg suggested we have people do the same task, always a choice task, but under various conditions (i.e. always drive A to B, but with different road conditions and routes)
Lecture 3- statistics
Lecture 4 – Signal detection theory
- Reality – what is really out there
- Perception – what we experience
It is easy to measure reality; for example, we could measure how bright something is using a light meter
Our perceptions are not always 100% accurate
Sometimes we miss seeing something (or hearing something)
Other times, we think we can hear or see something that is not there
You can either have a hit – the signal was present and you correctly detected it
Miss – the signal was present but you failed to detect it
False alarm – you thought you saw something that was not there
Correct rejection – you were correct in not seeing something (nothing was there)
In order to measure the strength of the perceived signal, we need to calculate something called d-prime (written as d')
Response bias can affect the accuracy of these tests
- Response accuracy may reflect this
- Things like rewards can affect the way you answer
D-prime
- The measure of our response bias is not as well established, but we will use one called 'c' (criterion)
- We can calculate these based upon the hit rate and false alarm rate
- Have to remember that our perception is based on what we see and extract from the stimulus
- We do not get the exact same amount of information on each trial, because sometimes we see it and sometimes we do not
- If we did have the same information every time, we would either always see or always miss the same information
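A minimal sketch of the standard d' and c calculation from the hit and false alarm rates (the rates below are invented for illustration; scipy's norm.ppf gives the z-transform):

```python
from scipy.stats import norm

hit_rate = 0.80          # hypothetical proportion of hits
false_alarm_rate = 0.20  # hypothetical proportion of false alarms

z_hit = norm.ppf(hit_rate)         # z-score of the hit rate
z_fa = norm.ppf(false_alarm_rate)  # z-score of the false alarm rate

d_prime = z_hit - z_fa             # sensitivity: how well signal and noise are separated
criterion = -0.5 * (z_hit + z_fa)  # response bias c: positive = conservative responder

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```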
(watch second half to understand calculations)
Lecture 5 and 6
Cognitive Neuroscience EEG
- Electrical signals come from the pyramidal cells
- These are found at the surface of the cortex and line up at right angles to the surface
Measure
- Electrodes on the scalp pick up these electrical signals
- So EEG is measuring the electrical signals produced by the brain activity
- MEG measures magnetic signals produced by the electrical current
EEG records all brain activity; however, not all of it is related to what we are interested in or trying to measure
- By averaging together a whole lot of presentations of the same stimulus event, we average out the unrelated brain activity
- All that is left is the event-related potential (ERP)
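A small simulation of this averaging idea (made-up numbers, not lecture data): the same small evoked response is buried in large random noise on every trial, and averaging many trials recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 500
t = np.arange(n_samples)

# The 'true' event-related response: a small bump around sample 150
erp = 2.0 * np.exp(-((t - 150) ** 2) / (2 * 20.0 ** 2))

# Each trial = the same ERP plus much larger unrelated brain activity (noise)
trials = erp + rng.normal(0, 10, size=(n_trials, n_samples))

# Averaging cancels the unrelated activity; what is left approximates the ERP
average = trials.mean(axis=0)
print(abs(average - erp).max())  # only a small residual of noise remains
```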
Topographies
- Just a way to view the activity of all electrodes at the same time
- So we might want to see what all the electrodes were doing 100 ms after the onset of the visual stimulus in our left visual field (just left of fixation)
Important point
- Because we measure the brain activity from a distance (the surface of the skull), we cannot be sure exactly where the signal comes from
- This is because the electrical voltage from one area could send its signal to a point on the skull and produce a voltage value of 3
- Another area could send its signal to the same point, where it contributes 4
- Our electrode picks up the sum, and tells us 7
- In that case, the electrode tells us we have a voltage of 7, which came from two areas that individually sent the values 3 and 4
- However, as far as we know, one area could have sent all 7, or 3 areas could have sent 3, 3, 1, or maybe two sent 6, 1 or 8, -1 (values can be negative)
So you have to be careful not to assume that the exact same brain areas are active just because the topographies are the same
Different waves
- Sometimes we might want to compare the brain activity between two quite different
situations
- For example, the sentence “I like tea with milk and sugar” makes sense
- However, the sentence “I like tea with milk and dog” does not
We might expect that the brain processes words that produce a semantic violation differently
from semantically expected words
N400
To make sure they knew when someone was reading a given word, they presented the words one at a time
Eg
The
Pizza
Was
Too
Hot
To
Cry
N400 effect
'Cry' does not make sense
They would also have 'correct' sentences, ones that made sense (like “the pizza was too hot to eat”)
They would make sure that the same words were used both as 'odd' and 'good' endings
- Would then record the ERPs to matching and mismatching words
Just like Donders would subtract one response time from another, EEG effects are often shown as 'difference waves'
In this case, the N400 effect is shown by subtracting the match condition from the mismatch condition
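A sketch of that difference-wave computation; the arrays below are random placeholders standing in for real per-condition recordings.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical single-trial data (trials x time samples) for each condition
match_trials = rng.normal(0, 10, size=(100, 500))
mismatch_trials = rng.normal(0, 10, size=(100, 500))

# Average within each condition, then subtract match from mismatch (as in the notes)
erp_match = match_trials.mean(axis=0)
erp_mismatch = mismatch_trials.mean(axis=0)
n400_difference_wave = erp_mismatch - erp_match
```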
Source estimations
- Now, we cannot know for sure where EEG signals come from; meaning, we cannot locate the sources of the signals with 100% certainty based on the recordings taken at the remote sensors (the electrodes)
However, if we build in some other information, like if we do not allow for sources in the
‘white matter’ and we do not allow sources to be placed in the ‘ventricles’ then the number of
possible solutions can be reduced, and plausible solutions can be found
-LORETA is a program that builds in such constraints
Lateralized potentials
Trying to compare conditions in this case becomes difficult because the lateralized stimuli
will activate different hemispheres, and moving each hand activates different hemispheres, so
subtractions will pick this up
Two lateralized potentials
- The N2pc and the LRP
- The N2pc has to do with lateralized stimuli and which one we’re attending
- The LRP has to do with which hand you are going to move (respond with)
Think encoding and N2pc
Think response selection and LRP
The N2pc is the same set of operations, except the electrodes are coded as being contralateral to the side the stimulus was on
The N2pc shows us that the voltage at the electrode over the opposite hemisphere of where
you were attending (to read the word) was more negative than the corresponding electrode
over the same hemisphere as the word
For the LRP, it means that the electrode over the hemisphere opposite to the hand you moved
is more negative relative to the electrode on the same side
The N2pc seems to require you to have located the stimulus that needs more processing (i.e., found the red one, and it is the one that you pay attention to)
So the N2pc may be related to stimulus encoding
The N2pc is bigger if there is more 'clutter' in the display; for example, if you have to decide whether the red line is vertical or horizontal in the display, the N2pc would be bigger than in the 'read the red word' task.
In fact, the N2pc can even be bigger than the response to a lateralized flash
SO TO CALCULATE LATERALIZED POTENTIALS
You need two conditions (i.e. attend left and attend right; move the left hand and move the right hand; stimulus on the left and stimulus on the right)
You subtract contra-ipsi electrodes for each condition (this flips one of the conditions)
And average those two sets of differences, and put the values on the right side of your map
and invert the sign and put that on the left of your map
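A sketch of that contra-minus-ipsi recipe for a single electrode pair; the waveforms are random placeholders standing in for the per-condition averaged ERPs.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500  # time samples

# Averaged waveforms at a left- and right-hemisphere electrode for each condition
left_elec_attend_left = rng.normal(size=n)
right_elec_attend_left = rng.normal(size=n)
left_elec_attend_right = rng.normal(size=n)
right_elec_attend_right = rng.normal(size=n)

# Contra minus ipsi for each condition (this flips one of the conditions)
diff_attend_left = right_elec_attend_left - left_elec_attend_left     # contra = right hemisphere
diff_attend_right = left_elec_attend_right - right_elec_attend_right  # contra = left hemisphere

# Average the two sets of differences to get the lateralized potential (N2pc/LRP style)
lateralized = (diff_attend_left + diff_attend_right) / 2.0
```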
Oscillations (changes in the way the brain responds)
The idea of the ERP is that when some event happens (when the stimulus comes on, when we find the target, when we decide to move, etc.) some brain activity happens and the same electrical signal is generated each time
This is why averaging many trials does not reduce the ERP
The brain activity cycles through positive and negative values
If the brain activity 'resets', meaning when the stimulus comes on the activity always starts at 0 and goes positive, then negative, then positive, then averaging gives us the ERP (like the visual ERP)
If the stimulus only changes how high or low the ongoing oscillations go, without resetting their phase, averaging cancels this effect out
Going from a large amplitude to a smaller amplitude = desynchronization
Higher amplitude = higher power
Delta wave= 0-4 Hz
Theta wave= 4-7 Hz
Alpha wave= 8-12Hz
Beta=13-30Hz
Gamma=30+ Hz
These different frequency bands are thought to be related to various aspects of mental
activity. However, there is no set rule as to what the bands mean. What band you might want
to examine requires reading previous work and experiments
As a rule of thumb, for synchronization, as the frequency gets higher, the distance between the parts working together gets smaller
So gamma synchronization is probably within small areas of cortex, while theta synchronization is probably between brain structures, etc.
So, once you've decided on what band is important, here's how we look for changes in these bands:
1. Filter the recording so that only the frequencies you are interested in get through
2. Average the filtered trials together – this gives you a filtered version of the ERP
3. Take the filtered ERP, go back to each individual trial, and subtract the ERP from the trace
4. Square what is left after subtracting the ERP
5. Average these squared voltages
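A sketch of those five steps in Python, assuming an alpha band (8–12 Hz), a 250 Hz sampling rate, and simulated trials; the band-pass filtering uses scipy's butter/filtfilt.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                            # sampling rate in Hz (assumed)
rng = np.random.default_rng(3)
trials = rng.normal(size=(100, 500))  # placeholder data: trials x samples

# 1. Filter so only the frequencies of interest (8-12 Hz here) get through
b, a = butter(4, [8.0 / (fs / 2), 12.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, trials, axis=1)

# 2. Average the filtered trials: the filtered (phase-locked) ERP
filtered_erp = filtered.mean(axis=0)

# 3. Subtract the filtered ERP from each individual trial
residual = filtered - filtered_erp

# 4. Square what is left, then 5. average the squared voltages
induced_power = (residual ** 2).mean(axis=0)
```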
Lecture 7
Attending information
Overt orienting- is when you direct your eyes towards the stimulus
Covert orienting – is when you attend to some location, but do so without re directing your
eyes
Our attention might be grabbed by some external event like a flash of light or a loud sudden
noise
When our attention is automatically captured by an external event, this is called exogenous orienting
However, we know we can choose to attend to some location
If we voluntarily orient our attention, then this is called endogenous orienting
Limited capacity
The information that we process is said to have a limited capacity
Buffers
The idea that some information is held, or stored, while some other information is being processed
Bottleneck
A point in the processing stream where information has to be buffered; it implies too much information is coming in, so the system has to deal with some of it and store the rest, and this creates a traffic jam on our mental information highway
Auditory attention tasks
Dichotic listening tasks- different stimuli are often presented to each ear
Monaural listening tasks- stimuli presented to one ear
Items presented are repeated back – this is called shadowing
This is to ensure the subject is actually paying attention
Focused vs divided tasks
In a focused attention task, subjects shadow one ear, meaning they only report back the items presented to that ear, or only count target words in that ear
A divided attention task would require the subject to repeat back all the items (from both
ears) or count target words in either ear
Broadbent found – Focused
When told to focus their attention on one ear, what was generally found was that they could
report this information accurately
But generally, were unaware of the content of the items in the non-attended ear
Broadbent found- focused
They could tell if something was presented to the other ear, but could rarely report any of the
items that were presented to that ear
Broadbent found- divided
When subjects had to report all of the items presented in both ears
If the items were presented slowly, they reported them back in pairs
Broadbent’s filter model
Presented in a very metaphorical way, not intended to suggest we have pipes in our heads
through which little balls are rolling
However, presenting a model in such a metaphorical manner can make it easier to understand
1. A buffer
2. Limited capacity
3. Bottleneck
4. Attention
Broadbent's model
- This is an early selection model of attention because he suggests attention operates before stimuli are identified
- That makes the role of attention something that selects information from the huge input of stimulation, and allows that selected information to be further processed
Problems with the model
The model makes some clear statements, which can be (and were) turned into predictions
Treisman's attenuation theory
Suggested that unattended information was not completely blocked; rather, it was simply attenuated (reduced)
In other words, attention allowed some information to pass through at full strength and all other information was made weaker
Basically, attention would select the information that gets 'full strength passage' at the earliest point where a difference could be determined, which might be:
Physical cues – syllabic patterns – specific words – grammar – gist
By the point we get to 'specific words', the meaning has been processed
Deutsch & deutsch (late selection)
-proposed idea that there is no capacity except for our ability to maintain information
Everything is fully processed for meaning
Treisman and Geffen told subjects to shadow the left ear but tap a key any time the word 'dog' was presented in either ear
If D&D were correct, then the unattended 'dog' should be easily detected, since we don't have to maintain the words
However, it was found that targets were more likely to be missed if they were in the non-attended ear
Lecture 8
Attention 2
Whole report- recall all items
Partial report- name only a few
Attention in the visual domain
Implies that
1. All of the items are stored initially
2. The storage is brief
3. People scan the information that is stored in order to report the items (move their
attention around the buffer)
4. The storage fades before all items get reported during whole report
Attention as a spotlight
Introduced by Posner, describing our attention as a spotlight.
Speed/accuracy tradeoffs
- It is absolutely vital to measure both response time and accuracy
- The reason for this is that if you only measured response time and found that people were generally faster to respond in one situation compared to another, you might conclude that the first situation is somehow easier
- Ideally you would check whether fast responses go with lower accuracy and slow responses with higher accuracy (a speed/accuracy trade-off)
- If accuracy is changing, and response times stay the same, we can still make comments about which task is easier (the one with higher accuracy, or lower errors)
- Visual attention research often employs what is called a 'cuing task'. This style of experiment was introduced by Posner
-Two attentional systems- (endogenous and exogenous)
All a subject in the experiment has to do is decide if a small dot appears in either the left box or the right box (simple detection)
Before the target (the dot) appears, however, they are shown an arrow pointing to one box or the other, or to both
Single arrows indicate that the target is 80% likely to appear in that box
Usually, errors are low and do not differ across conditions, so I will just show response time figures. However, one would analyse the response accuracy data and present it in a report or a publication!
- Posner thought that we were able to show that our attention could be cued reflexively, which is thought to reflect the exogenous system
- We can cue our endogenous attention by the use of cues and deciding to move our attention; because we have to keep our eyes still, this is covert orienting of endogenous attention
There are some other differences between endogenous and exogenous attention; these have to do with the time course of the effects
A time course study varies the amount of time that the cue appears before the target is presented
The time between the onsets of any two events is called the stimulus onset asynchrony, or SOA
- A reversal of the cuing effect is called inhibition of return
Time course experiments can be used to determine how fast attention can move to get to some location
Probe trial: a string of at least 5 letters names either a living or non-living thing; participants have to respond whether it is living, and then have to detect whether there is a spot over a letter by responding with a button press
- Then, to adjust the size of the attention spotlight, change the living/non-living judgement to whether the middle letter is a vowel
Object-based attention: attention is not only distributed over space but also over objects. Using non-informative peripheral cues means we are talking about exogenous attention
Lecture 9
Working memory
Atkinson & shirffrin 3 stage model
Memory is not just 1 system
Sensory memory – short term/working memory- long term/reference memory
-sensory memory
 Iconic (visual)
 Echoic (auditory)
 Tactile
 Taste
 Smell
Attentional systems are between stages (exo or endo)
For example, between sensory and short term
Why 3 memory systems?
Each of the stages have different properties
If they all had the same properties, then we would just conclude that we have one memory
system
Sensory memory systems
- Appear to be modality specific (one for vision, one for audition)
- They appear to have unlimited capacity (from the whole report studies)
- They have a very short duration
- If the information is attended to, it can be maintained for a longer period because it
gets shifted into a different memory system
Early selection vs late selection
Sensory memory – short term/working memory – long term/reference memory
If attention operates between sensory memory and working memory, this is early selection, as there is no semantics before attention
If attention operates later, after the link with semantics, this is late selection
Working memory
- Has a longer duration but is not able to hold as much information
- Working memory has a limited capacity which then creates a bottleneck between
sensory memory and working memory and attention is thought to act like a
traffic cop that selects which information to let through
Rehearsing information allows you to hold onto information in working memory
The primacy effect- is thought to reflect the fact that the early items are the ones that get
rehearsed the most. And, this rehearsal transfers those items into reference memory (long
term memory)
The recency effect - is thought to reflect the fact that the last items presented are still in the
sensory memory and can still be attended to before they fade
To get rid of the recency effect, we can use a delayed recall task. Say count backwards from
10 after the last letter is presented
-this additional task ties up attention and now the items in the sensory memory will fade
Baddeley & Hitch’s model
- Proposed that working memory was not simply a passive storage system, but was rather a collection of systems that actively processed the information that was put into working memory
- Suggested that working memory was broken down into a system that was auditory in nature (the phonological loop), one that was more visual in nature (the visuo-spatial sketchpad), and a system that was under voluntary control (the central executive) that was used to maintain each of the previous two systems (sort of similar to endogenous attention)
Components: central executive (similar to attention), phonological loop, visuo-spatial sketchpad, episodic buffer
Model of working memory is often examined using dual tasks
The idea is that if you have to do two things at the same time and they both use the same part of working memory, then the two tasks will interfere with each other
But if each task uses different parts of working memory, then each part can do its job, and the tasks will not interfere with each other
Turns out, that finger tapping does not remove the primacy effect, which demonstrates that
not just any secondary tasks will cause interference
If you present words visually, people will rehearse them auditorily. So although working memory has been broken into an auditory system and a visual one, working memory as a whole is not as modality specific as the sensory memories. We can rehearse auditorily what we see (written words)
Lecture 10
Amusia and tonal memory
How to calculate a note
- Sound waves are measured in Hz, hertz (cycles per second)
- One octave higher than 440 Hz is 880 Hz, so an octave doubles the frequency
- There are 12 equal steps in 1 octave
- 12 steps = 12 semitones
- Semitones sound equally different, but the change in physical frequency is not equal
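A quick numeric check of this (assuming the octave runs from A = 440 Hz up to 880 Hz): each semitone multiplies the frequency by 2^(1/12), so the steps are equal ratios but unequal in Hz.

```python
base = 440.0           # A4; the 880 Hz note in the notes is one octave above this
ratio = 2 ** (1 / 12)  # one semitone = a frequency ratio of 2^(1/12)

prev = base
for step in range(1, 13):
    freq = base * ratio ** step
    print(f"semitone {step:2d}: {freq:7.2f} Hz (step size {freq - prev:5.2f} Hz)")
    prev = freq
# The final step lands exactly on 880 Hz; the Hz step sizes grow even though
# each semitone 'sounds' like an equal step.
```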
Amusia
- It is harder to remember sequences of similar-sounding tones
- The tones are harder to distinguish for amusics; therefore, they must sound more similar to them
- So does this mean it is really a memory problem or a perceptual problem?
- Amusics also listen to music much less than the rest of the population
- So they are less experienced with music
- So they can't become experts (less practice) – Chase & Simon
- Due to memory failure? Or the social environment?
Claire lecture 1
Introduction to language
What is language?
Communication is an important component of language but not the only one
Language
- System of symbols and rules that enables us to communicate
Communicate
- Imparting or exchanging information by speaking, writing or using some other medium
Speak
- Say something in order to convey information or to express a feeling
Four main language components
- Listening (input)
- Reading (input)
- Speaking (output)
- Writing (output)
Functions of language
- Communication
- Thinking
- Expressing emotions
- Pretending to be an animal
- Expressing identity with a group
Kanzi
- Started learning symbols from humans at nine months of age, but his step- sister
Panbanisha began at birth
Is language innate
- Professor Noam Chomsky, prominent linguist
- Meaning and grammar can be separated
- Humans possess a language acquisition device consisting of an innate
knowledge
Innate or learned
- The innateness idea stems from observations of the ease with which children typically learn spoken language
- Criticized by people who do not believe that an innate grammar can be general enough to account for the learning of such different languages
Genetic evidence that innate factors are important
- Twin pairs show 'strong genetic' influences on both structural and pragmatic language impairments in children
- FOXP2 gene
Link between language and thought
- Controversial theory developed by Benjamin Lee Whorf
- Language determines or influences thinking
- The hypothesis of linguistic relativity
Sapir-Whorf hypothesis
- Certain thoughts of an individual in one language may not be understood by people who use a different language
- The way people think is strongly affected by their native language
Strong – differences in language cause differences in thought
Intermediate – language influences cognition and memory
Weak – language may cause preferences for certain ways of thinking
Evidence suggests that the effects of language on thinking are rather weak
Evidence for the Whorfian hypothesis
- Berinmo has only 5 basic colour terms, whereas English speakers regard the two greens as similar
- Presented 3 coloured stimuli – choose the two that are most similar
- Both groups showed categorical perception based on their own language, e.g. blue and green most similar for Berinmo speakers, and the two greens most similar for English speakers
Language influences perception
What about modern trends
Gender neutral language
- If the reader pays attention he will notice
People-first language
- 'The deaf man' versus 'the man who is deaf'
Language that reflects tolerance of differences and avoids stereotypes
Claire lecture 2
Reading and speech perception
Orthography vs phonology
Letter sound knowledge
-what sound do these letters make ‘a’ ‘oo’ ‘ck’ ‘gh’ ‘ch’
(bat, book, clock, rough, yacht)
Phoneme awareness
- What sounds are in ‘cat’
- What rhymes with ‘cat’
- What word do you get if you take ‘c’ off and add ‘bl’ to the start of the word
Research in psychology
- Lexical decision task: deciding whether a string of letters forms a word
- Naming task: saying a printed word out loud as quickly as possible
- Eye movement tracking: measures attention to words, but not all reading processes
- Priming: a prime word is presented shortly before the target word; the prime is related in orthography, meaning or sound
- Event-related potentials: time taken for processes to occur, e.g. N400
Phonology
- The sound system of a language
- There are regional differences, but English generally described as having 45
phonemes
Phonetic features
- Voicing
- Placing
- Manner
- Coded by the IPA
Phonological awareness
- The ability to hear and consciously break words into syllables, rhyme, onset and
rime, and individual sounds or phonemes
- C-AT
- C-ATAPULT
- C-ATASTROPHE
Homophones
- Same sound but different spelling, e.g. rose/rows, made/maid
- Also called homophonic (same sound)
- Heterographic (different spelling)
Claire 3
Reading
Phonological priming is when a word is immediately preceded by a phonologically related non-word prime (such words are processed faster)
Word processing
- Interactive activation
- Dual route cascaded
- Connectionist triangle models
Interactive activation levels: feature level → letter level → word level
Word superiority effect
- A letter string is presented very briefly, followed by a pattern mask
Orthographic neighbours
- stem, step, stew
- Activation of these can:
- Facilitate (speed up) recognition if they are less common than the target, or
- Inhibit (slow down) recognition if they are more common than the target word
Strengths
- Influential example of how a connectionist processing system can be applied to visual word recognition
- It accounts for the word superiority effect and the pseudoword superiority effect
Limitation
- no account of the role of meaning
- does not consider phonological processing
- too much importance on letter order (cannot explain reading words with jumbled
letters)
- does not account for longer words
Does the interactive activation model of word reading work?
It accounts for:
- The word superiority effect
- Top-down lexical knowledge affecting word recognition
reading phenomena that models need to explain
- Phonology and meaning can be separately affected in developmental or acquired dyslexia
- word superiority effect
- orthographic neighbor effect (letter features)
- semantic priming (context effect)
- rapid context effect
- word frequency effects
- word consistency
context effects words in sentences
- reading is affected by context (i.e. top down effects)
- measured experimentally using semantic priming task
- is decision time in a word reading task affected by meaning of preceding word
semantic priming effect
- lexical decision task. – is 2nd word a real word or nonword
- nurse – doctor
- library-doctor
- faster decision if previous (priming) word is semantically related
- suggests that priming word automatically activates stored words in lexicon that
are related to the priming word
- OR priming word changes expectation (expect that a related word will follow)
Lecture 4 Claire
Reading processes
Dual route cascade model (DRC)
Has been used to study reading aloud and silent reading
Proposes two main routes that both start with orthographic analysis (identifying and grouping
letters on the page into words)
Different processes when we read words vs non-words (using route 1)
Types of words used in reading tests
-regular
Irregular
Non-words
Lexical and non-lexical routes
- Route 1 (non-lexical)
- Grapheme-phoneme conversion (convert letters into sounds)
Allows accurate reading & pronunciation of regular words and non-words (as long as they obey grapheme-phoneme rules)
Routes 2 & 3 (lexical)
- Naming visually presented words relies more on this route because its faster
- Route 2 = lexicon plus semantic knowledge (knowing word meaning)
- Route 3 = lexicon only (meaning not involved. Just know how the word us
pronounced)
Dyslexia
Surface dyslexia
- Rely on route 1, can read regular and nonwords via grapheme-phoneme
mapping.
Phonological dyslexia (more common)
- Very poor at reading nonwords; cannot read new or unfamiliar words
DRC can explain different types of dyslexia but NOT deep dyslexia
Deep dyslexia (uncommon)
- Problems reading unfamiliar words and inability to read nonwords
- Semantic reading errors (e.g. ship read as boat)
Criticism of the DRC model
- Consistent words have letter groupings that are always said the same way, e.g. face, mace, pace, ace, base, case
- Inconsistent words have letter groupings that are said in different ways in different words, e.g. though, bough, rough
- Consistency matters
Other criticisms of the DRC model
- Does not account for individual differences
- Assumes phonological processing of visual words is slow
- Ignores semantic processes
- English writing system only
(look at Claire 4) – triangle model
Strengths
- Evidence supports the notion that orthographic, semantic and phonological systems are used in parallel
- Greater emphasis on the involvement of semantics in reading aloud
- Includes an explicit learning mechanism
Limitations
- Focus on simple monosyllabic words (as for other models)
- Does not explain all cases of dyslexia (e.g. completely intact performance on
phonological tasks but with impaired semantic access)
- Nature of semantic processing not fully explained
What our eyes do during reading
Saccades
- Ballistic
- Once initiated, their direction cannot be changed
- Take 20-30 ms to complete
Spillover effect: a word is fixated longer during reading when preceded by a rare word rather than a common one
Lecture 5 Claire
Speech perception
Processes involved in speech perception and comprehension:
1. Decode complex acoustic signal into phonetic segments (extract discrete elements)
2. Identify syllables and words
3. Word recognition
4. Comprehension
5. Integrate into context
Energetic and informational masking
Energetic masking occurs when frequency content and loudness of competing sounds mask
the speech signal (bottom up)
Informational masking relates to cognitive load and attentional factors (top-down factors)
Segment speech
- Certain sequences of sound are never found together within a syllable (so
indicates a syllable boundary)
Context (top down) effects
- Interactionist view – context influences processing at an early stage
- Autonomous view – context has a late influence
Lecture 6 Claire
Context (top down) effects
Interactionist view- context influences processing at an early stage
Autonomous view- context has late influence
Sentence context has an almost immediate effect on word processing, supporting the interactionist view
Ganong effect on categorical perception
- Word context affects categorical perception
- Referred to as lexical identification shift
TRACE model
- Network model based on connectionist principles
- Assumes interaction of both bottom and top down processes
- Three levels (speech features, phonemes, words)
Key components
- Speech features nodes connected to phoneme nodes connected to word nodes
- Nodes influence each other in excitatory or inhibitory manner to produce a
pattern of activation (trace)
- Connections at the same level inhibit
- Bottom up activation from word features proceeds upwards, context effects
cause downward activation
- The word recognised will depend on activation level possible candidate words
For and against the TRACE model
For:
- Explains the lexical identification shift
- Includes interactive effects
- Predicts word frequency effects on speech perception
- Copes with noisy data
Against:
- Exaggerates the importance of top-down processing
- The model assumes top-down processing will compensate
- Problems with the variable timing of speech rates between speakers
- Relies heavily on computer simulations of recognition of one-syllable words
- Does not consider all factors, such as the effects of written form on word recognition
Cohort model
- Words similar to what is heard are activated (the word-initial cohort)
- Words are eliminated if they do not match further information, such as semantic or other context
- Processing continues until all other possibilities are eliminated (the recognition point / uniqueness point)
- Context affects word recognition at a fairly late stage of processing
- Supporting evidence from cross-modal priming
Facilitation effect
- For target visual probes in each of the three contexts, even at a late stage in word processing, results were consistent with the revised cohort model, as the facilitating effects of context were most evident at a later stage of word processing
Cohort model
Strengths
- The idea that competitor words are processed is correct
- Processing of words is sequential
- Uniqueness point is important
- Context effects occurs at integration stage
Weakness
- Context can affect word processing earlier
- De-emphasises the role of word meaning
- Ignores the fact that prediction of upcoming speech is important
Route 1
- Heard word activates meaning and spoken form
Route 2 (input lexicon)
- Meaning of heard words not activated
Route 3 (phonological – deep dysphasia)
- Rules about converting heard word into spoken form
Claire 7
Comprehending sentences
Parsing- the way we separate parts of a sentence, looking at individual information within the
sentence
Four possibilities for the relationships between syntactic and semantic processing
- Syntactic analysis generally precedes (and influences) semantic analysis
- Semantic analysis usually occurs prior to syntactic analysis
- Syntactic and semantic analysis occurs at the same time
- Syntax and semantics are very closely associated and have a hand-in-glove relationship
Grammar
- Nothing more than just a simple system
Ambiguity can either be lexical (single words) or syntactic (sentence/utterance level)
Syntactic ambiguity
- Sometimes called structural
- Where the meanings of the component words can be combined in more than one way
Two stage models
- Stage one only uses only syntactic information to process the sentence
- Stage two uses semantic information
- Of importance is the temporal aspect of the semantic information- when is it
used in parsing
Most common model: the garden path model
- Use stage one first, and then go on to stage two if the sentence is not fully understood
- Minimal attachment – the grammatical structure producing the fewest nodes is preferred
- Late closure – new words encountered in the sentence are attached to the current phrase
Garden path model
Strength
- Simple and coherent account
- Minimal attachment & late closure often influence the selection of initial sentence structures
Limitations
- Word meanings can influence assignment of structure
- Prior context and misleading prosody can influence interpretation earlier than
assumed
- Hard to test the model
- Does not account for cross-language differences
One stage models
- All sources of information (syntactic and semantic) called constraints are used at
the same time to construct a syntactic model of the ambiguous sentence
Constraint based theory
Uses four language characteristics to resolve ambiguities
- Grammatical knowledge constrains possible sentence interpretations
- Various forms of information associated with a given word interact with each
other
- A word may be less ambiguous in some ways than others
- These permissible interpretations differ according to past experience
Verbs
- Are important constraints that can influence parsing
- Verb biases are formed through past experience
Lecture 8 Claire
Language comprehension 2
Concerned with language use and understanding
Intended rather than literal meaning
Often involves drawing inferences
Autism spectrum disorder
- People with ASD are often poor at distinguishing between literal and intended
meanings
- Find social communication very difficult
- Weak central coherence
Standard pragmatic model
- Literal meaning is accessed
- Reader/listener decides if it makes sense in the context
- If it does not, then we search for a suitable non-literal meaning that makes sense
Predication model
- Explains metaphor understanding
- Latent semantic analysis component
- Construction-integration component
Co-operativeness principle
- Speakers & listeners work together to ensure mutual understanding
- When there is a failure of communication between speaker and listener, we rely on common ground to repair it
Egocentric heuristic
- It is effortful for listeners to keep working out the common ground between them and the speaker, so they may instead use the egocentric heuristic
Lecture 9 Claire
Discourse processing
- Beyond sentences and single words , looking at a larger text sample
- Have to draw inferences to get from one sentence to another
Types of inferences
Logical inferences
- Depend on the meaning of the individual word
- E.g. widow (implies female)
Bridging inferences
- Establishing coherence between current and preceding text
Elaborative inferences / predictive inferences
- Add details to text from general knowledge
Theories of which inferences we draw
Constructionist approach
- Numerous elaborative inferences are drawn when we read a text
- Readers construct a relatively complete mental model of the situation and events referred to in the text
- Comprehension requires our active involvement to supply information
Minimalist approach
- Only a few inferences are drawn when we read a text
- Automatic when reading
- Establishes local coherence (chunking words)
- Relies on information readily available because it is explicitly stated
- Formed in pursuit of our goals as readers (goals vary for why you are reading)
- Elaborative inferences are made at recall rather than during reading
Anaphoric resolution
- The simplest form of bridging inference
- A pronoun or noun has to be identified with a previously mentioned noun or noun phrase
Complex inferences
- Causal inferences are a common form of bridging inference
- Assuming something caused something else
- 'Ken drove to London yesterday; the car kept overheating'
- Assuming he was driving an old car
- Bonding (a low-level process involving automatic activation of words)
- Resolution (ensuring the overall interpretation is consistent with contextual information)
Evaluation
- Readers and listeners form bridging inferences to make sense of text
- We use contextual information and our world knowledge to form inferences
- They are often drawn automatically
- Readers' goals influence predictive inferences
- Readers with superior reading skills draw more inferences
Discourse comprehension
- We remember main themes/events and leave out minor detail
- We have schemas about the world and people's actions, used when recalling or describing what we have read
Bartlett's theory
- Schema theory – helps us understand a story
- Rationalization – comprehension errors made to make the story fit expectations
- Memory for precise details is forgotten; schematic information is not
- Schemas influence comprehension and retrieval of information
Schema evaluation
Strengths
- Schematic knowledge helps with text comprehension and general understanding
- Double dissociation in the neuropsychology literature between schema based and
lower level knowledge impairments
Weakness
- Differences in definition of schema
- Schema theories are hard to test
- When schemas are used is unclear
- How they are used is unclear
- Exaggerate how error prone we are
- Schemas affect both retrieval and comprehension processes
Event-indexing model
- As we read, we are processing multiple layers of information
- Five key aspects:
- Protagonist
- Temporality
- Causality
- Spatiality
- Intentionality
Lecture 10 Claire
Language production
Speaking vs writing
Four key differences
1. Speakers generally know who is receiving the message
2. Speakers mostly receive moment-by-moment verbal and non-verbal feedback (e.g. boredom); consequently, speakers often adapt what they are saying in response to feedback
3. Speakers generally have less time to plan what they are going to say, whereas when writing you have more time
4. Writers usually have access to what they have written even right after they have written it, whereas when you say something it is gone; you cannot go back and retract it
Spoken language, generally informal , simple, quick
Written language, formal, complex
Spoken language
-we use strategies to reduce processing demands
preformulations
Approximately 70% of what we say has been said before; we have groups of words that we use on multiple occasions
Underspecification
- Simplified expressions; meaning not fully expressed
- E.g. 'or something' and 'things like that'
Speech production levels
Semantic → syntactic → morphological → phonological
Speech planning
- Occurs at the level of the clause or at the level of the phrase
- The tension between speaking rapidly and speaking fluently is resolved by flexible planning
Speech errors
Spoonerism
- When the initial letters of two words are switched
Freudian slip
- Believed to reveal the speaker's true sexual desires
Semantic substitution
- A word is replaced by another with a similar meaning, e.g. 'Where is my tennis bat?' (for racquet)
Morpheme exchange error
- Inflections are in place but attached to the wrong word
- Attaching the wrong endings to different words
Word exchange errors
-two words in sentence are swapped
Number agreement error
Singular verbs mistakenly used with plural subjects
Error detection
Comprehension system
- Through the perceptual loop, we detect our own errors by listening to ourselves
Conflict-based account
- Detection relies on information generated by the speech production system itself, rather than a comprehension system
Claire 11
Theories on speech production
Spreading activation (Dell 1986)
- When we are preparing to say something, we form a representation at the four levels: semantic, syntactic, morphological, phonological
- The more complex the utterance, the more spreading occurs
- Nodes within the network vary in activation (energy)
- Categorical rules constrain the categories of items and the combinations of categories that are acceptable
- Insertion rules select the items for inclusion at each level
1. Mixed error effect – the incorrect word is semantically & phonemically related to the target (lexical neighbourhoods)
2. Lexical bias effect – errors tend to be real words rather than nonwords
3. Anticipatory errors – a sound is spoken earlier in the sentence than expected
4. Exchange errors – two words within the sentence are swapped
5. Errors involve words within a short distance
Most speech errors are either:
- Anticipatory – sounds/words are spoken ahead of their time
- Perseveratory – sounds or words are spoken later than they should have been
Levelt's WEAVER++
- Word form encoding by activation and verification
- Three main levels:
- Nodes representing lexical concepts
- Nodes representing lemmas (representations of words) – when the word is on the tip of the tongue, you can't quite think of it
- Nodes representing word forms (morphemes) – sounds attached to the beginning or ending of words
Meta-analysis of studies assessing brain activity – timings:
- Conceptual preparation: 200 ms
- Lemma retrieval: 75 ms
- Phonological code: 20 ms
- Production: 600 ms
Weaknesses of WEAVER++
- Narrow focus (single word production)
- In reality there is more interaction between processing stages than the model assumes
- Evidence suggests more parallel processing than the model allows
Aphasia
- An acquired language deficit in the modalities of listening, speaking, reading and writing
- A central language disorder
- Language impairment when all other intellectual, motor and sensory functions are intact
- Caused by brain damage – stroke, brain tumour, traumatic brain injury (not born with it)
- Broadly divided into fluent and non-fluent on the basis of neuropathology and expressive output
- Occurs at all levels of communication, including anything to do with numbers
- Word-finding difficulty (anomia) is the underlying characteristic
Is tip of the tongue a mild form of aphasia?
Broca's aphasia
- Nonfluent/motor aphasia
- Slow, halting speech (dysfluent)
- May have presence of agrammatism
- Reduced phrase length
- Reduced syntactic complexity ('telegraphic')
- Omission of function words and grammatical morphemes
Wernicke's aphasia
- Fluent/sensory aphasia
- Fluent speech
- Normal melodic line
- Sentences can be long, with complex grammar
- Poor comprehension
- Not necessarily damage to Wernicke's area
Distinction between Broca's and Wernicke's aphasia
1. It implies that brain-damaged people all have similar patterns of impairment
2. There are several brain areas involved in language processing and they're interconnected in complex ways
3. People with Broca's aphasia sometimes also have damage to Wernicke's area and vice versa
4. The finding that people with Broca's aphasia have greater difficulty speaking grammatically is language specific
5. The traditional view focuses too much on specific impairments (e.g. word finding), but people with aphasia can also have problems with attention and memory
Anomia
- An impaired ability to name objects
- Errors based on meaning
- Errors based on phonology
Agrammatism
- Produce short sentences containing content words
- Lack of function words (the, in, and)
- Lack of inflections
Jargon aphasia
- An extreme form of fluent aphasia
- Severe problems with grammatical processing – both comprehension and production
- Severe word-finding difficulties
- Numerous neologisms
- Typically unaware of their errors (poor self-monitoring), so do not attempt to self-correct
- Seem to speak fairly grammatically
Claire lecture 12
Speech as communication
- Tailoring speech to the needs of the listener
Four maxims
- Relevance – only say things relevant to the situation
- Quantity – only give as much information as necessary
- Quality – only give information that is truthful
- Manner – modify what you are saying so that listeners can understand
Co-operativeness principle
- Some maxims are ignored
Audience design
- Speakers need to take account of the specific needs of their listeners (see common ground)
Syntactic priming
- Speech tends to follow syntactic structure that has been heard recently (e.g.
passive construction)
Gestures
- Used to aid understanding and clarification
Prosodic cues
- Intonation used to aid meaning
Common ground
- Includes representations of information that is physically, linguistically, and culturally co-present
- Speakers generally focus on their own needs
- Common ground is more likely to be used when there is interaction
Gesture
- Used by speakers
- The timing and meaning of gestures and the words being spoken are coordinated
- People who are blind from birth also gesture
Why?
- A type of self-clarification
- Could be reflexes
- Self-cueing
Discourse markers- words that add no linguistic purpose
Writing
- Involves retrieving and organizing information stored in long-term memory
- Suggests writing is a form of thinking
Key processes (not really examinable)
- Proposer – ideas for expression
- Translator – converts the message to a word string
- Transcriber
- Evaluator
Two strategies used in planning stage
Knowledge telling strategy- write down everything about topic
Knowledge transforming strategy - occurs with increased writing ability
Knowledge crafting stage
-when focusing on reader
Knowledge effect – the tendency to assume that other people share the knowledge we possess
Hugh lectures
System 1 and system 2
- Fast thinking and slow thinking
Mentalese
- The language of thought
Reasoning and deciding
- Deductive reasoning with conditionals and quantifiers
- Induction – hypothesis formation and evaluation
Decision making and risk
- Thinking probabilistically
- Cognitive biases and heuristics
Problem solving
- Varieties and vagaries
- The curious case of insight
Creativity
- Its measurement and models
Different kinds of thinking
- Modes or ways of thinking
System 1
- Fast
- High capacity
- Parallel
- Nonconscious and autonomous
- Biased responses
- Contextualized
- Associative
- Does not require working memory
System 2
- Slow
- Capacity limited
- Serial
- Conscious and deliberate
- Rational
- Abstract
- Rule- based
- Requires working memory
System 1
- Evolved early
- Similar to animal cognition
- Implicit knowledge
- Basic emotions
System 2
- Evolved late
- Distinctively human
- Explicit knowledge
- Complex emotions
Cognition
Unbounded rationality – formal logic, Bayesian logic, expected utility theory – requires System 2
Bounded rationality – cognitive biases, heuristics, intuition – System 1 activated
Reasons why language of thought is not a natural language
1. Preverbal children cannot think
2. Animals cannot think
3. Sensorimotor control is a form of fast thinking
4. Humans understand ambiguity- thoughts are unambiguous
5. Natural language is rarely logically explicit
6. Computers thinking in a natural language
Deductive vs inductive reasoning
- Deductive reasoning is often reasoning from general situations to specific instances; from the top (the theory) down to the predicted results
In contrast
- Inductive reasoning is reasoning under less-than-certain circumstances, where probabilities are important
Deductive inference with conditionals
Premise 1
- States the general position that if such and such is true, then something follows from its being true
Premise 2
- Is the actual situation or fact or observation
The conclusion follows from premises 1 and 2
The 'if ...' part = the antecedent
The 'then ...' part = the consequent
Modus ponens (or affirming the antecedent)
1. If A then B
2. Given that A holds
3. Then we can infer that B holds
Modus Tollens (denying the consequent)
1. If A then B
2. Given that B does not hold
3. Then we can infer that A does not hold
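A small truth-table check (in Python) that both argument forms are valid, i.e. whenever all the premises are true, the conclusion is also true:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # 'If A then B' is false only when A is true and B is false
    return (not a) or b

for a, b in product([True, False], repeat=2):
    # Modus ponens: from 'if A then B' and 'A', conclude 'B'
    if implies(a, b) and a:
        assert b
    # Modus tollens: from 'if A then B' and 'not B', conclude 'not A'
    if implies(a, b) and not b:
        assert not a

print("Modus ponens and modus tollens are valid in every case")
```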
Hugh 2
The cycle of scientific reasoning: observation → (induction) → law/theory → hypothesis → (deduction) → observation
System 1 is quick to look for a pattern and, if one is found, to treat it as nonrandom and predict that the pattern will continue
System 2 can be engaged to realise that more than one rule can underlie the given sequence, and hence that narrowing down the possibilities involves disconfirming potential rules
How might the process of inductive hypothesis formation take place?
1. Holist strategy: the learner takes all of the features of the first positive instance as the initial hypothesis and, as more instances are successively presented, eliminates any feature in the set that does not occur with a positive instance
2. Partist strategy: the learner takes a subset of the features of the first positive instance as the initial hypothesis and, as more instances are successively presented, adds or subtracts features to maintain consistency