
All Cognitive Psychology revision

Cognitive Psychology
1956: cognitive psychology emerges as the scientific study of our internal information-processing "machinery", the mental processes between input and output.

The term “cognition” stems from the Latin word “cognoscere”, “to know”.
Cognitive psychology is the scientific study of how the mind, as an information processor, acquires knowledge and responds to it.
Cognitive psychologists try to build up cognitive models of the information processing that goes on inside people’s minds,
including perception, attention, language, memory, thinking, and consciousness.
Fundamentally, cognitive psychology studies how people acquire and apply knowledge or information. It is closely related to
the highly interdisciplinary cognitive science and influenced by artificial intelligence, computer science, philosophy,
anthropology, linguistics, biology, physics, and neuroscience.
Cognitive psychology became of great importance in the mid-1950s. Several factors were important in this:
1. Dissatisfaction with the behaviourist approach and its exclusive emphasis on external observable behaviour
rather than internal processes.
2. Research in linguistics: Noam Chomsky’s emphasis on the mental processes we need for language. The
human inborn capacity for language contradicted the behaviourist claim that language acquisition can
be wholly explained by learning principles.
3. Research in memory: studies of memory distortion that could not be explained by behaviourist reinforcement,
and newly proposed memory models.
4. In developmental psychology: Jean Piaget’s work on object permanence.
5. The development of better experimental methods.
6. Information-processing approach: comparison between human and computer processing of
information happening in stages. A number of information-processing models were proposed (e.g. the
multi-store / Atkinson-Shiffrin model with short-term and long-term stores), although memory is now
understood to be much more complex than this.
Cognition (or mental activity): the acquisition, storage, transformation and use of
knowledge
Cognitive approach: emphasises people’s knowledge and mental processes
Behaviourist approach: (behaviour is learnt) one must focus only on objective, observable
reactions and on the environmental stimuli that determine behaviour
Although the behaviourist approach opposed the cognitive approach, it did contribute to the
field, e.g. by emphasising the importance of operational definitions (clearly defined research
questions) and of empirical data obtained through careful, controlled observation and
measurement of behaviour
Gestalt psychology: emphasises our basic human tendency to organise what we
see as a whole (e.g. remembering a figure more effectively as a face)
One challenge to cognitive psychology is whether its studies have ecological validity (whether
the conditions of the research resemble the real world).
Cognitive neuroscience: the search for brain-based explanations of cognitive processes, using
brain lesion studies, PET scans, fMRI, ERPs and single-cell recording.
The computer metaphor proposes that our cognitive system is like a computer: a multipurpose
machine that processes information quickly and accurately. Artificial Intelligence (AI) approaches
to cognition may design computer programs that accomplish cognitive tasks as efficiently as
possible (pure AI, not human-like) or programs that accomplish tasks in a human-like fashion
(computer simulation, which takes human limitations into account).
Parallel Distributed Processing (PDP) Approach: unlike AI models in which processing happens through
stages one at a time (serial processing), PDP suggests processes happen simultaneously in a neural network.
It began with the investigation of the cerebral cortex (the outer layer of the brain responsible for cognitive
processes) and the numerous connections among neurons. It proposes that a) cognitive processes operate in a
parallel fashion, b) neural activity is distributed throughout a relatively broad region of the cortex, c) an
activated node can excite or inhibit other nodes with which it is connected, d) learning is associated
with a strengthening of connections between nodes, and e) cognitive processes can be completed even when the
supplied information is incomplete or faulty.
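The PDP claims above can be illustrated with a minimal toy network. The weights, update rule and learning rate below are invented for illustration, not taken from any published model; it simply shows nodes updating together, connections exciting or inhibiting each other, and learning as strengthening links between co-active nodes.

```python
# Toy PDP sketch (illustrative numbers only): all nodes update together
# (in parallel), connections can excite (+) or inhibit (-), and
# "learning" strengthens links between co-active nodes.
weights = [            # hypothetical 3-node network, weights[i][j] = link j -> i
    [0.0, 0.8, -0.5],
    [0.8, 0.0, 0.3],
    [-0.5, 0.3, 0.0],
]
activation = [1.0, 0.0, 0.0]          # only node 0 active at first

def clip(x):
    """Keep activations in the 0..1 range."""
    return max(0.0, min(1.0, x))

for _ in range(5):                    # every node updates simultaneously
    activation = [
        clip(activation[i] + sum(w * a for w, a in zip(weights[i], activation)))
        for i in range(3)
    ]

# Hebbian-style learning: co-active nodes strengthen their connection
rate = 0.1
for i in range(3):
    for j in range(3):
        if i != j:
            weights[i][j] += rate * activation[i] * activation[j]

print(activation)    # node 1 was excited by node 0; node 2 stayed inhibited
```

After a few parallel updates, node 0 has excited node 1 while keeping node 2 suppressed, and the learning step has strengthened the active connection: a crude picture of points a), c) and d) above.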
Cognitive Science: Tries to answer questions about the mind; it includes disciplines such as psychology,
neuroscience, computer science, philosophy, linguistics, anthropology and economics.
Themes of Cognitive Psychology
1) Cognitive processes are active rather than passive (as behaviourists believe); they search and synthesise.
2) Cognitive processes are remarkably efficient and accurate.
3) Cognitive processes handle positive information better than negative information.
4) Cognitive processes are interrelated with one another; they do not operate in isolation (e.g. decision making requires memory).
5) Many cognitive processes rely on both bottom-up and top-down processing.
Thalamus: the thalamus takes sensory information coming from the body and directs it to the appropriate part of
the brain for processing.
1) Intro to Sensation and Perception
Sensation- Senses stimulated by outside source- e.g. hair cell- cilia moving in response to air pressure changes- generates action potentials- detected by
brain for sounds
Perception- Next level of processing brain starts to interpret and process sensory info- e.g. sound- processes image or how loud sound is (what we
perceive)
Cognition- How we use what we perceive to learn about the world and classify it, to understand what’s happening, e.g. cognition understands that a
seagull is making the sound
Why do we study it? E.g. we know 100 milliseconds is the physical limit for how fast we can respond to a sound (relevant to races and false starts). Olympic
shooting- firing at two clay discs, the brain has to process two events, figure out which one to aim at, and coordinate the body; athletes use colour
filters to boost perception and emphasise the contrast of the target against the background. Art- Jackson Pollock- structure similar to natural scenes makes it
more appealing to the senses. Developing computers’ senses by trying to replicate processes of the brain. Brain- how does it understand the world?
Fundamental concepts- flow of info (changes in the world = electromagnetic energy- photoreceptors detecting light- in the brain processed by the primary
visual cortex). So different types of changes are detected by different receptors and processed by different parts of the brain
All senses pass through the thalamus and proceed to different places in the brain. Info doesn’t only flow in one direction- feedback connections mean what’s
happening in the higher cortex can modulate what you sense and perceive at a lower level. Transduction- conversion of environmental energy into nerve signals
Gibson- direct perception- we understand the world only by what comes through the sensory receptors- no complex thought or categorisation- he used
depth as an example- we understand it without complex thinking.
How do we investigate the senses?
1. Staining: a stain applied to dead brain tissue binds to particular receptors, highlighting types of neurons and revealing the
structure of the brain. Structure reveals function.
2. Single-cell recordings (electrophysiology): a fine microelectrode is inserted close to a cell to measure the action potentials
coming from a single neuron or group of neurons. This requires a section of the skull to be removed, so it is only used on
humans as part of a medical operation, e.g. for epilepsy. In cat cortex, cells were found that are particularly responsive to
certain orientations. E.g. Quian Quiroga, Kraskov, Koch & Fried, 2009, who found that single neurons can
encode multimodal representations of people, representing individuals in multiple sensory modalities.
A single-cell recording study on patients undergoing epilepsy treatment showed that certain neurons
respond particularly strongly to certain people- e.g. Jennifer Aniston- cells could be found that are so selective
they respond to one specific person.
3. Functional Magnetic Resonance Imaging (fMRI scanning): shows activity in the brain when performing a
certain task by looking at blood flow within the brain. Uses visual/auditory tasks to show which parts of the
brain are being used at which time.
4. Lesion studies: studying people who have damage to parts of the brain and studying how they change. Can be
conducted on animals by knife cuts (which destroy axons) or by neurotoxins (which destroy only nerve cell
bodies), or on humans with existing damage to the brain due to strokes or trauma etc. Not ideal, as it involves
ethical issues in animals and strong individual differences in humans. It also involves studying what has
become a faulty brain system, which may not function in the same way as a brain without
lesions. Virtual lesions – Transcranial Magnetic Stimulation (TMS): uses pulses of magnetic energy
to disrupt activity in a small part of the brain for a short period of time. Biological motion (points of
light used to show a person moving, a very impoverished stimulus) is used to investigate how the
disruption of certain brain areas affects how we are able to respond to biological motion tasks.
5. Optical imaging / near infra-red spectroscopy (NIRS): high-powered cameras to look at structures of neurons, e.g. cones in the retina. NIRS
involves near infra-red light being shone towards the skull; it penetrates the skull, and the returned signal shows how much blood flow
is going to each area at any given time.
6. ERP (event-related potential), from EEG (electroencephalogram): a net of electrodes receiving information, measuring changes in
electrical currents in response to a stimulus.
7. Psychophysics: measuring psychological responses to physical stimuli, ‘quantifying the relationship between physical stimuli and sensation and
perception.’ E.g. changing the intensity of a stimulus until it can no longer be detected, or forced-choice procedures to detect which side the stimulus is on.
E.g. Blake, Turner, Smoski, Pozdol & Stone (2003) found disrupted biological motion perception in children with autism (poorer performance than control
children in biological motion tasks but not in global form tasks).
8. Illusions and introspection about illusions: thinking about why we experience things (why am I experiencing this?) and investigating it systematically to see
how it affects sensation.
9. Computational modelling: using computer programs to model theoretical brain structures, trying to get the same outcome from the computer
program as from humans in order to test hypotheses.
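The psychophysics idea in point 7 (varying intensity until it can no longer be detected) is often run as an adaptive staircase. A minimal sketch, with all numbers invented for illustration: the intensity is lowered after each detection and raised after each miss, so the staircase settles around the simulated observer's threshold.

```python
# Illustrative 1-up/1-down staircase against a simulated observer
# (all numbers are made up, not from any real experiment).
import random

random.seed(1)
TRUE_THRESHOLD = 0.30        # the simulated observer's detection threshold

def detects(intensity):
    """Noisy observer: detection is likely when intensity exceeds threshold."""
    return intensity + random.gauss(0.0, 0.05) > TRUE_THRESHOLD

intensity, step = 1.0, 0.05
reversals, last_direction = [], None
while len(reversals) < 8:             # stop after 8 direction changes
    direction = -1 if detects(intensity) else +1
    if last_direction is not None and direction != last_direction:
        reversals.append(intensity)   # the staircase just reversed
    intensity = max(0.0, intensity + direction * step)
    last_direction = direction

# Averaging the reversal points estimates the threshold
estimate = sum(reversals) / len(reversals)
print(round(estimate, 2))             # lands near TRUE_THRESHOLD
```

The average of the reversal points is the threshold estimate, mirroring how real psychophysical staircases home in on the intensity a participant can just detect.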
3) Vision from Retina-Thalamus (LGN)- Cortex
1) The Eye: Light and the Retina
Light entering the eye passes through a number of retinal cells, including the
ganglion, bipolar, cone and rod cells.
Convergence takes place in the retina because there are 126 million rods and cones
but only 1 million ganglion cells to relay data to the brain. There is higher convergence of
rods than cones: an average of 120 rods to one ganglion cell versus an average of 6 cones to
one ganglion cell. Cones in the fovea (where the most detailed perception happens in
the retina) have a one-to-one relation to ganglion cells.
Repetition suppression / adaptation = the process by which we see an ‘after image’
after being exposed to a particular image whilst keeping the eyes still. Photoreceptors
are either exposed to bright light or not exposed at all (as the image is black and
white), and upon switching to a plain white screen, the fatigue of these
photoreceptors results in an inverted image of the original being seen.
Last stage of retinal processing- ganglion cells receive input
from multiple photoreceptors via the bipolar and amacrine
cells, process it further, and send it to the brain through
their axons, which gather to form the optic nerve.
Types of retinal ganglion cells (neurons which connect to
photoreceptors):
Midget bipolar to midget ganglion cells –
- Part of the parvocellular pathway
- Connect to and integrate information over a few different
cones
- Deal with colour
- Have a small receptive field.
Diffuse bipolar to parasol ganglion cells – (spread out more,
like an umbrella)
- Part of the magnocellular pathway
- Connect to many different rods
- Deal with shapes and outlines
- Have a large receptive field
Ganglion cells have receptive fields: they increase their firing of action
potentials when shown a stimulus they prefer and can decrease it otherwise.
The centre responds to increases and the surround to decreases in illumination.
The visual system sees only what the retinal ganglion cells show it.
The ganglion cells, together with the bipolar, amacrine and horizontal
cells, act as an image filter.
Key study for ganglion-cell receptive fields (cat retina):
ganglion cells fire at a spontaneous baseline rate without
light. When light filled the whole receptive field, little
happened, but shining light on the centre of the receptive
field increased activity.
So, some ganglion cells have a centre-surround receptive
field- if the surround is illuminated, activity decreases (no
action potentials): on-centre, off-surround. You can also
have the reverse (off-centre, on-surround).
This is also called lateral inhibition:
- Light shone into a single receptor leads to a
rapid firing rate of its nerve fibre
- Adding light to neighbouring receptors leads
to a reduced firing rate of the initial nerve fibre
- Inhibitory signals are sent by neighbouring receptors,
affecting the rate of firing
- Lateral inhibition highlights changes in luminance
Three lightness-perception phenomena explained by lateral
inhibition:
- The Hermann grid – seeing spots at intersections
- Mach bands – seeing borders more sharply
- Simultaneous contrast – seeing areas as differing in
brightness due to adjacent areas
On-centre off-surround ganglion cells play a role in edge detection:
where there is an edge, there will be a big difference between centre and
surround. Taking all ganglion cells together, a strong response
occurs at a hard luminance gradient.
Edges can also be explained by inhibition between ganglion cells, but
this doesn’t account for cases where the edge is graded.
Simultaneous contrast- a box on a white background appears
darker because there is more inhibition from the surround; on a dark
background there is less surround inhibition.
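The Mach-band and edge-enhancement effects described above can be sketched in a few lines: a 1D row of receptors where each unit's response is its own input minus a fixed fraction of its neighbours' input, a crude centre-minus-surround. The inhibition strength of 0.25 and the luminance values are arbitrary.

```python
# Sketch of lateral inhibition along a 1D row of receptors: response =
# own input minus a fraction of the neighbours' input. Responses
# overshoot on the bright side of the edge and undershoot on the dark
# side, which is the Mach-band effect.
luminance = [10] * 10 + [20] * 10          # step edge: dark -> bright

def response(signal, i, inhibition=0.25):
    left = signal[i - 1] if i > 0 else signal[i]
    right = signal[i + 1] if i < len(signal) - 1 else signal[i]
    return signal[i] - inhibition * (left + right)

out = [response(luminance, i) for i in range(len(luminance))]

# Units just on the dark side of the edge are inhibited extra hard
# (dark Mach band); units just on the bright side respond extra
# strongly (bright band).
print(out[8], out[9], out[10], out[11])    # → 5.0 2.5 12.5 10.0
```

The dip at position 9 and the peak at position 10 straddle the edge, exaggerating the luminance change exactly as the notes describe for edge detection.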
2) Through the Thalamus- Lateral Geniculate Nucleus (in Thalamus) to brain
Signals from the retina travel through the optic nerve to the:
Lateral geniculate nucleus (LGN)
Primary visual receiving area in the occipital lobe (the striate cortex)
Then through two pathways to the temporal lobe and the parietal lobe
Finally arriving at the frontal lobe – executive control
Around 10% of all output of the retina ends up in the superior
colliculus (in the midbrain), a part of the brain involved in
eye movements. This area is also associated with blindsight – responding
to things without being consciously aware of seeing them
The optic chiasm is a cross over between the two optic nerves. Your
right visual field is processed in the left part of your brain, and vice
versa (note it is visual fields that are split, not the individual right/left
eyes). No actual processing occurs in the optic chiasm
The LGN is part of the thalamic nuclei; its major function is to
regulate (or sort) neural information from the retina to the visual cortex
(like a visual librarian). Signals are received from the retina, the cortex,
the brain stem and the thalamus, and are organised by eye, receptor
type, and type of environmental information.
Lateral geniculate nucleus (in thalamus)- its layers carry the different
pathways (magno-/parvocellular), plus a third, koniocellular pathway.
- Magnocellular system- interested in movement and
flicker- how/where
- Parvocellular system- colour and detail- what
- Koniocellular system- specific to blue-yellow perception.
The parvocellular system from the midget ganglion cells, and the
magnocellular system from the parasol ganglion cells, are kept in separate
pathways in the lateral geniculate nucleus. These pathways travel through
separate layers of the LGN so they arrive in separate layers of the primary
visual cortex.
3) Primary Visual Cortex
Primary Visual Cortex: higher-level processing at the back of the brain.
In V1 we have a retinotopic map- we can see which parts of the brain respond
to which part of the visual field- everything you can see can be mapped to a
specific point in V1- a point-by-point mapping representation.
Experiments (Hubel & Wiesel) found cells in the visual cortex that were
sensitive to a particular orientation – a stimulus of that orientation drives
activity in the cell. In cat V1, cells were found to be selective for
orientation; cats are very sensitive to horizontal lines.
Imaging the surface of V1 shows the parts of V1 that respond to
different orientations- it has pinwheel structures that cover every orientation
we see. So, ganglion cells feed into orientation-selective cells.
Critical periods: Blakemore & Cooper 1970- raised kittens in striped
tubes from birth (they only ever saw one particular orientation of line). After
5 months, recordings from cells in the visual cortex found no neurons that
responded to the orientations absent from the tube. Therefore, a critical
period exists, during which time all orientations of lines must be seen in
order to develop the necessary connections to process this visual
information. Neural plasticity: “use it or lose it”
Two key Streams of Processing: Once information is received at visual cortex it
progresses along visual processing streams depending on where the information is
going to
Dorsal stream Vision for action (where): from visual cortex, up the back of the
brain, towards the motor system (parietal lobe). Computes how to move
towards visual stimuli, how far apart you are from the object, etc.
Ventral stream Vision for perception (what)– from visual cortex, down the
underside of the brain, to the hippocampus (temporal lobe) and memory. Allows us to
store, encode and recognise objects that we see so that we can identify them.
Colour is part of the ventral pathway- illusions show that when an object moves, we
don’t perceive its colour changing. We are not good at processing colour and motion
at the same time.
4) Colour Vision & Perception
The human visible colour spectrum is continuous, but other animals see other parts of the electromagnetic
spectrum, e.g. birds have UV vision, snakes have infrared vision.
Long to short wavelengths: ROYGBIV (red, orange, yellow, green, blue, indigo, violet)
Human Trichromacy: three types of cones (roughly blue, green
and red), maximally sensitive at short, medium and
long wavelengths. Found to be an evolved trait related to
foraging for ripe fruit and berries, i.e. red/green vision was
developed in order to see fruits as distinct from leaves/trees
etc. (Regan et al, 2001).
Evolution is also linked to the amount of bare skin shown on
animals: Changizi et al (2006) found monochromatic
primates had more fur on show than dichromatic primates,
who in turn had more fur on show than trichromatic primates.
Humans, an extreme example with extensive bare skin, are
trichromatic.
Colour Vision Deficiencies: 8% of men and <1% of women
have colour deficiencies as a result of genetics, but they can also be acquired through ageing, drugs (smoking, malaria
tablets and recreational drugs etc) and hormones (women's blue/yellow colour perception changes through the
menstrual cycle). Genetically, deficiencies are carried down the
mother’s line: a maternal grandfather with a colour deficiency means a
50/50 chance of a boy having it.
Monochromats: only have one type of cone, so they cannot
discriminate colours.
Dichromats:
Protanopia: Lack red cones, long wavelength
Deuteranopia: Lack green cones, medium wavelength
Tritanopia: Lack blue cones, short wavelength
Anomalous Trichromats:
Deuteranomaly: green cones shifted towards red, difficulty
distinguishing between red and green.
Protanomaly: red cones shifted towards green, difficulty
distinguishing between red and green.
Mancuso et al 2009 – Curing Colour Deficiencies: dichromatic male
squirrel monkeys had the red opsin gene injected (via a virus) into some
of their cones and were then found to be able to see colours they
previously could not. Their brains were able to perceive and use these
signals despite the required circuitry not being in use in early life.
(Not “use it or lose it”?)
Jordan et al 2010 – Human Tetrachromacy? (Four Colours): some
women have four cone types as opposed to the usual three. Women
with this extra cone were put through psychophysical tests and
genetic analysis, but only one woman was found to be behaviourally
tetrachromatic. It was concluded that cortical processing of the extra
signal is required, not just the extra cone.
Cone opponency: output from the three cones is combined and
contrasted to give three ‘cone-opponent’ channels – red-green,
blue-yellow and black-white.
Colour-opponent Cells in the Lateral Geniculate Nucleus:
 Parvocellular = Colour (Red-Green): e.g. in R+/G- LGN
cells, red cones excited means the cell fires; green cones
excited means the cell is inhibited. The opposite holds in
G+/R- LGN cells.
 Koniocellular = Colour (Blue-Yellow)
 Magnocellular = Luminance (Black-White)
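The opponent-channel scheme above can be sketched with made-up cone responses. The particular weightings below (L minus M for red-green, S minus the L+M average for blue-yellow, L plus M for luminance) are a textbook-style simplification, not the exact physiological combination.

```python
# Sketch of cone opponency (simplified, illustrative weightings):
# three cone signals are combined into the three opponent channels.
def opponent_channels(L, M, S):
    red_green = L - M                # parvocellular-style channel
    blue_yellow = S - (L + M) / 2    # koniocellular-style channel
    luminance = L + M                # magnocellular-style channel
    return red_green, blue_yellow, luminance

# A reddish light excites L cones most, so the red-green channel goes
# positive and the blue-yellow channel goes negative (towards yellow).
rg, by, lum = opponent_channels(L=0.9, M=0.3, S=0.1)
print(rg > 0, by < 0)   # → True True
```

This mirrors the R+/G- cell described above: strong L-cone (red) input pushes the red-green channel one way, strong M-cone (green) input pushes it the other.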
Colour at the Cortex: Patches of cells are seen to be responsive to
colour at Primary Visual Cortex (V1). Other areas of the Visual Cortex
also process colour, e.g. V2, V4/V8. Signals are sent to Temporal
Cortex. (ventral processing stream “what” pathway)
Cowey & Heywood, 1997 – Cerebral Achromatopsia: damage to
small cortical regions can result in loss of colour perception. Humans
with lesions in extrastriate visual cortex (e.g. V4/V8) were tested;
their cones functioned, and activation was recorded at V1 in response
to colour, but colours were still not seen by the participants. It can affect
one visual field but not the other, e.g. left but not right. Again this shows the
importance of cortical processing as opposed to simply having the correct cones.
Top Down Effects- Hansen et al 2006 – Memory and Colour Perception:
Memory of a typical colour of an object influences our perception of colour. Colours of objects are remembered as more saturated
than they really are, e.g. bananas are remembered as more yellow than they are. If asked to make the image of a banana grey (to
remove colour), participants made it blue-grey instead (overcompensation for the yellow). No error if just testing a patch of colour
as there is no memory for a typical colour.
Preferring Some Colours to Others: Occurs due to:
Biological Components Theory (Hurlbert & Ling, 2007):
Ecological Valence Theory (Palmer & Schloss, 2010): Colour
preference is due to associations between colours and certain
objects, e.g. bright blue which is associated with skies and water
is preferred to dark yellow/green, which is associated with mould
etc.
5) Attention
1) Attention: Consciously or unconsciously focusing on internal or external
stimuli.
Attention is important because when attention fails the outcome is generally negative, e.g. when driving, and it can be
directed, e.g. through advertisements. However, we receive too much sensory input and we can’t look at, listen to, feel
and think about everything we see at once. The perceptual system cannot process everything.
This suggests that attention is associated with some kind of limitation, i.e. Attention as a limited capacity resource– or
processing bottleneck- what is limited? Where in processing is this bottleneck?
Different Types of Attention:
Selective attention: Focusing attention on certain information whilst ignoring other information.
Sustained attention: Maintaining a focused attention or vigilance, e.g. a security guard monitoring a
surveillance camera.
Divided attention: Giving attention to multiple items of information, e.g. multi-tasking. Multi-tasking is
hard because none of the tasks are being focused on and this limits performance.
Attention to different sensory modalities: Attention uses all of our senses, but visual attention has received
the most examination as it is easiest to quantify.
Types of Stimuli and Processing
Endogenous – cue gets attention voluntarily,
top down processing
Exogenous – cue involuntarily gets attention,
bottom up processing
Bottom up processing – data driven:
perception starts with the stimulus and is
carried out in one direction from retina to
visual cortex, with each stage in the pathway
carrying out progressively more complex
analysis
Top down processing- the use of contextual
information in pattern recognition.
Understanding difficult handwriting is easier
when we use contextual knowledge from
whole sentences
Covert attention: visual attention is generally
studied through eye movements, but we do
not always look at what we pay attention to.
Covert attention is paying attention to
something other than what is being looked at
directly and this can be done voluntarily or
involuntarily. Covert spatial attention is
generally studied using a number of reaction
time experiments where we assume that
attention takes time to be shifted from one
place to another
Spatial cuing tasks (where cues are used to
direct attention) have participants respond
to target stimuli following a cue. Responses
are typically faster when the cue correctly
indicates where the stimulus will appear
(valid cue) and slower when attention has
been directed to an area where the stimulus
does not appear (invalid cue), suggesting
that spatial attention had been directed to
that area. (This works with an endogenous
cue, voluntarily followed, or an exogenous
cue, which just automatically captures
attention.)
Visual Search experiments (where
participants locate a target stimulus amongst a
number of other stimuli) find that if a target
‘pops out’ (e.g. a green O amongst red Xs),
increasing the number of non-targets does not
affect reaction times. However, if the target is
a conjunction (e.g. a green O amongst red and
green Xs and red Os), then reaction time
increases with the number of non-targets
shown. This suggests that serial search is
required.
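The flat-versus-linear pattern described above can be captured in a toy model. All timing numbers are hypothetical: a pop-out target is found in parallel regardless of display size, while a conjunction target requires inspecting items one at a time, so predicted reaction time grows with the number of items.

```python
# Toy model of the visual-search result (numbers are made up):
# pop-out = parallel search, flat reaction time with set size;
# conjunction = serial search, RT grows with the number of items.
BASE_RT_MS = 400      # assumed base response time
MS_PER_ITEM = 50      # assumed cost of inspecting one item

def predicted_rt(set_size, pop_out):
    if pop_out:
        return BASE_RT_MS                      # parallel: flat RT
    # serial search: on average half the items are inspected
    # before the target is found
    return BASE_RT_MS + MS_PER_ITEM * set_size / 2

for n in (4, 8, 16):
    print(n, predicted_rt(n, pop_out=True), predicted_rt(n, pop_out=False))
# → 4 400 500.0
#   8 400 600.0
#   16 400 800.0
```

The flat column is the pop-out condition; the growing column is the conjunction condition, whose slope (here 25 ms per item on target-present trials) is the signature of serial search.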
Distractor effects: we assume attention has
been distracted by a stimulus if it slows us
down when it is irrelevant. E.g. in a Stroop
task, naming the ink colour a word is written
in is slowed when the word spells a different
colour name, suggesting we are unable to
ignore the meaning of a presented word.
Response competition flanker tasks (where
attention is given to a central target and
flanker non-targets are ignored) show that
responses are slower when the flankers are
incongruent than when congruent or neutral.
Attentional capture task: we assume
attention has been ‘captured’ by a stimulus if
it slows us down when it is irrelevant, e.g.
when a non-target is presented in a different
colour in a Singleton Attentional Capture
Task, reaction times increase because
attention is captured by this more salient fake
target. When it is the target that is presented
in the different colour, reaction times decrease
because attention is captured by the target.
This is taken as evidence of “attentional
capture” by salient stimuli.
Self-report measures are often used to test the effects of attention on awareness (e.g. change blindness) as well as
subjective phenomena such as mind-wandering which cannot easily be quantified.
People who report more mind-wandering also show more reaction time interference when distractors are involved as well
as more error on sustained attention tasks.
Change Blindness: being unable to see large changes in our environment (such as the experimenter
changing during an interaction) which would be obvious to anyone expecting them.
Effects of attention on neural processing:
Neural response is boosted for covertly attended stimuli (Wojciulik et al 1998; Vuilleumier et al 2001)
Two regions are known to respond selectively to specific stimulus categories (faces and places):
Fusiform Face Area (FFA, faces): covert attention to faces increased the FFA response
Parahippocampal Place Area (PPA, places): covert attention to houses increased the PPA response
2) Attention Early Vs Late Selection and Load theory
Early and late selection
Debates
When having a conversation in a busy location, we are able to focus on what our friend is saying despite lots of other, equally loud input from other
people’s conversations. At some point we are filtering out the other conversations – when is this?
Early selection:
We only register the physical characteristics of the other speakers’ voices, like their gender or mood, and filter out their meaning so that we
process the meaning only of what we are attending to. The filter occurs before any high-level processing happens.
Late selection:
We take in the meaning of other people’s conversations as well as the physical characteristics of their voices, and we select which information we
want to attend to at a later stage.
Both theories agree the information arrives; the argument is over what attention ignores and where in processing this happens.
Early Selection
The cocktail party effect: When in a busy room, we are able to ignore other
people’s conversations and attend to the one we are involved in.
Colin Cherry 1953 – Dichotic listening task- Participants were played different
speech in different ears, and were asked to attend to just one. They were able to
repeat what was being said in this ear, and could report physical characteristics of
the voice they were ignoring (eg. Pitch changes, gender of speaker) but didn’t notice
anything else (content, language, reversed speech). This supports the early selection
view – we only process the meaning of the attended message.
Broadbent’s theory- The Filter model (1958)
Attention is necessary because the central mechanisms cannot cope with the amount of
sensory stimulation present at any given time; it “protects the brain’s limited-capacity
system from information overload”. Attention must therefore be selective.
The human central nervous system has a limited-capacity information transmission
channel for communication (a bottleneck).
Filtering occurs before the incoming information is analysed to a semantic level (e.g.
surface features, but not meaning, are analysed).
Problems with this model
Moray- subjects were able to hear their own name in the unattended stream in dichotic
listening experiments, so something in the unattended message is being processed.
Treisman- people who were bilingual were often influenced by messages in the unattended
stream if the message was in their second language.
Gray and Wedderburn- when logical messages switched between the ears, participants
repeated the message that made sense rather than the nonsensical message that was played
into the ear they were meant to attend to.
Modifying Early Selection Theory – Attenuation Model
Treisman- Attenuation model
Unattended messages are not lost completely; they are attenuated instead. The bottleneck is
later- everything is processed to the point of meaning, but we get less signal from
unattended information.
Words need to meet a certain threshold of signal strength to be detected – this explains how we
can hear our own name in someone else’s conversation, as the threshold for certain words,
like our own name, is lowered.
Late Selection Models: Deutsch and Deutsch (1963), Kahneman (1973), Duncan (1980)
Both attended and ignored inputs are processed to the stage of semantic
(meaning) analysis
Selection:
- takes place at a higher stage of processing
- is based on analysing which input is more important / demands a response
Can explain:
Mackay (1973) Dichotic listening tasks when participants hear things in event in the
unattended stream when they are more relevant.
Erikens & Eriksen (1974) Response competition interference effects (flanker tasks) –
when a distractor is incongruent with the stimulus being searched for, our reaction times are
slower so we have been distracted by having to process both stimuli to understand their
meaning.
Negative priming (1988) when responses to stimuli that we have previously ignored are
slowed down. E.g. asked to ignore green and categorise red stimuli into animals or objects.
Responses to words were slower when they came after a semantically related picture that they
had to ignore. Suggests that ignored stimuli are processed and semantically recognised, but the
brain inhibited this kind of stimulus, the after-effect of which carried on to the next trial and
slowed participants’ responses.
Reconciling Early and Late Theories:
Lavie- Load theory
Both early and late selection are possible.
The stage of selection depends on the availability of our perceptual capacity
(our capacity to take in visual information), which in turn, depends on the
visual demands (load) of the task stimuli.
High perceptual load → capacity is exhausted → early selection: irrelevant
distractors are filtered or attenuated at an early perceptual stage.
Low perceptual load → some capacity spare → late selection can happen:
irrelevant distractors are processed. We process everything we can until our
capacity limits are reached.
Evidence:
Behavioural measures of distraction support this theory. Response
competition effects exist under low perceptual load in flanker tasks
(searching for a certain letter with an incongruent distractor): responses
are slower. When the task has a higher perceptual load (searching for a
letter among similar letters) these effects are reduced or completely
eliminated – the identity of the distractor was not processed.
Inattentional blindness occurs when we don’t notice something that we
aren’t looking for. When an unexpected stimulus is presented,
participants completing an easier task (low load) are far more likely to
notice it than those completing a more difficult task (high load)
Neuroimaging evidence – Schwartz (2005): load was manipulated by
changing the difficulty of the perceptual task. Even in the primary visual
cortex, responses to irrelevant information (the background) are reduced
in tasks with higher loads. Similarly, there is less activation in the
amygdala in response to fearful faces when the task is harder.
Neuroimaging evidence – Bishop (2007): emotional processing in the
amygdala. Participants were given a perceptual task with high- and low-load
conditions while a face was presented in the background. Low load showed a
higher amygdala response, whereas high load did not show a strong fear
response – the fear response is blocked at an early stage.
Individual differences : Those with a higher perceptual capacity need
greater loads in order to avoid distraction. People have differences in
capacity based on age, autism and video game experience. Green &
Bavelier- Video game players remain distracted even under a high load,
where non-video game players would not be distracted.
3) Determinants of Attention: What Determines what we pay attention to?
Voluntary Attention- Top Down Theories
Attention is directed by our goal within a setting
 Top-down
 Goal-driven
 Endogenous
 Attentional control
 Executive attention
 Voluntary attention
Involuntary Attention- Bottom up Theories
Attention is drawn to certain stimuli involuntarily –
salience of stimuli
 Bottom up
 Stimulus-driven
 Exogenous
 Involuntary attention
 Reflexive attention
Top Down Bottom Up interaction Biased Competition Theory (Desimone & Duncan, 1995):
We have 2 competing influences on attention (top-down and bottom-up). Eventually one
of the two will win, and this is what gets selected for attention.
 If we are able to exercise a very high degree of attentional control, our top-down
goal is stronger, and we'll be able to override the bottom-up stimulus.
 If the signal from the bottom-up stimulus is very strong, we will accidentally
select it even though we didn't want to pay attention to it.
Bottom-up, Stimulus-Driven – Characteristics for Attentional Capture:
 High salience (eye-catching stimuli)
 Movement or abrupt onset of a stimulus
 Relevance, i.e. a relatable stimulus
Salient Colour Singletons (Theeuwes, 1992):
Salient colour singletons are effective at capturing attention.

Singleton attentional capture task: participants were first asked to find a
shape singleton (a circle amongst a series of squares), either with or
without a colour distractor that was irrelevant to the task – can the colour
capture your attention even though it's not your goal to look for colour?
Participants took longer to respond when the irrelevant distractor was
present: the colour captured attention even though it was not the goal,
meaning salient colour singletons are powerful enough to momentarily
draw our attention away from our goal. The study shows there is
stimulus-driven attentional capture.
Attention location with highest local feature contrast or salience




Stimulus-driven Selection: Theeuwes argued that bottom-up attention comes first
(initial sweep across visual field entirely bottom up) as it is the most physical,
goals are not considered, and only salience of stimuli is considered. In this first
stage, attention is drawn simply to the location with the highest local feature
contrast or salience.
A second stage uses top-down mechanisms where it is considered if we want to
pay attention to this stimulus and if not it is inhibited and attention shifts to the
next more salient item.
Attentional Window: Theeuwes also argues this bottom-up sweep only takes place
within an attentional window (so the calculation of local salience is restricted by top-down goals to a specific area), and spatial
cues can vary the size of this window. If we know where we need to direct our attention, we can narrow the attentional
window to make directing attention more effective.
Evidence against this: in a distractor task, the distractor still distracted participants even though it was outside the attentional window.
Top-Down Contingent Capture: stimuli ONLY capture our attention because they
align in some way with our goals – a yellow sign captures our attention
because we're also looking for a yellow taxi. Involuntary attention is
influenced by voluntary goals.
Contingent Capture (Folk & Remington, 1992): attention is contingent on task
goals – it is not stimulus-driven; attention can only be captured by
stimuli relevant in some way to our goals.
Participants were shown a matrix where targets could appear in different
locations. They were either given valid or invalid cues as to these locations, and
then had to indicate where the target appeared. Here, the target is an onset
(something that just appears) Participants’ attentional setting (goal) was either
instructed to be for a unique onset or a unique colour.
Results indicated that:
Invalid cues led to slower reaction times, suggesting attentional capture.
This was contingent on the relationship to the task: colour cues captured attention
when the target was defined by colour, and onset cues captured attention only
when the target was defined by onset. Therefore, attentional capture is contingent
on our top-down goals for the task.
But Theeuwes’ colour singleton was irrelevant to the shape task, but still captured attention.
Bacon & Egeth (1994): Theeuwes' study was re-examined from a contingent-capture perspective,
arguing that the goal was to find a singleton, and since the shapes were all the same, looking
for the red colour helped people complete the task of finding a singleton more effectively. So,
the singleton colour IS relevant to top-down goals. Their studies changed all the shapes
(preventing a singleton detection strategy) and found that the red colour wasn't as distracting.
Theeuwes (2004): Argued that Bacon & Egeth’s task reduced local salience of the singleton.
Found that when the local salience of the singleton was maintained (by making many of the
shapes different), the coloured singleton did again interfere with reaction times.
Abrupt Onsets: Something which suddenly appears
Only abrupt onsets can produce stimulus-driven capture
Yantis et al. – asked participants to look at figures and identify whether a letter was
present. They were presented with either a colour singleton which was not predictive of
the target location, or an onset (the letter appearing in a location that wasn’t previously
occupied).
Found that participants were quicker to respond when the onset letter was the target, but when the colour singleton was the target,
they were no faster than baseline. Therefore, onsets produced attentional capture but colour singletons didn’t.
Supported the idea that onsets are what can cause attentional capture.
Why might it be important to detect an abrupt onset?
In evolutionary terms, a predator abruptly appearing would need to be attended to, so we must have developed this due to the
involuntary attentional capture being adaptive.
Franconeri & Simons- Even when the targets were an offset, participants’ attention was captured by looming stimuli (moving
towards them) but not receding stimuli (moving away from them). Supports the idea that attentional capture has evolved to avoid
predators.
Arguments against this stimulus driven capture- Display-wide settings Gibson and Kelsey- Attention tasks in the
laboratory usually begin with some kind of change to display – task stimuli onset, offset or change of colour – participants
know that when the change happens, they must respond. This might induce general ‘display-wide’ settings for dynamic
changes – their goal is simply to respond to things changing. This would include onsets. Therefore, the outcomes of
research might not be because onsets are more attention-capturing, but because participants are simply seeking a change.
Other reasons why stimuli capture attention:
1) Attentional capture because of meaning – e.g. a snake.
2) Attentional capture because of personal relevance to us (Purkis – spider
phobics showed attentional capture by spiders; Doctor Who fans showed
attentional capture by Doctor Who images).
3) Attentional capture by areas of expertise (Weiner Thies showed experts in
American football were faster to notice changes in football-related images; Rio,
Frigeel – expert musicians were more distracted by musical instruments).
4) Attention capture: by reward- Anderson (2013) people noticed colour they
were rewarded for more than target colour. Participants were given a search
task where the target could be one of two colours – one being associated
with higher reward than the other. In a subsequent test phase, reward
singletons were presented as a distractor – stimuli that had previously been
used to indicate financial reward. These reward singletons captured
attention in the same way that colour singletons have been shown to in
other research, despite having no physical properties that would make them
stand out any more than the other stimuli in the visual field. Therefore,
value could be a determinant of attention.
4) Attention and Cognitive control
Key Terms:
 Executive function – processes which make decisions and resolve
conflict in accordance with current goals
 Executive control
 Cognitive control – control of aspects of mental processing
 Working memory – the cognitive system which allows
information to stay online for processing
 Inhibition – stopping processes
 Conflict resolution – prioritising one process over another
 Proactive/reactive control – proactive: set a strategy in advance;
reactive: respond to events as they happen
Perceptual Load Theory (perceiving): high perceptual demand → you have to be selective → tightens the bottleneck → early selection
 Increased perceptual load reduces distraction.
 Incongruent distractors increase reaction time when perceptual capacity is not yet filled.
Cognitive Load Theory (processing): high mental demand → widens the bottleneck → less selective → late selection
 Working memory overload increases the distractor effect.
Lavie (2004), effects of cognitive load: a response competition flanker task –
looking at a target with a distractor – but participants were also asked to memorise
digits, either a high load (a lot of digits) or a low load (few digits), so the
high-load condition taxes executive resources.
High cognitive load showed more tendency for distraction, high perceptual load
showed less distraction, and the two do not interact.
Further Study- used singleton attentional capture task using working memory load
manipulation and again found higher cognitive load increases distraction from
colour singleton.
Perceptual load might increase attention but also increases inattentional blindness,
so you are less aware of irrelevant stimuli.
Can cognitive load reduce inattentional blindness – i.e. will a harder task make you
more aware of irrelevant stimuli?
Shown by Carmel (2012): people were told to classify names and ignore a face that
appeared, with a surprise memory test for the faces. People performed better at
remembering faces under high working memory load (low cognitive load: 50%
accuracy; high cognitive load: 80% accuracy).
Different types of load have opposite effects on attention
Perceptual vs Cognitive Load Theories
1. Load Theory: two stages where selection can happen – early, then passing
on to late.
2. Perceptual load – perceiving a lot of different information or needing to
make a fine discrimination between two things. EARLY SELECTION relies
on the availability of perceptual capacity.
3. Cognitive control – what determines late selection. LATE SELECTION
involves executive processes.
Individual Difference and Distractibility
Are people with better cognitive control less distracted?
 Research with task OSPAN used to measure individual differences in
working memory capacity- simultaneously perform maths whilst
memorising words- and then how many words memorised tested.
 Individuals with lower working memory capacity show more Stroop
interference and are more distracted by unattended information; people with
higher working memory capacity were better at ignoring the unattended message.
 Links between low executive control and ADHD and anxiety.
Neural Mechanisms of Attentional Control
 A study which looked at neural responses to stimuli found increased activation for different stimuli (houses, faces) → which regions are involved in controlling attention?
 A study using spatial cueing found different areas for the effect of attention and for activation at the time of the cue – the frontoparietal network.
 fMRI of attention (presence vs absence of a singleton) activates these areas – people who showed stronger frontal activation showed less interference (distraction). Frontal processes help us to stop getting distracted.
 In a response competition flanker task, incongruent vs congruent distractors were associated with reaction time interference and with frontal recruitment: Dorsolateral Prefrontal Cortex (DLPFC) and Anterior Cingulate Cortex (ACC). People high in anxiety recruited these distraction-controlling areas less – these areas also control mind wandering.
 Frontal areas are more involved with top-down processing.
 The same frontal regions were also involved in sustained attention.
 Mind wandering relates positively to distraction from the external task.
 High working memory is associated with reduced mind wandering, but with increased mind wandering under low perceptual load.
4) Speech Perception
Sound – a local pressure disturbance in a continuous medium containing frequencies
in the range 20–20,000 Hz (Titze).
Caused by an increase in air pressure at a certain point, with a domino effect; if
repeated, this creates sound waves.
Speech chain: Speaker → Motor nerves → Vocal muscles → Sound waves → Ear →
Sensory nerves → Listener
(Linguistic level → Physiological level → Acoustic level)
How do We produce Speech?
• The lungs push energy/air up the trachea (windpipe), which vibrates the
vocal cords in the larynx (voice box); sounds from the vocal cords are
then filtered/shaped by the supralaryngeal vocal tract: pharynx, oral cavity, nasal cavity.
• Highly controlled, synchronised movement is needed to produce sound.
Consonants: are produced when there is a constriction somewhere along the vocal tract- Classified
into 3 main features
Manner (Stop, Fricative, Nasal, Approximant)
Voicing
Place of Articulation (labial, Alveolar, Velar)
Spectrogram: Tool used to analyse acoustic structure of speech.
• A sound spectrogram (or sonogram) is a visual representation of an acoustic signal. To oversimplify things a fair amount, a Fast Fourier Transform is applied to an
electronically recorded sound. This analysis essentially separates the frequencies and amplitudes of its component simplex waves. The result can then be displayed
visually, with degrees of amplitude (represented light-to-dark, as in white = no energy, black = lots of energy) at various frequencies (usually on the vertical axis) by time
(horizontal).
• The result is a wide-band spectrogram in which individual pitch periods appear as vertical lines (or striations), with formant structure. Generally, wide-band
spectrograms are used in spectrogram reading because they give us more information about what's going on in the vocal tract, particularly formant structure.
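The FFT-based analysis described above can be sketched with a minimal short-time Fourier transform in NumPy (frame length, hop size and the 440 Hz test tone are arbitrary illustrative choices):

```python
import numpy as np

def spectrogram(signal, sample_rate, frame_len=1024, hop=256):
    """Magnitude spectrogram: each column is the FFT magnitude of one
    Hann-windowed frame, separating frequencies and amplitudes over time."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mags = np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (freq_bins, time)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    return freqs, mags

# Synthesize one second of a 440 Hz tone and locate the energy peak.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
freqs, mags = spectrogram(tone, sr)
peak_hz = freqs[np.argmax(mags.mean(axis=1))]
```

Plotting `mags` with amplitude mapped light-to-dark, frequency on the vertical axis and time on the horizontal gives exactly the sonogram layout the notes describe.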
How do we produce Speech: Source Filter Theory
• We often talk about speech in terms of source-filter theory. Put simply, we can view the vocal tract like a musical instrument. There's a part that actually makes sound (e.g. the string, the reed, or the vocal folds), and a part that 'shapes' the sound (e.g. the body of the violin, the horn of the clarinet, or the supralaryngeal articulators). In speech, the source of sound is provided primarily by the vibration of the vocal folds. From a mathematical standpoint, vocal fold vibration is complex, consisting of both a fundamental frequency and harmonics. Because the harmonics always occur as integral multiples of the fundamental (×1, ×2, ×3, etc. – a phenomenon mathematically proven by Fourier, hence "Fourier's Theorem" and the "Fourier Transform"), it turns out that the sensation of pitch of voice is correlated with both the fundamental frequency and the distance between harmonics.
• The energy provided by the source is then filtered or shaped by the body of the instrument – the supralaryngeal vocal tract (pharynx, oral cavity (lips, tongue, teeth), nasal cavity). The resonant peaks of this filter are called formants.
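The source-filter idea can be worked through numerically (every number here is invented for illustration: a 120 Hz fundamental and a toy resonance curve standing in for one formant):

```python
f0 = 120.0  # fundamental frequency of the source, in Hz (illustrative)
harmonics = [f0 * n for n in range(1, 6)]  # integral multiples: x1 .. x5

def formant_gain(freq_hz, centre=500.0, bandwidth=200.0):
    # Toy resonance curve standing in for one formant of the vocal-tract
    # filter: harmonics near the centre frequency pass with the most energy.
    return 1.0 / (1.0 + ((freq_hz - centre) / bandwidth) ** 2)

# The filter shapes the source spectrum: the harmonic nearest the formant
# centre comes out loudest, even though the source treats all equally.
gains = [formant_gain(f) for f in harmonics]
loudest = harmonics[gains.index(max(gains))]
```

Note the spacing between adjacent harmonics always equals the fundamental itself, which is why pitch correlates with both f0 and the distance between harmonics.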
How do we Perceive Phonemes? (dev)
We perceive speech sounds categorically – we always discriminate sounds as one or another, never as ambiguous.
/b/ and /p/ exist on an acoustic continuum: they are produced in exactly the same way, differing only in voice onset time (VOT) lag.
When a "b" changes gradually to a "p", adults perceive only an abrupt switch to /p/ (no ambiguity).
All sounds in the continuum with a VOT of less than 25 ms are perceived as /b/, and those with a greater VOT as /p/.
The perception of a continuum as categories is a very useful ability, as it allows one to pay attention to sound differences that are meaningful in one's native
language and to ignore those that are not (e.g. a /b/ with a VOT of 10 ms or 20 ms is just /b/).
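The categorical boundary described above amounts to a simple step function over VOT (the 25 ms boundary comes from the notes; the sample VOT values are arbitrary):

```python
def perceive(vot_ms, boundary_ms=25.0):
    # Categorical perception on the /b/-/p/ continuum: any VOT below the
    # boundary is heard as /b/, anything above as /p/. Within-category
    # differences (e.g. 10 ms vs 20 ms) are simply not distinguished.
    return "b" if vot_ms < boundary_ms else "p"

percepts = [perceive(v) for v in (5, 10, 20, 30, 40)]
```

A 5 ms physical difference straddling the boundary (20 ms vs 30 ms) flips the percept, while the same difference within a category does not: that is the categorical signature.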
The lack of Invariance Problem
• Categorical perception demonstrates the brain's ability to categorise sounds into discrete groups of phonemes.
• But this is a difficult thing to do in the real world: the relationship between the acoustic signal and phonemes is messy (the lack
of invariance problem).
• Phonemes are cued by more than one acoustic feature – e.g. as many as 16 different cues can distinguish "ba" from "pa".
• Integrating information from different cues might enable listeners to detect consistent patterns not detectable by considering
one cue alone.
4)Influence of Context on Speech
Lexical Context effect “Ganong effect” 1980

Speech is very context dependent.
Ganong effect – did you hear /g/ or /k/? Even though the sound is ambiguous between /g/ and /k/, what follows biases you:
followed by "-iss", /k/ is heard (kiss); followed by "-ift", /g/ is heard (gift). Context influences perception.
Along the VOT continuum, a short VOT gives /g/ and a long VOT gives /k/.
Visual Context effect McGurk Effect 1976

What we hear changes depending on what we see: if we see a picture of someone saying "ga ga" but hear "ba ba", we put
them together and get "da da". This occurs because speech perception involves both visual and auditory information.
"Da" sits between "ba" and "ga" – the best guess of what we are hearing and seeing.
Pop-out effect: we hear unintelligible sounds, but after hearing the real sound we can make sense of them.
Brain Basis of Speech Perception:
 Classic model from 19th-century neurologists.
 Superior temporal gyrus for speech perception (Wernicke's area).
 Inferior frontal gyrus for speech production (Broca's area).
 Left hemisphere dominant.
 In another interpretation, we have two streams/pathways of speech processing that are
task dependent.
 First stream (ventral): starts with low-level sound processing; as you go through, processing
becomes less about sounds and more about language – higher semantic representation
(engaged when you do a task focusing on comprehension).
 Both hemispheres do this ventral stream processing.
 Second stream (dorsal): starts in the same place as the ventral stream (superior temporal),
then processing becomes more about articulation (engaged when you do a task on perception
of speech sounds – phoneme discrimination).
 Ventral: mapping speech onto lexical representations. Dorsal: mapping speech sounds onto
articulatory representations – you internally articulate to perceive differences in sounds
(phonemes).
 Explains why aphasics can do one but not the other.
 Evidence for the dual-stream account: fMRI, looking at where in the brain there are changes
in blood flow (more processing happening).
 Get listeners to do tasks that activate both streams: a dorsal stream task (repetition of nonsense
words, compared with words) and a ventral stream task (listening to words, compared with
nonsense words) – we see different areas of activation.
Models of Speech Perception
 A good model makes testable predictions about how we think the mind/brain works.
 Disconfirming and confirming predictions enables models to be refined and drives further research.
 Cohort Model: an early "verbal" model of speech perception and of how it deals with time – time is an essential part of speech perception.
 We have templates of words – ideal sounds of these words in the brain. If speech matches this ideal it is "recognised": an activated entry in
the lexicon.
 If we just hear "c", it activates all matching entries in the lexicon (all words beginning with "c" compete for our attention); then, as the word
continues, we cut out words that don't fit until we finally reach the one word that matches what we heard.
 The point where the sound matches a specific lexical entry is the uniqueness point.
 The uniqueness point sometimes arrives earlier than the end of the whole word – candidates are eliminated before the word finishes.
 Recognition is therefore as time-efficient as possible.
 Evidence for the Cohort model – shadowing task: listeners hear continuous speech and repeat it back as quickly as they can; we measure
the time between onset and speech production, i.e. how quickly participants recognise words. Average response latency: 250 milliseconds.
 This confirmed listeners were recognising words before they had heard the ends of the words.
A better way of being sure what a model or theory predicts is to implement it as a computer program (Computational Model)
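In that spirit, the cohort mechanism itself reduces to prefix filtering over a lexicon (the four-word lexicon below is a hypothetical toy, with letters standing in for phonemes):

```python
def cohort(prefix, lexicon):
    # All lexical entries still consistent with the speech heard so far.
    return [w for w in lexicon if w.startswith(prefix)]

def uniqueness_point(word, lexicon):
    # Number of segments heard before only one candidate remains: the
    # cohort starts with every entry sharing the first sound and shrinks
    # as the word unfolds.
    for n in range(1, len(word) + 1):
        if len(cohort(word[:n], lexicon)) == 1:
            return n
    return len(word)

lexicon = ["cat", "captain", "candle", "dog"]
```

Here "captain" is uniquely identified after three segments ("cap"), well before its final sound, matching the shadowing-task finding that words are recognised before they end.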
1) TRACE model of speech perception
2) MERGE model of speech perception (opposite to TRACE; see below)

Replicates processing in the brain – has a processing hierarchy for speech: acoustic
features at the bottom, moving up to phonemes, finally put together as words.
Speech presented to this model activates acoustic features in the same way for a word to
be made out.
Connectionist model – zooming in to each level of processing shows different neuron-like
units coding for different phonemes, connected with each other. E.g. on hearing "cat",
the acoustic-feature and phoneme representations will be activated to recognise "cat".
Within-layer inhibitory connections – inhibit other lexical entries from being activated when
others are more likely.
Bidirectional excitatory connections – we have bottom-up connections, but we also have
top-down connections.
Evidence for TRACE – eye tracking: participants see pictures of a beetle, beaker, speaker
and carriage, and listen to a spoken instruction ("take the beaker and put it under the
diamond") while a computer tracks their eyes. We can see the relationship between
competitor objects – the eyes look at both the beaker and the beetle while hearing the
instruction, because the words share their initial phonemes and vowel, so they overlap.
Eye tracking can allow for very detailed mapping of lexical processing.
Data can be simulated in TRACE (on a computer) and compared with human data. The data
match, so the same processes are happening (the graphs look the same) – confirming this
lexical competition.
How does the TRACE model also explain context effects (the Ganong effect)?
Hearing "-ift" activates the lexical entry for "gift", whose top-down connection biases the
lower-level phoneme stage towards /g/; likewise, when "-iss" is presented, "kiss" is
activated and biases towards /k/.
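That top-down bias can be sketched as a tiny interactive-activation model in the spirit of TRACE (the activation values and feedback weight are invented for illustration, not the published parameters):

```python
# An ambiguous g/k sound gives equal bottom-up evidence; the lexical layer
# feeds activation back down to whichever phoneme completes a real word
# with the following context.
LEXICON = {"gift": "g", "kiss": "k"}  # word -> initial phoneme it supports

def settle(bottom_up, context, feedback=0.3):
    phoneme = dict(bottom_up)
    for word, initial in LEXICON.items():
        if word.endswith(context):        # lexical unit consistent with "-ift"/"-iss"
            phoneme[initial] += feedback  # top-down excitatory connection
    # Within-layer competition: the most active phoneme unit wins.
    return max(phoneme, key=phoneme.get)

ambiguous = {"g": 0.5, "k": 0.5}
heard_ift = settle(ambiguous, "ift")  # lexical feedback from "gift"
heard_iss = settle(ambiguous, "iss")  # lexical feedback from "kiss"
```

The same acoustic input settles on /g/ in one context and /k/ in the other, which is exactly the Ganong pattern TRACE attributes to top-down connections.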
Against the TRACE model – the Merge model (a completely different explanation for this
effect that involves no top-down processing). There is no top-down processing in speech:
context doesn't affect what we hear at all; everything is bottom-up, and our brain just
makes the smartest decisions.
When you see a Ganong effect, it is a change in the decision about which phoneme is
present, rather than in perception itself.
Computational model
Testing top-down vs bottom-up accounts is difficult with behavioural experiments,
since both accounts make the same prediction about listeners' behaviour, i.e.
phoneme identification should be influenced by lexical context:
in TRACE because lexical activation biases activation at the lower phoneme level;
in MERGE because lexical activation biases listeners' decisions about phonemes.
Evidence from neuroscience: the Ganong effect has been localised to the superior
temporal gyrus (STG), a region implicated in low-level auditory processing. Word-level
context appears to change lower-level auditory processing – evidence for
top-down processing IF we are right about what the STG does.
Motor Theory of Speech Perception
 1) Speech perception is very different from the perception of sounds that are continuous: it is the result of a specialised speech module that operates separately
from the mechanisms involved in perceiving non-speech sounds, and is uniquely human. Perception of speech is categorical (whereas perception of non-speech
qualities like pitch and loudness is continuous).
 2) The objects of speech perception are intended articulatory events rather than acoustic events (there is a complex mapping between acoustics and phonemes –
the lack of invariance problem: it is hard to find a clear, systematic, unambiguous mapping between acoustic cues and phonemes), so perhaps the brain focuses on
how sounds are being produced.
 Evidence: fMRI (2004) – listeners asked to listen to meaningless speech showed activation of the motor and premotor areas used in producing meaningless
monosyllables; perhaps they were perceiving speech as gestures.
 Transcranial Magnetic Stimulation (TMS) evidence – TMS produces a temporary lesion. TMS of premotor areas: participants did a phoneme discrimination task
and a colour discrimination task with and without TMS. Without TMS performance was the same; with TMS, speech discrimination was affected but colour
discrimination was not (evidence this area is involved in and needed for speech perception).
 Evidence against: categorical perception can also be demonstrated for non-speech sounds, e.g. musical intervals, so it is not speech-specific.
 With training, chinchillas show the same phoneme boundary for a da–ta continuum as humans.
5) Language
1) Language and Cognition
Communication is vital for everything: we must have necessary
biological hardware. Language must be complex enough to convey
any possible message. Social setting also important. Language is
largely what sets us apart from other animals. - culture and technology
depend on it.
Psycholinguistics is the study of the psychological processes
involved in language.
Psycholinguists study understanding, producing, and remembering
language, and hence are concerned with listening, reading, speaking,
writing, and memory for language. They are also interested in how we
acquire language, and the way in which it interacts with other
psychological systems.
Psychologists believe that we store representations of words in
a mental dictionary. We call this mental dictionary the lexicon.
Language has changed enormously over time, and many
languages are related to each other.
Cognitive Aspects of Language
 Language reflects and represents patterns and thoughts
 Help understand how behaviour works
 How language moulds behaviour
 It helps in studying how we learn new things, how we
understand what people are saying, and how concepts and
ideas are stored, accessed and expressed.
1) The Classical Model of concepts (now largely disregarded): we
categorise things by the features they do and don't have.
• Things have a list of features they do and don't have – not very
clear cut; you need a list of defining features to separate them.
• All-or-none assumption: instances either are or are not
members of a category.
Concepts: how do we categorise/create boundaries for things.
Why do we need them?
 Fundamental building blocks of thought
 We have concepts to: Enable us to generalise from past experience and
observations – e.g. know which dogs are friendly.
 Concepts can be organised in conceptual hierarchies → economy of
representation: grouping them decreases the amount of information that must be
perceived and remembered. We can be efficient in how we retrieve information.
 Allow us to predict new outcomes by applying model to information
 How do we categorise the world and create these boundaries for a concept,
e.g. 'dog'? Does each category have a set of defining features?
Problems with the classical View
1. Concepts don’t have defining features, more like a ‘family resemblance’. Particularly
hard to find defining features of a superordinate category, e.g. ‘furniture’.
Rosch & Mervis (1975) - Asked people to come up with 20 features of members of a
concept, such as ‘fruit’. No identifiable common feature was found for all members.
2. Concepts aren't arbitrary – Rosch's work led her to the view that concepts have
'internal structure', with good, poor and medium exemplars, some of which are more
typical of a concept. The typicality effect: people are faster to agree that an
exemplar is a member of a concept if the exemplar is more typical (a good
exemplar) – e.g. they can say that a sparrow is a bird faster than they can say that
an ostrich is a bird.
2) The Probabilistic View (Prototype Theory) – assumptions:
 There is no list of defining features, only characteristic ones. More
representative exemplars have more of those characteristic features, less
representative ones have fewer; in the middle we have a prototypical exemplar
which has all those features.
 Explains the lack of clear boundaries in some categories: we have many
representations of a "dog", less prototypical and more prototypical ones that
share features, and others that don't – a cloud of things that are more or less
like the prototype.
 Rosch and Mervis (1975) showed that good exemplars (prototypes) have a
large number of features in common; poor exemplars have few, if any.
 Explains typicality effects: typical instances are classified more rapidly
because they are more similar to the prototype.
 Explains the lack of clear boundaries between concepts.
Problems with the Probabilistic View
1. Ad hoc categories: e.g. tail lights, headlights, post boxes, blood – not all red
things, yet we can form the category. A problem for prototype theory: we don't
have a prototypical red object.
2. Conceptual Combination – combining different concepts, e.g. ‘pet’ and ‘fish’ to
make the prototypical ‘pet fish’ – won’t necessarily have the features of the
prototypes of each sub-category.
3. Problems with Feature Similarity
What counts as an attribute? Murphy & Medin argue that people form categories on
the basis of interests, needs, goals and theories, not on the basis of feature similarity.
Concepts are not just the sum of constituent attributes. People seem to know about the relations between attributes, rather than just the
attributes themselves.
3) Theory Theory: Assumptions
 Knowledge-based: what do you know about the world? Based on people’s goals, assumptions and understanding.
 Murphy stressed the importance of a person’s intuitive model of the social and physical world, something like a set of schemas, not a checklist of unrelated features.
 The way you think about the world and how it works is what helps us form concepts.
 E.g. birds can fly → they have feathers to help them do so; birds build nests → they can do this because they can fly. These features are related to one another, rather than being a tick-list of the concept category.
Explains:
 We use pre-known information to help us understand: e.g. we know that olive oil is made of olives, but baby oil is not made of babies. We therefore combine the words in different ways as a result of what we know about the world.
 Murphy (1990): noun-noun combinations (e.g. baby oil) are typically more difficult to understand than adjective-noun combinations (e.g. red apple). We use this extra time to process and combine the information to make sense of it.
 Helps with conceptual combination (e.g. olive oil, corn oil vs baby oil). See Gagne & Spalding (2004) and others.
Alternative Theories

One view is that features are a ‘fall back’ for when theories fail.

Medin (1990): Psychological Essentialism: things have essences or an underlying nature, e.g. we know that sex is genetically determined, but genders have associated essences (height, voice, hair etc.). These essences form the prototype for a category. The essences can help us to judge the sex of a stranger (although they are not arbitrary).

Essentialist heuristic = things that look alike share underlying principles.
Concepts: concepts are the foundational building blocks of thought. Language and thought are not exactly the same thing; concepts and relationships are distinct from language. However, linguistic symbols are closely tied to conceptual information.
Concepts = words?
Language and Thought: Arbitrariness
 Are language and thought separate?
 de Saussure, a founder of semiotics (the study of signs and meaning), said the connection between the signifier (the symbol or image) and the signified (the meaning conveyed) is arbitrary.
Understanding Words and Sentences: Building Words
 Is there a difference between thought and language? Are words concepts (and vice versa)? Can we think without language? Does language itself reflect concepts? Are the sounds and structures of language inherently meaningful?
 We can think without language: prelingual babies show evidence of conceptual categorisation, as they can distinguish between phonemes.
 Pathology: speech and language impairment doesn’t destroy thought or reasoning.
 Language and thought are closely linked, but not the same.
Is Language Actually Arbitrary?
 Many experimental tests use nonsense words (e.g. Nielsen & Rendall 2013: biases in both vowels and consonants).
 Sound symbolism: the sound of a word corresponds in a meaningful way to its meaning (slime, slip, slide): a non-arbitrary connection between concept and word.
 Blasi et al (2016): patterns in sound-meaning connections across languages, e.g. words for ‘small’ often contain i, and words for ‘full’ often contain p or b.
 Other evidence: Imai et al (2008): children learn sound-symbolic verbs more easily.
 Klink (2000): sound symbolism in brand names (which brand of ketchup seems thicker: Nidax or Nodax?).
How are words built and stored? Storage vs Computation
The Wug Test (1950s): at the time of the study, behaviourism was very influential: children learn language by imitating/memorising what adults say. Words like “wug” and “zib” are not real English words, so children can’t learn to say “wugs” by imitation, because they have never heard an adult say “wugs”! Instead, children must learn the system and apply it to new words. The Wug Test showed that children do in fact successfully apply implicit morphological rules to novel (never-before-seen) words.
“Dog” is a simplex word, or free morpheme: a fundamental unit of meaning.
“Dogs” is a complex word: “dog” (a free morpheme, which can stand on its own) + “-s” (a bound morpheme, which can’t stand on its own).
Decomposability: some words can be broken down into meaningful pieces (called morphemes).
Systematicity: children acquire this morphological system (along with morphology and phonology more generally as they acquire language) and can apply it to words they have never seen before.
Mental lexicon: there must be some underlying system that helps build words; children and adults can do this effortlessly.
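The systematicity the Wug Test demonstrates can be sketched as a rule that operates on word form rather than on stored wholes, so it applies equally to novel words. This is a minimal toy rule, not Berko’s actual materials or the full English plural system (it ignores irregulars and y-endings, for instance).

```python
# Toy English-like plural rule: because it keys on the noun's ending,
# it generalises to words the speaker has never heard ("wug", "zib").

def pluralise(noun: str) -> str:
    """Apply a simple plural allomorphy rule to any noun, real or novel."""
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"   # sibilant ending takes "-es"
    return noun + "s"        # default: free morpheme + bound morpheme "-s"

assert pluralise("wug") == "wugs"   # novel word: the rule still applies
assert pluralise("dog") == "dogs"   # "dog" (free) + "-s" (bound)
```

The point is that nothing about “wug” needs to be stored in advance: the bound morpheme is attached by rule, which is exactly what imitation-based learning cannot explain.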
Are all complex words constructed “online” (as we are speaking) from pieces, or are some stored as wholes? When would a word be stored as a whole, and when in pieces? What is the mental lexicon’s goal? If we think of the mental lexicon as a computer (what is stored in our brain to allow us to use language) with limited storage and memory, how does it allocate its resources? Recognition happens very quickly, so it must be done efficiently.
If its goal is efficiency, and we can only allocate our mental resources maximally to either storage or computation, then language comprehension/word recognition must be quick. If we maximise efficiency in one, we don’t have much left for the other.
Full Listing Model (Butterworth): words are looked up in the lexicon and stored as wholes. This maximises computational efficiency (we don’t have to break words up into pieces) but minimises storage efficiency.
Full Parsing/Decomposition Model (Taft): all words are decomposed into small morphological elements before they are looked up. This maximises storage efficiency (we just have to store rules) but minimises computational efficiency (more work is needed).
What about frequency? If a complex word is common enough, can it become a single stored unit? E.g. hearing the word “homework” (a compound word) over and over: does this word eventually become lexicalised (stored on its own as a word)?
Pinker: both can happen. The kind of word might determine whether it is a full-listing type or a parsing type: words that are idiosyncratic (irregular, so they need to be memorised, e.g. find–found) must be stored and looked up; transparent words (easily and directly built, e.g. walk–walked) can be computed easily.
More recent research favours dual- or multiple-route models: both routes work simultaneously when hearing or reading a word, and whichever one arrives at the correct interpretation first “wins”. Whether full listing or parsing wins depends on the frequency of the word and whether it has been lexicalised; the brain makes use of different paths.
Andrews, Miller and Rayner: eye tracking with compound words; participants’ gaze duration was influenced by all three frequencies (the whole compound and both constituents), evidence of both decomposition and lexicalisation = dual route.
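The dual-route race can be sketched as follows. Everything here is invented for illustration (the timings, the word list, the single-threshold rule) and is not fitted to any experiment: a whole-word lookup route that only succeeds for lexicalised compounds races a decomposition route with a fixed parsing cost, and the faster route “wins”.

```python
# Toy race between two recognition routes, in the spirit of dual-route
# models: frequent, lexicalised compounds win via whole-word lookup;
# everything else falls through to morphological decomposition.

LEXICALISED = {"homework": 120, "snowball": 150}  # hypothetical lookup times (ms)
PARSE_COST = 200                                  # hypothetical fixed parsing time (ms)

def recognise(word: str) -> str:
    """Return which route finishes first for this word."""
    lookup_time = LEXICALISED.get(word, float("inf"))  # no stored entry: lookup can't win
    return "whole-word" if lookup_time < PARSE_COST else "decomposition"

assert recognise("homework") == "whole-word"     # frequent, stored as a unit
assert recognise("lampwork") == "decomposition"  # novel compound: must be parsed
```

Note that this design deliberately spends both storage (the whole-word entries) and computation (the parser) at once, which is exactly why, as the notes say, multiple-route models do not maximise resource efficiency.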
•
Efficiency doesn’t seem to be prioritised: multiple-route and race models do not try to maximise resource efficiency; they use both storage space and computational resources at the same time. So perhaps the mental lexicon doesn’t prioritise efficiency: Libben (2014) asks what the primary goal of the mental lexicon is, i.e. what it should prioritise.
The Mental Lexicon Summary: How are words stored in the brain? The evidence is conflicting. Different techniques: eye-tracking, MEG, reaction times… Different types of constructions: affixing, inflection, compounding… Different influences: frequency, family size, meaning… Most likely: a combination of multiple strategies to maximise efficiency. But: efficiency of what?
How do we create meaning? Parsimony (the simplicity of one word to represent an idea) and Efficiency
• Language is designed for communication: how much detail/specificity is needed to get the message across?
• A maximally expressive language might have a different word for each unique event, thing, person, action etc. in the world. Advantage: minimal room for misunderstanding. Disadvantage: overloaded with unnecessary detail, communication becomes very difficult.
• So language needs to be efficient.
• Principle of parsimony (Occam’s Razor): avoid needlessly multiplying entities.
• But too much efficiency leads to excessive ambiguity: e.g. if all smallish furry animals (cats and dogs) were called “fluff”, that would be very efficient (only one word to remember, and they all share common attributes), but highly ambiguous.
• Ambiguity is unavoidable: “I saw an elephant in my pyjamas”. Who was wearing the pyjamas?
• Languages need a balance between expressiveness and efficiency: linguistic economy.
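The expressiveness/efficiency trade-off can be made concrete with a toy pair of lexicons (both invented for illustration): a maximally expressive one with one word per concept, and an “efficient” one that collapses several concepts onto a single word, which is cheap to store but loses distinctions.

```python
# Toy lexicons mapping concepts to words. Ambiguity is measured as the
# number of conceptual distinctions lost when words are shared.

concepts = ["cat", "dog", "rabbit", "elephant"]

expressive = {c: c for c in concepts}                          # one word per concept
efficient = {c: "fluff" if c != "elephant" else "big"          # "fluff" covers three
             for c in concepts}

def ambiguity(lexicon: dict) -> int:
    """Concepts minus distinct words = how many distinctions the hearer loses."""
    return len(lexicon) - len(set(lexicon.values()))

assert ambiguity(expressive) == 0   # maximally expressive: nothing is ambiguous
assert ambiguity(efficient) == 2    # cat/dog/rabbit are indistinguishable
```

A real language sits between these extremes, which is the linguistic-economy balance the notes describe.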
Reading
• Theory Theory and Conceptual Combination: there are a lot of interesting studies about how people decide what combinations of words mean, such as corn oil (oil MADE OF corn) vs baby oil (oil FOR babies). The researchers Gagne and Spalding have done a lot of work in this area.
• Bouba-Kiki Effect: Cuskley, Simner & Kirby (2015) showed that people have biases in both vowels and consonants for different shapes.
• Compound words in the mental lexicon / The Mental Lexicon: Compounding & Parsimony, Gary Libben.
• Ambiguity Resolution: Ferreira, Christianson, & Hollingworth (2001): Anna and the baby.
• Aside: A Perfectly Logical (Unambiguous) Language: Lojban is a constructed language (conlang) made to be perfectly logical and completely unambiguous.
Is the link we create between compound words, e.g. tea and pot, also necessary for words we hear frequently? (We have a finite set of relations we can use to understand the relation between the words in a compound.) The goal of this experiment was to determine whether relational information is used during the processing of familiar lexicalised compounds such as “snowball”. The effect of repetition priming (presenting words and having participants respond to the one they thought of) showed that these familiar compound words are decomposed, whether the task is determining sense or making a lexical decision: so compound words are decomposed.
The effect where non-word names are assigned to abstract shapes (e.g. rounded shapes are preferentially called “bouba” over “kiki”). Most accounts of the effect point to the acoustic properties of sound and shape as the mechanism underlying the effect. Letter curvature has also been found to have an effect.
Although compound words often seem to be words that themselves contain
words, this paper argues that this is not the case for the vast majority of
lexicalized compounds. Rather, it is claimed that as a result of acts of
lexical processing, the constituents of compound words develop into new
lexical representations. These representations are bound to specific
morphological roles and positions (e.g., head, modifier) within a compound
word. The development of these positionally bound compound constituents
creates a rich network of lexical knowledge that facilitates compound
processing and also creates some of the well-documented patterns in the
psycholinguistic and neurolinguistic study of compounding
Theories of sentence comprehension have addressed both initial parsing
processes and mechanisms responsible for reanalysis. Three experiments
are summarized that were designed to investigate the reanalysis and
interpretation of relatively difficult garden-path sentences (e.g., While
Anna dressed the baby spit up on the bed). After reading such sentences,
participants correctly believed that the baby spit up on the bed; however,
they often confidently, yet incorrectly, believed that Anna dressed the baby.
These results demonstrate that garden-path reanalysis is not an all-or-nothing process and that thematic roles initially assigned for the
subordinate clause verb are not consistently revised. The implications of
the partial reanalysis phenomenon for Fodor and Inoue’s (1998) model of
reanalysis and sentence processing are discussed. In addition, we discuss
the possibility that language processing often creates “good enough”
structures rather than ideal structures.
• Lojban is a carefully constructed spoken language. It has been built for over 50 years by dozens of workers and hundreds of supporters.
• Lojban's grammar is based on simple rules, and its linguistic features are inspired by predicate logic.
• Lojban allows the expression of nuances in emotion using words called attitudinals, which are like spoken emoticons: ue marks that you're surprised; ba'u marks that you're exaggerating.
• You can be as vague or detailed as you like when speaking Lojban. For example, specifying tense (past, present or future) or number (singular or plural) is optional when they're clear from context.
• Lojban is machine parsable, so the syntactic structure and validity of a sentence is unambiguous, and can be analyzed using computer tools.
• There is a live community of speakers expanding the Lojban vocabulary day by day.
2) Bilingualism
Multilingualism
 People can be monolingual or bilingual; more than half of the people in the world are bilingual.
 Language is a complicated thing to study, as there is an extensive number of variables: everyone has a unique version of language and is exposed to multiple influences. We have to be able to operationalise what counts as a bilingual person.
Bilingual Language Processing – How?
 Joint activation: both languages are being processed even when only one is being used.
Bilingual Testing Materials
 Bidirectional influence between languages: languages still influence one another when only one is being used (a monolingual context).
 Cognate: a word that means the same thing in two different languages, with forms that are usually related to one another.
 Interlingual homographs: words with the same form that do not mean the same thing in different languages (e.g. “pie” means “foot” in Spanish).
Separate or Connected?
Visual
 Dijkstra, Grainger, & Van Heuven (1999): tested Dutch and English words in lexical decision tasks (is this a real word or not?). Compared to controls, bilinguals were faster at lexical decisions for cognates (“piano”) and slower for interlingual homographs when the pronunciation was different (“pie”). No effect was found for monolinguals. Two words in two languages conflict with each other on retrieval: lexical access is not language specific.
Auditory
 Lagrou, Hartsuiker, & Duyck (2011): interlingual homophones in Dutch and English, i.e. words that sound the same but have different meanings. Interference was found for both languages: the influence of the second language extends to the auditory domain, and the second language (L2) can influence the native language (L1).
Cross Modal
 Morford et al. (2011): ASL/English bilinguals were asked to judge whether two written English words were semantically related; some of these words had ASL signs with similar forms. Judgements were faster when the words were related and the signs were similar, and slower when the words were unrelated and the signs were similar.
Joint Activation: Overwhelming Evidence
 Both languages are active even when only one is being used.
 Bidirectional influence between languages: this is even true for highly proficient bilinguals, and in strongly monolingual contexts (Kroll, Dussias, Bogulski, & Valdes Kroff, 2012).
 So even across different modalities, languages are being accessed simultaneously (joint activation): bilinguals are always accessing both languages even when only hearing one, and this happens bidirectionally.
 SO EFFICIENCY IS NOT A PRIORITY IN LANGUAGE: it does not work like a computer, because then we would not expect both languages to be processed. COMMUNICATION IS A PRIORITY: the system leaves as many pathways as possible available for any type of communication, maximising the opportunity for understanding.
Code Switching
 Joint activation means both languages are always active. How do bilingual speakers use one and not the other?
 “Code-switching”: switching between languages, especially mid-conversation or mid-sentence.
Psycholinguistics of Code Switching
 Fricke & Kootstra (2016): Cross-language structural priming in spontaneous
bilingual conversations- Choice of codeswitching influenced by a variety of
factors, like which language was being spoken, whether other speakers had
codeswitched, etc.
 Kleinman & Gollan (2016): Distinction between top-down and bottom-up
processes- Bilinguals take advantage of priming, but also control their language
switching depending on task demands
 Complex picture of skill and proficiency drawing on both languages
simultaneously
Bilingualism Advantages
1) Cognitive Reconfiguration: “As the bilingual mind is reconfigured to accommodate two language systems that have different relations to each other, to speaker intentions, to communicative contexts, and to pragmatic goals, the impact of that reconfiguration is felt throughout cognitive networks.” Kroll & Bialystok (2013)
2) Better at Executive Function/Inhibition Control: bilinguals must manage two competing, overlapping language systems.
Bilingualism Disadvantages
A note on bilingualism: research does not happen in a void. There are cultural, social, political, historical and personal/emotional aspects to bilingualism, and a long history of suppressing and discouraging it: Native American languages in the US, e.g. Hawai’i; Welsh and other minority languages in the UK. Language control acts as cultural, societal and personal control, and can inform educational and immigration policy: “There can be no doubt that the child reared in a bilingual environment is handicapped in his language growth. One can debate the issue as to whether speech facility in two languages is worth the consequent retardation in the common language of the realm” (George Thompson, 1952).
1) Vocabulary Learning: fewer words
 Under some circumstances disadvantageous
Theory of Mind:
3) Memory
4) General Cognition
Bilingual children better at: Finding
embedded figures (Bialystok, 1992),
Identifying correct grammar in odd
sentences  “Apples grow on
noses” (Bialystok, 1988)

Bilingual adults better at: Stroop
task (Bialystok et al., 2008), Simon
task (Bialystok et al., 2004)

Interaction with age: older bilinguals
showed a greater benefit

Counterexample: Kirk, Scott-Brown,
& Kempe (2013) with older
bilinguals
Philipp, Gade, & Koch (2007); Philipp & Koch (2009): trilingual English/German/French participants had to name a number, e.g. 2 → two/deux/zwei. Naming was slowed in ABA (English/German/English) sequences compared to CBA (French/German/English) sequences: ABA requires global inhibition of English to access the German response, and negative priming slows responses in English.

Bilingual children must realize that
different people speak different
languages, Some people understand
them in one language, some in
another, Other people must think
differently than they do, So: They
may be better at theory of mind

Goetz (2003): Bilingual children
better on perspective-taking and
false-belief tasks

Kovacs (2009): Three tasks,
Standard ToM task – false-belief
task, Modified ToM task designed to
mimic language-switch scenario,
Control task, Bilingual children
performed better on both ToM tasks
but not on control task

Bilinguals have to learn and
remember multiple words for the
same/similar concepts/ideas So:
They may be better at remembering

Evidence Mixed: Bilingual
advantage on Simon tasks for more
complex paradigm requiring
working memory (Bialystok et al
2004)

No advantage on pointing task,
found an advantage on Corsi Block
task for younger but not older
bilinguals. (Bialystok, Craik & Luk
2008)

No advantage on Corsi Block task
(Feng 2008)

May exist but more studies needed
Bilinguals show improved performance on
particular cognitive tasks. Due to
increased demand for managing two
language systems.
Slower at lexical retrieval
Verbal Fluency
Bialystok, Luk, Peets, & Yang (2010): measured receptive vocabulary in monolingual and bilingual children. Monolinguals scored higher than bilinguals (indicating a larger vocabulary).
However, this depends on the type of word. Words from school (astronaut, rectangle): no difference in scores. Words from home (squash, pitcher): lower scores for bilingual children.
It may be that they do know these words, but in a different language (L2).
There is some suggestion that differences in vocabulary size may persist into adulthood (see e.g. Bialystok, Craik, & Luk, 2008).
Slower at lexical retrieval (e.g. picture naming) Frequently replicated
(e.g. Roberts, Garcia, Desrochers, & Hernandez, 2002; Gollan,
Montoya, Fennema-Notestine & Morris, 2005; Kaushanskaya &
Marian, 2007)
Bilinguals also experience more tip-of-the-tongue states (Gollan & Acenas, 2004), suggested to be due to interference from the other language.
Semantic fluency (name category members): bilinguals produce fewer words (e.g. Bialystok et al., 2008). Due to smaller vocabulary size, or language competition?
English-speaking students in a Spanish-language environment for one year performed worse on this task than untraveled monolinguals (Linck, Kroll, & Sunderman, 2009).
Verbal Fluency 2: phonological fluency (initial letter) → effortful production, which requires monitoring and controlling attention. Bilinguals should (in theory) be better at this task.
When vocabulary is matched, the bilingual disadvantage disappears for the semantic task and an advantage emerges for the phonological task (Luo, Luk, & Bialystok, 2010).
Reading Bilingualism
Separate or
Connected
Lagrou, Hartsuiker, & Duyck (2011):
Dutch and English interlingual
homophones
Morford, Wilkinson, Villwock,
Piñar, & Kroll (2011): Sign
Language
Code Switching
Van Assche, Duyck, & Gollan
(2013): Inhibition in code-switching
Fricke & Kootstra (2016):
Understanding bilingual language
through code-switching
Bilingual
Advantages
Overview: Garbin et al. (2010);
EF/Inhibition: Bialystok et al. 2008)
Counterexample: Kirk, Scott-Brown, & Kempe (2013)
In two experiments Dutch–English bilinguals were tested with English words varying in their degree of
orthographic, phonological, and semantic overlap with Dutch words. Thus, an English word target could
be spelled the same as a Dutch word and/or could be a near-homophone of a Dutch word. Whether such
form similarity was accompanied with semantic identity (translation equivalence) was also varied. In a
progressive demasking task and a visual lexical decision task very similar results were obtained. Both
tasks showed facilitatory effects of cross-linguistic orthographic and semantic similarity on response
latencies to target words, but inhibitory effects of phonological overlap. A third control experiment
involving English lexical decision with monolinguals indicated that these results were not due to specific
characteristics of the stimulus material. The results are interpreted within an interactive activation model
for monolingual and bilingual word recognition (the Bilingual Interactive Activation model) expanded
with a phonological and a semantic component.
Deaf bilinguals for whom American Sign Language (ASL) is the first language and English is the second
language judged the semantic relatedness of word pairs in English. Critically, a subset of both the
semantically related and unrelated word pairs were selected such that the translations of the two English
words also had related forms in ASL. Word pairs that were semantically related were judged more quickly
when the form of the ASL translation was also similar whereas word pairs that were semantically
unrelated were judged more slowly when the form of the ASL translation was similar. A control group of
hearing bilinguals without any knowledge of ASL produced an entirely different pattern of results. Taken
together, these results constitute the first demonstration that deaf readers activate the ASL translations of
written words under conditions in which the translation is neither present perceptually nor required to
perform the task.
The current study investigated the scope of bilingual language control, differentiating between whole-language control involving control of an entire lexicon specific to 1 language and lexical-level control
involving only a restricted set of recently activated lexical representations. To this end, we tested 60
Dutch-English (Experiment 1) and 64 Chinese-English bilinguals (Experiment 2) on a verbal fluency task
in which speakers produced members of letter (or phoneme for Chinese) categories first in 1 language and
then members of either (a) the same categories or (b) different categories in their other language. Chinese-English bilinguals also named pictures in both languages. Both bilingual groups showed reduced dominant
language fluency after producing exemplars from the same categories in the nondominant language,
whereas nondominant language production was not influenced by prior production of words from the
same categories in the other language. Chinese-English, but not Dutch-English, bilinguals exhibited
similar testing order effects for different letter/phoneme categories. In addition, Chinese-English
bilinguals who exhibited significant testing order effects in the repeated categories condition of the
fluency task exhibited no such effects when naming repeated pictures after a language switch. These
results imply multiple levels of inhibitory control in bilingual language production. Testing order effects
in the verbal fluency task pinpoint a lexical locus of bilingual control, and the finding of interference
effects for some bilinguals even when different categories are tested across languages further implies a
whole-language control process, although the ability to exert such global inhibition may only develop for
some types of bilinguals.
Structural priming has played an important role in research on both monolingual and bilingual language
production. However, studies of bilingual priming have mainly used priming as an experimental tool,
focusing on cross-language priming between single-language sentences, which is a relatively infrequent
form of communication in real life. We investigated priming in spontaneous bilingual dialogue, focusing
on a hallmark of bilingual language use: codeswitching. Based on quantitative analyses of a large corpus
of English–Spanish language use (the Bangor Miami Corpus; Deuchar, Davies, Herring, Parafita Couto, &
Carter, 2014), we found that key discoveries from the structural priming literature also apply to bilinguals’
codeswitching behavior, in terms of both the tendency to codeswitch and the grammatical frame of
codeswitched utterances. Our results provide novel insights into the different levels and modes of speech
at which priming mechanisms are at work, and they illuminate the differences and commonalities between
monolingual and bilingual language production.
Using two languages on an everyday basis appears to have a positive effect on general-purpose executive
control in bilinguals. However, the neural correlates of this effect remain poorly understood. To
investigate the brain bases of the bilingual advantage in executive control, we tested 21 Spanish
monolinguals and 19 Spanish-Catalan early bilinguals in a non-verbal task-switching paradigm. As
expected based on previous experiments on non-verbal task switching, we found activation in the right
inferior frontal cortex and the anterior cingulate of monolingual participants. While bilingual participants
showed a reduced switching cost, they activated the left inferior frontal cortex and the left striatum, a
pattern of activation consistent with networks thought to underlie language control. Overall, these results
support the hypothesis that bilinguals' early training in switching back and forth between their languages
leads to the recruitment of brain regions involved in language control when performing non-linguistic
cognitive tasks.
Previous work has shown that bilingualism is associated with more effective controlled processing in
children; the assumption is that the constant management of 2 competing languages enhances executive
functions (E. Bialystok, 2001). The present research attempted to determine whether this bilingual
advantage persists for adults and whether bilingualism attenuates the negative effects of aging on
cognitive control in older adults. Three studies are reported that compared the performance of monolingual and bilingual middle-aged and older adults on the Simon task. Bilingualism was associated with
smaller Simon effect costs for both age groups; bilingual participants also responded more rapidly to
conditions that placed greater demands on working memory. In all cases the bilingual advantage was
greater for older participants. It appears, therefore, that controlled processing is carried out more
effectively by bilinguals and that bilingualism helps to offset age-related losses in certain executive
processes.
Bilinguals rely on cognitive control mechanisms like selective activation and inhibition of lexical entries
to prevent intrusions from the non-target language. We present cross-linguistic evidence that these
mechanisms also operate in bidialectals. Thirty-two native German speakers who sometimes use the
Öcher Platt dialect, and thirty-two native English speakers who sometimes use the Dundonian Scots
dialect completed a dialect-switching task. Naming latencies were higher for switch than for non-switch
trials, and lower for cognate compared to non-cognate nouns. Switch costs were symmetrical, regardless
of whether participants actively used the dialect or not. In contrast, sixteen monodialectal English
speakers, who performed the dialect-switching task after being trained on the Dundonian words, showed
asymmetrical switch costs with longer latencies when switching back into Standard English. These results
are reminiscent of findings for balanced vs. unbalanced bilinguals, and suggest that monolingual dialect
speakers can recruit control mechanisms in similar ways as bilinguals
Disadvantage: Philipp & Koch,
2009
When people switch between languages, inhibition of currently irrelevant languages is assumed to occur.
The authors examined inhibition of irrelevant languages with a cued language-switching paradigm. A cue
indicated in which of 3 languages (German, English, or French) a visual stimulus was to be named. In 2
experiments, the authors found that naming latencies were increased in n-2 language repetitions (e.g.,
German/English/German) compared with in n-2 language nonrepetitions (e.g., French/English/German).
This difference (n-2 repetition costs) indicates persisting inhibition of abandoned languages. It is
important to note that n-2 language-repetition costs also occurred in conditions in which the language but
not the cue (Experiment 1) or the stimulus/response set (Experiment 2) repeated from trial n-2 to trial n.
These data demonstrate that inhibition is not restricted to a specific cue or stimulus/response set. Rather,
the data suggest more global inhibitory processes that affect the mental representation of competing
languages.
Theory of Mind: Goetz (2003);
This research examines whether an individual's linguistic knowledge, either as a speaker of a particular
language or as a bilingual, influences theory of mind development. Three- and four-year-old English
monolinguals, Mandarin Chinese monolinguals, and Mandarin-English bilinguals were given appearance-reality, level 2 perspective-taking, and false-belief tasks. All children were tested twice, a week apart; the
bilinguals were tested in each of their languages. The 4-year-olds in each group performed significantly
bilinguals were tested in each of their languages. The 4-year-olds in each group performed significantly
better than the corresponding 3-year-olds. Both monolingual groups performed similarly on the tasks, and
the bilinguals performed significantly better than the monolingual groups, although when the two testing
times were examined separately, they had only a near-significant tendency to perform better at the second
testing time. Possible explanations for this evidence of a bilingual advantage are greater inhibitory control,
greater metalinguistic understanding, and a greater sensitivity to sociolinguistic interactions with
interlocutors
Two studies are reported in which monolingual and bilingual children (Study 1) and adults (Study 2)
completed a memory task involving proactive interference. In both cases, the bilinguals attained lower
scores on a vocabulary test than monolinguals but performed the same on the proactive interference task.
For the children, bilinguals made fewer intrusions from previous lists even though they recalled the same
number of words. For the adults, bilinguals recalled more words than monolinguals when the scores were
corrected for differences in vocabulary. In addition, there was a strong effect of vocabulary in which
higher vocabulary participants recalled more words irrespective of language group. These results point to
the important role of vocabulary in verbal performance and memory. They also suggest that bilinguals
may compensate for weaker language proficiency with their greater executive control to achieve the same
or better levels of performance as monolinguals.
Memory: Feng (Bialystok 2008)
Bilingual Disadvantages
Vocab size: Bialystok, Luk, Peets, & Yang (2010)
Studies often report that bilingual participants possess a smaller vocabulary in the language of testing than
monolinguals, especially in research with children. However, each study is based on a small sample so it
is difficult to determine whether the vocabulary difference is due to sampling error. We report the results
of an analysis of 1,738 children between 3 and 10 years old and demonstrate a consistent difference in
receptive vocabulary between the two groups. Two preliminary analyses suggest that this difference does
not change with different language pairs and is largely confined to words relevant to a home context rather
than a school context.
Lexical retrieval: Roberts, Garcia,
Desrochers, & Hernandez, 2002;
Compared the effects of fluent uni- or bilingualism on Boston Naming Test (E. Kaplan et al, 1983, BNT)
scores, and the order of difficulty of test items. 42 unilingual English individuals (aged 20-52 yrs), 32
Spanish/English bilingualists (aged 29-54 yrs), and 49 French/English bilingualists (aged 25-55 yrs)
completed the BNT. Results show that mean scores for both sets of bilingual Ss were similar; both groups
of Ss scored far lower than did unilingual English Ss. Item difficulty showed some similarities but also
important differences across groups. It is concluded that the English language norms cannot be used for
bilingual speakers, even proficient ones. Cultural factors appear less important than bilingualism.
(PsycINFO Database Record (c) 2016 APA, all rights reserved)
Tip of the tongue: Gollan &
Acenas, 2004
The authors induced tip-of-the-tongue states (TOTs) for English words in monolinguals and bilinguals
using picture stimuli with cognate (e.g., vampire, which is vampiro in Spanish) and noncognate (e.g.,
funnel, which is embudo in Spanish) names. Bilinguals had more TOTs than did monolinguals unless the
target pictures had translatable cognate names, and bilinguals had fewer TOTs for noncognates they were
later able to translate. TOT rates for the same targets in monolinguals indicated that these effects could not
be attributed to target difficulty. Two popular TOT accounts must be modified to explain cognate and
translatability facilitation effects, and cross-language interference cannot explain bilinguals' increased
TOT rates. Instead the authors propose that, relative to monolinguals, bilinguals are less able to activate
representations specific to each language. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
3)Language and Thought: How are perception and experience reflected in thought?
1)Embodiment: sensation and
perception influence thought and
language
Embodied Cognition: The
experience of living, sensing, and
perceiving the world
fundamentally informs our
conception of it
2)Metaphor: conceptual structure
of thought and language
Embodied Cognitions and
metaphor
Example: the experience of being in a room i.e. containment
You understand containment because of the embodied experience of: having been
contained:(locked in a room, being in an airplane)
knowing that the human body is subject to certain restrictions e.g. you have a physical
form that cannot pass through walls or teleport.
Lakoff & Johnson (1980): these embodied concepts underlie thought and language
through conceptual metaphors:
Eg an abstract state of being is a container:
I’m in the room/ He fell into a hole/ You’ll never get out of this lecture hall.
I’m in love/ He fell into depression/ You’ll never get out of trouble.
Not just “figures of speech” but fundamental conceptual frameworks
Is it possible to reason or communicate about abstractions without metaphor?
Up and Down
 Pecher (2010): are these things found in the sky or the ocean? Participants were given a list of words, then the words were presented at the top or bottom of the screen. Reactions were slower when the type of word (sky or ocean) didn’t match the word’s on-screen position (up or down).
 Results indicate that people may perform a mental simulation of the task-congruent location, which directs spatial attention and facilitates processing of targets in that location.
An Arm and A Leg
 Pulvermüller et al. (2005): applied TMS to motor regions of the brain for the arm or the leg.
 Faster lexical decisions for leg-related words (“kick”) with leg-region stimulation, and faster decisions for arm-related words (“pick”) with arm-region stimulation.
 Language is not modular or abstract but an integrated part of experience.
Thorough review: Fischer & Zwaan, 2008
Summary: The way we think is shaped by our physically embodied experience. Language is connected to physical representations and processing centres. Language is not abstract and modular (i.e. disconnected from experience).
As we learned earlier, the priority of the language faculty seems to be opportunity over efficiency.
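The congruency logic in the Pecher (2010) study can be sketched as a tiny analysis. This is a hedged illustration: the word list, typical locations, and reaction times are invented for demonstration, not the study's data.

```python
# Hypothetical sketch: a congruency effect is the mean RT difference between
# trials where a word's typical location mismatches its screen position and
# trials where it matches.

TYPICAL = {"helicopter": "top", "cloud": "top", "shark": "bottom", "coral": "bottom"}

def congruency_effect(trials):
    """trials: list of (word, screen_position, rt_ms). Returns mismatch minus match mean RT."""
    match = [rt for w, pos, rt in trials if TYPICAL[w] == pos]
    mismatch = [rt for w, pos, rt in trials if TYPICAL[w] != pos]
    return sum(mismatch) / len(mismatch) - sum(match) / len(match)

trials = [("helicopter", "top", 610), ("shark", "bottom", 600),
          ("cloud", "bottom", 655), ("coral", "top", 665)]
print(congruency_effect(trials))  # positive value → slower when location mismatches
```

A positive effect is what the mental-simulation account predicts: simulating a word's typical location directs attention there, so incongruent positions cost time.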
3)Imagery: Sensory information
evoked by language
 Zwaan & Pecher (2012), replicating Lynott (2007): further evidence that language is not modular but an integrated part of experience.
 Reading a sentence like “Sarah stopped in the woods to pick a leaf”, participants then picked which leaf appeared in the sentence. Participants were faster for a green leaf than a brown one.
 Connell and Lynott (2009): participants read a sentence implying a particular colour for the target:
“Joe was excited to see a bear in the woods” → brown bear (typical prime)
“Joe was excited to see a bear at the North Pole” → white bear (atypical prime)
 Stroop: participants were then asked to name the ink colour of a target word (bear) in three conditions: typical (brown), atypical (white), and unrelated (yellow).
 The colour implied by the sentence affected naming speed: in both the typical and atypical conditions people were quick to name brown and equally slow to name yellow, but responses to white changed depending on the sentence.
 Suggests that the colour we expect something to be is automatically evoked by language, as part of the concept/representation in the mind.
Embodied Cognition Summary: Language, thought, and concepts are fundamentally intertwined. Words reflect the embodied
experience of existing in the world. Simulation of important characteristics- Motion/action, direction, colour, etc.
Does Language Shape Thought?
Piaget- Thought comes before language, because language requires an underlying conceptual structure.
Vygotsky- Language comes before thought, which is a self-directed inner speech. They are therefore both influenced by social
systems and cultures. Children use language to think aloud, and later internalise these thoughts.
Cross-cultural studies: Does the language you speak shape the way you think?
Sapir- Speakers have to pay attention to different aspects of reality to produce sentences.
Whorf (1956): using different languages causes people to think differently because the grammar points us towards different
types of observations. Therefore, language influences thinking and perception of the world.
Proposed the idea of linguistic relativity based on studying Native American Languages.
The Sapir-Whorf hypothesis - Degrees of Whorfianism: Implication: Fundamental categories are not in the world, but are
imposed by culture, and can be challenged.
1)Linguistic determinism (strong Whorfianism) = Language
determines/constrains thought – different languages incorporate different
world views, which determines how people think. (George Orwell
newspeak)
2)Linguistic relativism (weak /better Whorfianism) = Language biases our
perception of the world – native language influences the way its speakers think
and perceive the world.
If your language doesn’t have “a word for” a particular idea/concept, you can’t conceive of or understand it → “untranslatable” words
What does it mean to “have a word for” something? 虹, “rainbow”, “arc-en-ciel”
“Untranslatable” words: the response is “oh yeah!”, not “huh?” How far does this go?
For example, a culture that has different words for two related objects
will think about those objects differently, whereas a culture that only
has one word would treat them more similarly.
Whorf: Look at the patterns a language does/doesn’t have
Claimed that Hopi (Native American language) has “no words, grammatical
forms, construction or expressions that refer directly to what we call ‘time’”
and therefore the Hopi had "no general notion or intuition of time as a
smooth flowing continuum in which everything in the universe proceeds at
equal rate, out of a future, through the present, into a past” (Whorf, 1956)
Malotki (1983): 600-page discourse on grammar of time in Hopi
Problems with the Sapir-Whorf Hypothesis
 Whorf’s ideas are circular – Apaches speak differently, so must think differently. We know they think differently because they speak differently. There is no independent evidence that they think differently.
 Word-for-word translations between languages sound clumsy – this doesn’t necessarily indicate a different way of thinking.
Evidence for Sapir-Whorf Hypothesis
The Hopi people have a different concept of time than European languages do – Europeans have a concept of time and matter due to conditioning by
their language. Most experiments have tested whether words have an effect on memory or categorisation.
Linguistic Relativity: Colour perception: We see objects in different hues depending on the light they reflect, despite the fact that light is a continuous
dimension. Languages differ in their words for different colours: depends on language (Berlin & Kay, 1979)
Labels help us to discriminate colour differences, even when the colours have the same magnitude of differences. Distinctions important for one’s
language makes us better at perceiving the differences between these colour chips.
The Munsell Colour System: Three dimensions: hue (colour), chroma (saturation) and value (lightness). Used in research to look at differences in
colour perceptions. Adjacent steps between colours have equal magnitude in differences between them
Gilbert et al. – Participants were presented with a circle of Munsell colour chips, and were asked to distinguish whether the different-coloured chip was on the left or right. It was easier for participants to do this if their language had a categorical distinction between the majority colour and the separate colour. For English speakers, chips that fell within the same ‘green’ or ‘blue’ category took longer to be identified, despite all differences between the four colours being of the same magnitude.
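The boundary logic can be made concrete with a toy sketch. The hue steps and category labels below are hypothetical stand-ins for Munsell chips, used only to show the key point: adjacent chips are all one equal step apart, yet some pairs straddle a linguistic category boundary and some do not.

```python
# Hypothetical sketch: equal-step "chips" labelled with language categories.
# Pairs that cross a category boundary are the ones discriminated faster,
# even though every adjacent pair differs by the same hue magnitude.

chips = [("green", 1), ("green", 2), ("blue", 3), ("blue", 4)]  # equal 1-step hues

def pair_type(a, b):
    """Classify a chip pair as within- or between-category."""
    (cat_a, _), (cat_b, _) = a, b
    return "between" if cat_a != cat_b else "within"

pairs = list(zip(chips, chips[1:]))  # adjacent pairs, all the same hue distance
print([pair_type(a, b) for a, b in pairs])  # → ['within', 'between', 'within']
```

The categorical-perception claim is that only the 'between' pair gets the speed advantage, and which pairs count as 'between' depends on the speaker's language.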
Heider - Studied the Dani tribe in New Guinea, who only have two colour terms – light and dark. They were presented with Munsell colour chips and
were asked to recognise which of two chips had already been seen. They were either a focal colour (bright colour, eg. pillarbox red) or non-focal colour
(dull, eg. maroon). The Dani didn’t have different names for these, but could still recognise the focal colours more easily (as English speakers would),
and could distinguish between different colours that they didn’t have different names for. Despite no means of categorisation, the Dani had the same
categorical representation of colour. Therefore, their language didn’t influence their colour
perception.
Roberson et al. (2000): Participants given colour chips and asked to name the colour in one word.
Berinmo (New Guinea tribe) has only 5 words for the entire colour spectrum, and it was found
that colours within those regions look more similar to Berinmo than they do to English speakers. In
3 different tasks, showed that categorical perception was more closely aligned with the linguistic
categories of the language than with the underlying perceptual categories, i.e. English speakers
were much better at distinguishing between the different colours of the chips. Similarly, the
Berinmo tribe were better at making a distinction between wor and nol, whilst English speakers
would call both of these ‘green’ and so were slower to recognise the chip.
Across tasks (similarity judgements, category learning, recognition memory) categorical perception of colour was aligned with colour terms.
Suggests that perception/thought is guided by language categories.
Kay & Regier:
In some studies, people remember colours that have readily available names in their language. However, even colours without names are recalled quite
well. Therefore, research doesn’t support a strong version of the linguistic determinism hypothesis. They are consistent with a weaker form of the
Sapir-Whorf hypothesis – our language influences our ways of thinking and categorising in a biasing way, rather than a definite way.
Evidence: Linguistic Relativity: Who Dunnit (Fausey &
Boroditsky, 2010)
 English and Spanish speakers watched two clips of a vase being broken, accidentally and intentionally.
 Study 1: What happened? Differences in language. For intentional events: no difference. For accidental events: English speakers used more agentive descriptions (“She broke…”) than Spanish speakers.
 Study 2: Who did it? Differences in memory. For intentional events: no difference. For accidental events: English speakers remembered the correct actor more frequently than Spanish speakers.
 Object orientation memory task: no baseline language differences in memory ability.
 Conclusion: differences in language influenced the encoding/memory of the event.
See also Boroditsky (2001): time metaphors in Mandarin and English; Chen (2007): failure to replicate; Fuhrman et al. (2011): additional evidence.
Is thought possible without language?
 Pinker- A Mexican immigrant who was deaf and lacked language (he didn’t sign, write, or speak) was able to communicate concepts when taught how to, and could talk about things that happened in his childhood. This indicates that he did have thoughts, and could subsequently communicate these, despite lacking language when those memories were being formed.
 Feral children who lack language are also able to communicate their previous memories if later taught language.
 Pre-verbal babies are able to represent concepts and reason in the absence of language (can be studied by eye movements etc.).
 Speech and language deficits (like aphasia) do not necessarily impair thinking and reasoning.
 Adults with speech can think in non-verbal ways (movement sequences and visual imagery).
The language of thought
 Do we use English to think, or a language of thought, ‘mentalese’, that is not the same as any of the world’s languages?
 Pinker- English or any other language cannot serve as our internal medium of computation. For logical reasons this must be the case – a sentence in English could not be processed without help from some underlying understanding.
 Synonymy might give us clues – several sentences can refer to the same event. Mentalese has to represent that they all refer to the same thing.
 Eg: Sam sprayed paint onto the wall / Sam sprayed the wall with paint / Paint was sprayed onto the wall by Sam / The wall was sprayed with paint by Sam
 We must represent this as: [Sam spray paint] cause [paint to go [on wall]]
 We therefore have basic mental ideas without language syntax since we can express the same idea in different ways.
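The point above can be sketched in code. This is a toy illustration, not Pinker's notation: the nested-tuple "mentalese" structure and the keyword-matching "translator" are invented for demonstration, to show that several surface syntaxes can share a single underlying conceptual representation.

```python
# Hypothetical sketch: four English paraphrases map onto one shared
# conceptual structure, [Sam spray paint] cause [paint go [on wall]].

MENTALESE = ("cause", ("Sam", "spray", "paint"), ("go", "paint", ("on", "wall")))

paraphrases = [
    "Sam sprayed paint onto the wall",
    "Sam sprayed the wall with paint",
    "Paint was sprayed onto the wall by Sam",
    "The wall was sprayed with paint by Sam",
]

def to_mentalese(sentence):
    """Toy 'translator': recognise these synonymous forms and return the
    shared conceptual structure (syntax varies, meaning does not)."""
    words = set(sentence.lower().split())
    if {"sam", "sprayed", "paint", "wall"} <= words:
        return MENTALESE
    raise ValueError("unknown sentence")

# All four sentences collapse to a single representation
assert len({to_mentalese(s) for s in paraphrases}) == 1
```

Knowing a language, on this view, is knowing how to translate between such structures and strings of words in both directions.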
Summary
 People think in ‘mentalese’ – a mental language which has symbols for concepts and arrangements of symbols but is different from
verbal languages due to being richer in some areas and simpler in others.
o Richer – a single ambiguous word must correspond to several distinct concept symbols, one per sense
o Simpler – some words (like ‘a’ or ‘the’) are absent, and pronunciation and word order are unnecessary
 Therefore, knowing a language is knowing how to translate mentalese into strings of words, and vice versa.
 We have moved from thinking that language shapes thought, and therefore that thought is relative, to thinking that thought is couched in mentalese, which is universal across all languages.
Reading Language and Thought
Embodied Cognition
Lakoff & Johnson (1980): Metaphors We Live By
• Our conceptual system is largely metaphorical, so it structures everything we experience.
• Most of our ordinary conceptual system is metaphorical in nature.
• Many of the things we do in arguing are partially structured by the concept of war.
• The essence of metaphor is understanding and experiencing one kind of thing in terms of another.
• Eg theories are buildings (“construct”)
• Eg ideas are food (“left a bad taste in my mouth”)
• Eg ideas are people (“gave birth to”)
• Eg ideas are plants (“the idea came to fruition”)
• Eg love is a patient (“a sick relationship”)
• Metaphorical language is embedded in culture.
• These examples are conventional metaphors: they structure the ordinary conceptual system of our culture, which is reflected in our everyday language.
• Metaphors arise from beliefs that are connected to our memories – love is work.
• Metaphor does not merely entail other concepts, but very specific aspects of these concepts.
• Metaphors can thus be apt because they sanction actions, justify inferences, and help us set goals – love is work, so it needs to be worked on.
• The meaning a metaphor has depends on a person’s culture and past experiences.
• Eg “the solution to my problems” can be seen as an actual liquid solution that keeps being drunk by others and spread, so our way of dealing with problems is itself metaphorical.
• New metaphors have the power to create a new reality.
• The idea that metaphors can create realities goes against most traditional views of metaphor.
• A metaphor may thus be a guide for future action.
• Metaphor is one of our most important tools for trying to comprehend partially what cannot be comprehended totally.
Pecher, D., Van Dantzig, S., Boot, I.,
Zanolie, K., and Huber, D. E. (2010):
Ocean vs sky words
We report an experiment that compared two explanations for the effect of congruency between a
word’s on screen spatial position and its meaning. On one account, congruency is explained by
the match between position and a mental simulation of meaning. Alternatively, congruency is
explained by the polarity alignment principle. To distinguish between these accounts we
presented the same object names (e.g., shark, helicopter) in a sky decision task or an ocean
decision task, such that response polarity and typical location were disentangled. Sky decision
responses were faster to words at the top of the screen compared to words at the bottom of the
screen, but the reverse was found for ocean decision responses. These results are problematic for
the polarity principle, and support the claim that spatial attention is directed by mental simulation
of the task-relevant conceptual dimension.
Pulvermüller et al. (2005): TMS and
motor regions
Transcranial magnetic stimulation (TMS) was applied to motor areas in the left language-dominant hemisphere while right-handed human subjects made lexical decisions on words related
to actions. Response times to words referring to leg actions (e.g. kick) were compared with those
to words referring to movements involving the arms and hands (e.g. pick). TMS of hand and leg
areas influenced the processing of arm and leg words differentially, as documented by a
significant interaction of the factors Stimulation site and Word category. Arm area TMS led to
faster arm than leg word responses and the reverse effect, faster lexical decisions on leg than arm
words, was present when TMS was applied to leg areas. TMS-related differences between word
categories were not seen in control conditions, when TMS was applied to hand and leg areas in
the right hemisphere and during sham stimulation. Our results show that the left hemispheric
cortical systems for language and action are linked to each other in a category-specific manner
and that activation in motor and premotor areas can influence the processing of specific kinds of
words semantically related to arm or leg actions. By demonstrating specific functional links
between action and language systems during lexical processing, these results call into question
modular theories of language and motor functions and provide evidence that the two systems
interact in the processing of meaningful information about language and action.
Zwaan & Pecher (2012) - replicating
Lynott (2007): leaves in the woods
The notion of language comprehension as mental simulation has become popular in cognitive
science. We revisit some of the original empirical evidence for this. Specifically, we attempted to
replicate the findings from earlier studies that examined the mental simulation of object
orientation, shape, and color, respectively, in sentence-picture verification. For each of these sets
of findings, we conducted two web-based replication attempts using Amazon's Mechanical Turk.
Our results are mixed. Participants responded faster to pictures that matched the orientation or
shape implied by the sentence, replicating the original findings. The effect was larger and
stronger for shape than orientation. Participants also responded faster to pictures that matched the
color implied by the sentence, whereas the original studies obtained mismatch advantages. We
argue that these results support mental simulation theory, show the importance of replication
studies, and show the viability of web-based data collection.
Connell and Lynott (2009): bears in
the woods
Sapir-Whorf Hypothesis: Colour Categories
Whorf (1956): Hopi and time
Malotki (1983): grammar of Hopi (including time!)
Berlin & Kay (1979): categories by language
Fausey & Boroditsky (2010): Who dunnit?
Color is undeniably important to object representations, but so too is the ability of context to alter
the color of an object. The present study examined how implied perceptual information about
typical and atypical colors is represented during language comprehension. Participants read
sentences that implied a (typical or atypical) color for a target object and then performed a
modified Stroop task in which they named the ink color of the target word (typical, atypical, or
unrelated). Results showed that color naming was facilitated both when ink color was typical
for that object (e.g., bear in brown ink) and when it matched the color implied by the previous
sentence (e.g., bear in white ink following Joe was excited to see a bear at the North Pole).
These findings suggest that unusual contexts cause people to represent in parallel both typical and
scenario-specific perceptual information, and these types of information are discussed in relation
to the specialization of perceptual simulations.
When bad things happen, how do we decide who is to blame and how much they should be
punished? In the present studies, we examined whether subtly different linguistic descriptions of
accidents influence how much people blame and punish those involved. In three studies,
participants judged how much people involved in particular accidents should be blamed and how
much they should have to pay for the resulting damage. The language used to describe the
accidents differed subtly across conditions: Either agentive (transitive) or non- agentive
(intransitive) verb forms were used. Agentive descriptions led participants to attribute more
blame and request higher financial penalties than did nonagentive descriptions. Further, linguistic
framing influenced judgments, even when participants reasoned about a well-known event, such
as the “wardrobe malfunction” of Super Bowl 2004. Importantly, this effect of language held,
even when people were able to see a video of the event. These results demonstrate that even
when people have rich established knowledge and visual information about events, linguistic framing can shape event construal, with important real-world
consequences. Subtle differences in linguistic descriptions can change how people construe what
happened, attribute blame, and dole
out punishment. Supplemental results and analyses may be downloaded from
http://pbr.psychonomic-journals.org/content/supplemental.
4)Unusual Abilities
 Unusual language – The many ways language varies amongst individuals- multilingualism, specific language
impairments, apraxia, agnosia, ASD, enhanced language production or processing
 Why study: understand nature of perception for someone with atypical experience, may lead to therapies with
experience, where it breaks down can tell about process
 Enhanced language production or processing- poets, writers, orators, memory athletes
Sign language - Wu & Addanki, 2015- “We describe an unconventional line of attack in our quest to teach machines how to
rap battle by improvising hip hop lyrics on the fly…”
Is Sign Language a Language?
 Sign languages are distinct languages from the spoken “version”
 Have distinct grammar, vocabulary, prosody, slang, etc
 BSL and ASL are not dialects like British and American English- Share some similar
words but are not mutually intelligible- Also different from Sign-Supported Language or
Makaton
 Arises independently from the surrounding spoken language- Example: spontaneous
development in Nicaragua (see Senghas, Kita, & Özyürek, 2004)
 Lesions to the left hemisphere, and Broca’s and Wernicke’s areas specifically, result in
similar patterns of impairment- Sign language recruits the same brain areas as spoken
language
Minimal Pairs
 Minimal pairs in spoken language: two words with different meanings that differ only by one sound
 E.g. pat/bat/cat/sat/mat/hat all differ only by the first sound
 ASL also has minimal pairs (Emmorey, 1993)
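The minimal-pair idea above can be sketched as a simple check. This is an illustrative simplification: letters stand in for phonemes here, whereas real minimal pairs are defined over sounds (and, in sign languages, over parameters like handshape and location).

```python
# Hypothetical sketch: a minimal pair is two equal-length words that
# differ in exactly one segment.

def is_minimal_pair(w1, w2):
    return (len(w1) == len(w2)
            and sum(a != b for a, b in zip(w1, w2)) == 1)

words = ["pat", "bat", "cat", "sat", "mat", "hat", "pit"]
pairs = [(a, b) for i, a in enumerate(words)
         for b in words[i + 1:] if is_minimal_pair(a, b)]
print(pairs[:3])  # → [('pat', 'bat'), ('pat', 'cat'), ('pat', 'sat')]
```

Note that pat/pit also qualifies (one segment differs, just not the first one), which is why the definition counts differing positions rather than checking only the initial segment.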
Babbling in Sign Language
 Babies exposed to sign “babble” with their hands the same way speech-exposed babies do with sounds (“ba ba ba”) (Petitto & Marentette, 1991; Petitto et al., 2004)
Differences to Spoken Language
Reilly et al., 1990: Grammaticized facial expressions
- Particular facial expressions are obligatory for grammatical communication
- Children acquired the signs and expressions incrementally
- Leads to enhanced facial discrimination ability (Bettger et al., 1997)
 Iconicity?
- Sign languages are not gesture or pantomime – but contain iconic elements
(Perniss, 2010)
- Can be used to test theories of embodiment/ grounded cognition (Borghi et
al., 2014)
Summary: Sign languages are productive, fully realized languages entirely separate from the spoken languages native to the
same country. Distinct systems of grammar, vocabulary, phonology/morphology, etc. They share underlying brain areas with
spoken languages Ie a language is a language, no matter what modality. They can provide insight into how language is acquired
and used
Synaesthesia “a mixing of the senses”
 Scientific: a neuropsychological condition in which a stimulus presented in one sensory modality automatically and
consistently induces a concurrent experience in the same or different modality- Still not quite right…
 Ordinal linguistic personification (OLP): Personalities for e.g. numbers and letters. Neither personalities nor letters are
a “sensory modality”
Grapheme-Colour Synaesthesia  Seeing words or letters automatically and consistently evokes experiences of colour:
 Where do these colours appear?
 Associator: in the mind’s eye
 Projector: in the visual field- Some coloured letters, some patches of colour
Where do the colours come from?
Are colours for letters and words idiosyncratically associated (at random)?
OR are dimensions of synaesthetic colour systematically mapped onto dimensions of
language or concepts?
If they are…
We can use synaesthesia to investigate how language is used
Like contrast dye in a brain scan – where do the same colours show up?
Interesting both for individual letters (grapheme processing) and for whole words
(lexical processing)
Synaesthesia: Colouring Trends
 Consistently reported trends in letter-colour associations
 Rich et al., 2005; Simner et al., 2005; Witthoft, Winawer, & Eagleman, 2015
 Found in both synaesthetes and non-synaesthetes – but the associations differ
 Are these explicitly learned? Innate? Conceptual/semantic?
Testing the Apple Hypothesis
 Mankin & Simner (2017)
- Exp 1: is A actually for apple?
- Exp 2: are apples actually red?
- Exp 3: does index word colour match letter colour?
Cross-Linguistic Influence
 Root et al. (2018)
- Tested different influences for the A → red connection in English, Dutch, Japanese, Spanish, and Korean
- Found that the first letter of the alphabet is red in all these languages
- Different languages may have different letter → colour influences!
Synaesthesia and Language
 Typically a word is coloured by a particular letter (Ward et al., 2005)
- First consonant: R → rain; first vowel: A → rain
 Vowels carry prosody (stress, length, intonation)
 Simner, Glover, & Mowat (2006): contrasting stress for vowel-colour
- Cannon “CAN-non” vs cadet “ca-DET”
- The word was coloured like the stressed vowel
- Synaesthesia is sensitive to intonation/prosody even in purely visual input
- WARNING: single case study!
Compounds
 Compound word: composed of two constituent words
- rain + bow = rainbow (remember from last week?)
 If you have rain and bow, what colour is rainbow?
 Number of colours as a measure of lexicalisation
- One colour for rainbow → stored as a whole word → lexicalised
- Two colours for rainbow → stored as constituents → decomposed
- Depends on frequency → frequency-based lexicalisation
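The frequency-based lexicalisation idea can be sketched as a tiny decision rule. This is a hedged illustration: the corpus counts and the threshold below are made up, and real lexicalisation is graded rather than a hard cut-off.

```python
# Hypothetical sketch: a high-frequency compound behaves as one stored word
# (one synaesthetic colour); a low-frequency compound is decomposed into its
# constituents (two colours).

FREQ = {"rainbow": 5200, "bowstring": 140}  # invented corpus counts
LEXICALISED_THRESHOLD = 1000                # invented cut-off for illustration

def predicted_colours(compound, parts):
    """Return the units predicted to carry colour for this compound."""
    if FREQ.get(compound, 0) >= LEXICALISED_THRESHOLD:
        return [compound]      # one colour: stored as a whole word
    return list(parts)         # two colours: decomposed into constituents

print(predicted_colours("rainbow", ("rain", "bow")))      # → ['rainbow']
print(predicted_colours("bowstring", ("bow", "string")))  # → ['bow', 'string']
```

On this view, counting a synaesthete's colours for a compound becomes a behavioural probe of whether that compound is stored whole or assembled from parts.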
Morphology: Structure and Processing
Interactions
Bankieris & Simner (2015): Synaesthetes performed better than non-synaesthetes
at guessing the meaning of words they didn’t know
 Sound symbolism and synaesthesia may have similar
underlying connections
 Atkinson et al. (2016): Fingerspelling-colour
synaesthesia
 Simner & Ward (2006): In tip-of-the-tongue states,
lexical-gustatory synaesthetes experience the
word’s taste before they can retrieve its form
Synaesthesia Summary: Synaesthesia is meaningfully mapped onto features of language: Word frequency and
morphology, Prosody and stress, Sound symbolism and iconicity
Synaesthesia as “enhanced” language ability?- Evidence of enhanced creativity, memory, etc…not the same thing
Use synaesthesia as a tool to study normal/general language processes
Reading Unusual abilities: Sign Language & Synaesthesia
Sign Language
Senghas, Kita, & Özyürek, 2004: Development of Nicaraguan sign language
Cardin et al. (2016): same brain areas for phoneme differentiation
One of the central goals in research on language acquisition is to discover what knowledge and abilities
children bring to the learning situation. Never before in the history of language research has there been a better
opportunity to ask this question than the current situation in Nicaragua, where young children deprived of
exposure to any language are inventing a new one.
Only sixteen years ago, public schools for deaf children were first established in Nicaragua. Despite the fact
that these schools advocated an oral, rather than signing, approach to education, they served as a magnet for a
new community of deaf children who had not previously had contact with one another. Consequently, these
children created their own indigenous sign language. The language is not a simple code or gesture system; it
has already evolved into a full, natural language. It is independent from Spanish, the spoken language of the
region, and is unrelated to American Sign Language (ASL), the sign language used in most of North America.
The present study examines how this first generation of signers is imposing grammatical structure on their
sign language as it develops. The method which guides this work is one that is central to language acquisition
research: by examining the structure evident in the children's sign language production, and subtracting from
that the portion present in the language to which the children were originally exposed, one can discover the
children's contribution.
The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the
language signal. Here we investigated the neurocognitive processes underlying the monitoring of two
phonological parameters of sign languages: handshape and location. Our goal was to determine if brain
regions processing sensorimotor characteristics of different phonological parameters of sign languages were
also involved in phonological processing, with their activity being modulated by the linguistic content of
manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure
and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign
language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British
Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological
parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing
nonsigners. Results show that the linguistic processing of different phonological parameters of sign language
is independent of the sensorimotor characteristics of the language signal. Handshape and location were
processed by different perceptual and task-related brain networks but recruited the same language areas. The
semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns
being associated with longer RTs and stronger activations in an action observation network in all participants
and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands
for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge
of signed languages. We suggest that the phonological characteristics of a language may arise as a
consequence of more efficient neural processing for its perception and production.
Petitto & Marentette (1991); Petitto et al. (2004): Deaf babies “babbling” with their hands
The “ba, ba, ba” sound universal to babies’ babbling around 7 months captures scientific attention because it
provides insights into the mechanisms underlying language acquisition and vestiges of its evolutionary
origins. Yet the prevailing mystery is what is the biological basis of babbling, with one hypothesis being that it
is a non-linguistic motoric activity driven largely by the baby’s emerging control over the mouth and jaw, and
another being that it is a linguistic activity reflecting the babies’ early sensitivity to specific phonetic–syllabic
patterns. Two groups of hearing babies were studied over time (ages 6, 10, and 12 months), equal in all
developmental respects except for the modality of language input (mouth versus hand): three hearing babies
acquiring spoken language (group 1: “speech-exposed”) and a rare group of three hearing babies acquiring
sign language only, not speech (group 2: “sign-exposed”). Despite this latter group’s exposure to sign, the
motoric hypothesis would predict similar hand activity to that seen in speech-exposed hearing babies because
language acquisition in sign-exposed babies does not involve the mouth. Using innovative quantitative
Optotrak 3-D motion-tracking technology, applied here for the first time to study infant language acquisition,
we obtained physical measurements similar to a speech spectrogram, but for the hands. Here we discovered
that the specific rhythmic frequencies of the hands of the sign-exposed hearing babies differed depending on
whether they were producing linguistic activity, which they produced at a low frequency of approximately 1
Hz, versus non-linguistic activity, which they produced at a higher frequency of approximately 2.5 Hz – the
identical class of hand activity that the speech-exposed hearing babies produced nearly exclusively.
Surprisingly, without benefit of the mouth, hearing sign-exposed babies alone babbled systematically on their
hands. We conclude that babbling is fundamentally a linguistic activity and explain why the differentiation
between linguistic and non-linguistic hand activity in a single manual modality (one distinct from the human
mouth) could only have resulted if all babies are born with a sensitivity to specific rhythmic patterns at the
heart of human language and the capacity to use them.
Reilly et al. (1990): Grammaticized facial expressions
An unusual facet of American Sign Language (ASL) is its use of grammaticized facial expression. In this
study, we examine the acquisition of conditional sentences in ASL by 14 deaf children (ages 3;3–8;4) of deaf
parents. Conditional sentences were chosen because they entail the use of both manual signs and
grammaticized non-manual facial expressions. The results indicate that the children first acquire manual
conditional signs, e.g., SUPPOSE, before they use the obligatory grammaticized conditional facial expression.
Moreover, the children acquire the constellation of obligatory non-manual behaviors component by
component, rather than holistically.
Perniss (2010): Iconicity
Current views about language are dominated by the idea of arbitrary connections between linguistic form and
meaning. However, if we look beyond the more familiar Indo-European languages and also include both
spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact,
pervasive in language. In this paper, we review the different types of iconic mappings that characterize
languages in both modalities, including the predominantly visually iconic mappings found in signed
languages. Having shown that iconic mappings are present across languages, we then proceed to review
evidence showing that language users (signers and speakers) exploit iconicity in language processing and
language acquisition. While not discounting the presence and importance of arbitrariness in language, we put
forward the idea that iconicity need also be recognized as a general property of language, which may serve the
function of reducing the gap between linguistic form and conceptual representation to allow the language
system to “hook up” to motor, perceptual, and affective experience.
Synaesthesia
Borghi et al. (2014): Using sign language to test theories of embodiment
One of the most important challenges for embodied and grounded theories of cognition concerns the
representation of abstract concepts, such as “freedom.” Many embodied theories of abstract concepts have
been proposed. Some proposals stress the similarities between concrete and abstract concepts, showing that
they are both grounded in the perception and action systems, while others emphasize their differences, favoring a
multiple representation view. An influential view proposes that abstract concepts are mapped to concrete ones
through metaphors. Furthermore, some theories underline the fact that abstract concepts are grounded in
specific contents, such as situations, introspective states, and emotions. These approaches are not necessarily mutually
exclusive, since it is possible that they can account for different subsets of abstract concepts and words. One
novel and fruitful way to understand the way in which abstract concepts are represented is to analyze how sign
languages encode concepts into signs. In the present paper we will discuss these theoretical issues mostly
relying on examples taken from Italian Sign Language (LIS, Lingua dei Segni Italiana), the visual-gestural
language used within the Italian Deaf community. We will verify whether and to what extent LIS signs
provide evidence favoring the different theories of abstract concepts. In analyzing signs we will distinguish
between direct forms of involvement of the body and forms in which concepts are grounded differently, for
example relying on linguistic experience. In dealing with the LIS evidence, we will consider the possibility
that different abstract concepts are represented using different levels of embodiment. The collected evidence
will help us to discuss whether a unitary embodied theory of abstract concepts is possible or whether the
different theoretical proposals can account for different aspects of their representation.
Witthoft, Winawer, & Eagleman (2015): Trends in colour associations
In this paper we estimate the minimum prevalence of grapheme-color synesthetes with letter-color matches
learned from an external stimulus, by analyzing a large sample of English-speaking grapheme-color
synesthetes. We find that at least 6% (400/6588 participants) of the total sample learned many of their matches
from a widely available colored letter toy. Among those born in the decade after the toy began to be
manufactured, the proportion of synesthetes with learned letter-color pairings approaches 15% for some
5-year periods. Among those born 5 years or more before it was manufactured, none have colors learned from
the toy. Analysis of the letter-color matching data suggests the only difference between synesthetes with
matches to the toy and those without is exposure to the stimulus. These data indicate learning of letter-color
pairings from external contingencies can occur in a substantial fraction of synesthetes, and are consistent with
the hypothesis that grapheme-color synesthesia is a kind of conditioned mental imagery.
Mankin & Simner (2017) and Root et al.
This study investigates the origins of specific letter-colour associations experienced by people with
grapheme-colour synaesthesia. We present novel evidence that frequently observed trends in synaesthesia (e.g., A is
typically red) can be tied to orthographic associations between letters and words (e.g., 'A is for apple'), which
are typically formed during literacy acquisition. In our experiments, we first tested members of the general
population to show that certain words are consistently associated with letters of the alphabet (e.g., A is for
apple), which we named index words. Sampling from the same population, we then elicited the typical colour
associations of these index words (e.g., apples are red) and used the letter → index word → colour
connections to predict which colours and letters would be paired together based on these orthographic-semantic
influences. We then looked at direct letter-colour associations (e.g., A → red, B → blue…) from both
synaesthetes and non-synaesthetes. In both populations, we show statistically that the colour predicted by
index words matches significantly with the letter-colour mappings: that is, A → red because A is for apple and
apples are prototypically red. We therefore conclude that letter-colour associations in both synaesthetes and
non-synaesthetes are tied to early-learned letter-word associations.
Bankieris & Simner (2015): Guessing sound-symbolic words
Atkinson et al. (2016): Fingerspelling-colour synaesthesia
Sound symbolism is a property of certain words which have a direct link between their phonological form and
their semantic meaning. In certain instances, sound symbolism can allow non-native speakers to understand
the meanings of etymologically unfamiliar foreign words, although the mechanisms driving this are not well
understood. We examined whether sound symbolism might be mediated by the same types of cross-modal
processes that typify synaesthetic experiences. Synaesthesia is an inherited condition in which sensory or
cognitive stimuli (e.g., sounds, words) cause additional, unusual cross-modal percepts (e.g., sounds trigger
colours, words trigger tastes). Synaesthesia may be an exaggeration of normal cross-modal processing, and if
so, there may be a link between synaesthesia and the type of cross-modality inherent in sound symbolism. To
test this we predicted that synaesthetes would have superior understanding of unfamiliar (sound symbolic)
foreign words. In our study, 19 grapheme-colour synaesthetes and 57 non-synaesthete controls were presented
with 400 adjectives from 10 unfamiliar languages and were asked to guess the meaning of each word in a two-alternative forced-choice task. Both groups showed superior understanding compared to chance levels, but
synaesthetes significantly outperformed controls. This heightened ability suggests that sound symbolism may
rely on the types of cross-modal integration that drive synaesthetes’ unusual experiences. It also suggests that
synaesthesia endows or co-occurs with heightened multi-modal skills, and that this can arise in domains
unrelated to the specific form of synaesthesia.
Many synesthetes experience colors when viewing letters or digits. We document, for the first time, an
analogous phenomenon among users of signed languages who showed color synesthesia for fingerspelled
letters and signed numerals. Four synesthetes experienced colors when they viewed manual letters and
numerals (in two cases, colors were subjectively projected on to the hands). There was a correspondence
between the colors experienced for written graphemes and their manual counterparts, suggesting that the
development of these two types of synesthesia is interdependent despite the fact that these systems are
superficially distinct and rely on different perceptual recognition mechanisms in the brain.
6) The Psychology of Thinking
1) Planning:
• We should plan more: it reduces anxiety and cost and is a way of ensuring you will achieve your
goals, but most people don’t do it.
• Problem solving is goal directed, requires personal effort, and involves both conscious AND unconscious
(“eureka” moment) cognitive processes.
• 3 parts of a problem: 1) the problem (start state), 2) the things you might do (operators), 3) the solution
(goal state).
• Tower of Hanoi: the 3-disc version requires a minimum of 7 moves; to achieve this, you have to first move away from the goal.
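The 7-move figure can be checked with a short sketch (illustrative, not from the lecture) of the classic recursive Tower of Hanoi solution; the peg labels A/B/C are arbitrary.

```python
# Minimal recursive Tower of Hanoi solver; records every move made.

def hanoi(n, source, spare, target, moves):
    """Move n discs from source to target, using spare as a staging peg."""
    if n == 0:
        return
    hanoi(n - 1, source, target, spare, moves)  # clear the smaller discs away
    moves.append((source, target))              # move the largest remaining disc
    hanoi(n - 1, spare, source, target, moves)  # re-stack the smaller discs

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves))  # 7: the minimum for 3 discs (2**3 - 1)
```

Note how the intermediate moves repeatedly shift discs away from the target peg before the goal becomes reachable, which is the “move away from the goal” point above.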
What is Planning?
• Planning: deciding on the order and intensity of decomposition of a problem (you can’t solve a problem directly; you have to decompose it into sub-goals,
into manageable chunks), and determining the consequences of alternative plans.
• Planning involves search through the problem space (we mentally represent problems and mentally search for solutions).
• Search is guided by heuristics: rules of thumb that are known to make progress towards a goal but are not guaranteed (e.g. searching someone for drugs by
following a series of rules of thumb that account for most possibilities).
• Planning is constrained (by the size of the problem, by memory, by understanding).
• Planning is mediated by external environments (plans depend on the resources available and the context).
Components of a Problem
• Initial/start state: the problem as presented to you, the only thing you have certainty about (Hanoi: discs in the wrong place).
• Goal state: your aim/intention/desired outcome.
• Operators: things you can do/try/execute to move towards the goal state.
• Constraints: limitations on what you can do/try/execute, e.g. time constraints.
• Additional requirements/rules (e.g., how quickly you need to solve it, accuracy, latency, etc.)
Example of Problem Solving:
• The brilliant design of a car is constrained by things like the domain, the brief, the market and tradition, and requires
creativity. It is a design product that has gone through design planning.
• “Design may be the ultimate expression of human thought.” (H.A. Simon, Sciences of the
Artificial, 1981). Herb Simon came up with the concept of “bounded rationality”: economics
operated on the assumption that people’s choices are rational, but he introduced the notion that
people aren’t fully rational and don’t always do the thing that will make the most progress.
• How do you build the car? This is an example of problem decomposition or sub-goal specification: all the
different parts of the car need to be thought of and constructed, decomposing the problem into much
smaller problems until it’s doable.
• Simon’s theory: problem solving is a process of decomposing the problem into its sub-goals until
you get to a point where you can apply an operator (actually do something).
Ways to decompose a problem into lots of sub-goals until you get to a point where you can apply an operator.
3 decomposition strategies:
1) Breadth-first decomposition: look at all the parts together first (the overall goal) before working on the individual parts. Advantage: minimal commitment; you
don’t yet commit to applying operators (doing things).
2) Depth-first decomposition (the opposite): apply operators at individual parts.
Advantage: immediate feedback and lower cognitive load; you don’t have to think of every single move before making a move.
3) Opportunistic decomposition (not a great way): capitalize on the current state, e.g. “it just so happens an expert on windows is here, so we’ll start on that first”.
Advantage: effective use of time-limited resources.
A skilful planner will know when to switch between breadth-first, depth-first and opportunistic planning. See Ball, L.J. & Ormerod, T.C. (1995). Structured and opportunistic
processes in design: A critical discussion. International Journal of Human-Computer Studies, 43, 131-151.
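The breadth-first/depth-first contrast maps onto ordinary tree traversal. Below is a sketch with an invented goal tree for "build a car" (the sub-goals are illustrative, not from the lecture): breadth-first surveys every sub-goal at one level before going deeper, while depth-first commits to one branch and decomposes it fully first.

```python
from collections import deque

# Hypothetical goal tree: each goal maps to its sub-goals (leaves are operators).
goals = {
    "build car": ["design body", "build engine", "fit interior"],
    "design body": ["sketch shape", "choose materials"],
    "build engine": ["cast block", "assemble parts"],
    "fit interior": [],
    "sketch shape": [], "choose materials": [],
    "cast block": [], "assemble parts": [],
}

def breadth_first(root):
    """Survey all sub-goals at each level before going deeper (minimal commitment)."""
    order, queue = [], deque([root])
    while queue:
        g = queue.popleft()
        order.append(g)
        queue.extend(goals[g])
    return order

def depth_first(root):
    """Commit to one sub-goal and decompose it fully before the next (early feedback)."""
    order, stack = [], [root]
    while stack:
        g = stack.pop()
        order.append(g)
        stack.extend(reversed(goals[g]))
    return order

print(breadth_first("build car"))
print(depth_first("build car"))
```

Breadth-first visits "design body", "build engine" and "fit interior" before any operator; depth-first reaches the operator "sketch shape" after only two steps, which is where its immediate-feedback advantage comes from.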
What is the problem space? The mental representation of a problem:
1. State space: all the different states you can find yourself in between the initial state and the goal state.
2. Task environment: how the problem is presented.
3. Information processing system: the human mind, which does the problem solving.
Simon modelled this as a computer programme that solves problems by searching for a route from the initial state to the goal state.
The State Space: all possible paths between the initial state and goal state.
• The larger it is, the harder a problem will be to solve.
• Newell & Simon (1972): the concepts of ‘bounded rationality’ and ‘satisficing’. As humans we would like to think
that we consider every possibility and optimize planning, but we are bounded by the limitations of our information
processing system, by the state space, and by resources and an environment that don’t allow us unlimited time. We are not optimizers;
we are satisficers.
• Tower of Hanoi: the state space contains all possible ways to go about it.
The Task Environment: the way a problem is presented to the solver.
• Format (display type)
• Thematic content (e.g., familiarity, meaning)
• Conditions under which you have to perform the task (e.g., criticality; risk)
• Zhang, J., & Norman, D. A. (1994). Representations in distributed cognitive tasks. Cognitive Science,
18, 87-122. Experiment 1B (“waitress and coffee cups”, a Tower of Hanoi turned upside down) was performed
much better because the constraint was made explicit in the environment: the environment does the thinking
for you. Much design now is designed to solve the problem for you.
Both the Hanoi and cup tasks are analogous (identical except for superficial features).
Information Processing System
• Working memory
• Constraint on planning steps: there is a limit to how much information we can hold in working memory at one time.
• Chess: an example of why everything can’t be planned. From any given position there are on average 35 possible moves. If a chess game lasts
100 moves, that is 35^100 possible move sequences. Impossible to calculate exhaustively.
• Long-term memory
• Prior experience: knowledge of solutions, operators and constraints.
• Expertise: experts don’t need to plan as much as novices.
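The chess figure is easy to verify with exact integer arithmetic: 35^100 is a number so large that exhaustive look-ahead is out of the question.

```python
# Back-of-envelope check of the lecture's chess figure: roughly 35 legal moves
# per position over a 100-move game gives 35**100 candidate move sequences.
search_space = 35 ** 100
print(len(str(search_space)))  # 155 digits: far beyond any exhaustive planner
```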
What is Planning: Search Using Heuristics
Means-ends analysis: “I’m here; what do I have to do to get there?” A hierarchical sequence of sub-goals. E.g., fix a car tyre: make the situation safe; remove the wheel; loosen
nuts; raise car; undo nuts; slide wheel off; replace wheel; lower car; tighten nuts, etc.
Operator selection: select the operator that maximizes the reduction of distance between the current and goal states, and set applying that operator as a ‘sub-goal’.
Means-ends analysis and the ToH: sequencing to achieve the goal.
A way of planning the sequence in which you will apply operators to achieve a goal: pick an operator that maximizes the reduction of the distance
between the current state and goal state, and set a sub-goal to apply that operator (it becomes the new problem). But this doesn’t work very well: it will encourage you to
move towards the goal state too quickly.
Other Heuristics: designed to make the space of possible solutions as small as possible so that you can search it.
• Hill-climbing (a variant of means-ends analysis): do the thing that makes the most progress;
no need to set sub-goals.
• Trial and error: do anything and see what happens.
• Heuristics for choice/sampling: anchoring, representativeness, etc.
Problems of Not Planning
The 9-ball problem: you have 9 balls, all the same, but one is a bit heavier. You have to use a balance scale
to find which one, but you can only use it twice. Typically people use the hill-climbing heuristic (“I want to
make as much progress as I can”): most weigh 4 vs 4, reasoning that if it balances, the heavy ball is the one not included. They are trying
to maximise progress, but it doesn’t work (this is a detour problem: you have to move away from the thing that
makes the most progress). It is best to weigh 3 vs 3 and then 1 against 1. A lot of people weigh 4 against 5, which is
completely irrational, as one side will definitely go down. Yet people who weigh 4 against 5 are more likely to
solve it.
The experience of failure means they are forced to plan: better performance because of a bad start.
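The 3-vs-3-then-1-vs-1 strategy can be checked by simulation (an illustrative sketch, not from the lecture): wherever the heavy ball hides, two weighings always identify it.

```python
def find_heavy(balls):
    """Find the heavy ball among 9 in exactly two balance weighings.

    balls: list of 9 weights. Weighing 1: triple vs triple. Weighing 2:
    single vs single within the implicated triple.
    """
    def weigh(left, right):
        a = sum(balls[i] for i in left)
        b = sum(balls[i] for i in right)
        return -1 if a > b else (1 if b > a else 0)  # which pan sinks

    groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    first = weigh(groups[0], groups[1])
    suspect = groups[0] if first == -1 else groups[1] if first == 1 else groups[2]
    second = weigh([suspect[0]], [suspect[1]])
    return suspect[0] if second == -1 else suspect[1] if second == 1 else suspect[2]

# Check the strategy works wherever the heavy ball is hidden:
for heavy in range(9):
    balls = [1.0] * 9
    balls[heavy] = 1.1
    assert find_heavy(balls) == heavy
```

Each weighing has three outcomes (left, right, balanced), so two weighings distinguish at most 3 × 3 = 9 possibilities, which is exactly why 3/3/3 works and the "maximise progress" 4-vs-4 split cannot.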
Essay writing plan: plan the macrostructure & microstructure. Every essay’s goal should be to
“change their mind”: will someone think differently after reading this?
2) Insight: a change in conceptual understanding that allows a solution to a problem to be
discovered (gaining insight) and to be repeated in the future.
Nine Dot Problem
• You need to draw 4 straight lines so that each dot has a line through it, without taking your pen off the page (try with 2 and 1 first).
“You need to think outside the box”: in order to solve this problem, you have to draw lines that extend outside the box. We
impose a structure (a square) and thus limit our search to within the square. Yet even when you tell people this, there is not much increase
in solving. Why is such a simple problem so difficult to solve?
• This is an example of a phenomenon that is simple to state but hard to solve. 1) Fixation (functional fixedness): being fixated on a stubborn
solution to the problem; 2) Impasse: running out of ideas before the ‘Aha’ moment; 3) Incubation: if you’re stuck, often the best thing to do is stop
trying to solve it, as your mind will subconsciously work on the problem for you.
• Gestalt accounts: perceptual limitations on the problem; the perceptual ‘whole’ limits moves to inside the square. We perceive a whole that is
greater than the sum of its parts, e.g. seeing the square, which limits our percept. (However, this doesn’t explain why participants still fail even when
the constraint is pointed out to them.)
The Importance of Insight (e.g. thinking outside the box to solve problems)
• Consciousness: insight seems magical, so do we control our own thinking? (automatic processing)
• Determinism: productive vs. reproductive thought; the extent to which we are able to create genuinely new ideas.
• Modularity: is insight a ‘special process’? fMRI studies claim particular brain regions (right hemisphere; parietal and frontal cortex) are uniquely responsible for
insight (the lecturer doesn’t believe them).
Three Theories of Insight
1) Representational change theory: Knoblich et al. (1999). Insight problems are hard because we impose prior knowledge; it’s what you
know that makes the problem hard.
2) The opposite theory, Criterion for Satisfactory Progress: MacGregor, Ormerod & Chronicle (2001). It’s what you do that makes the problem hard: people
try things, and if something doesn’t work they try something else.
3) Multiple factor theory: Kershaw & Ohlsson (2004). Everything matters, and insight problems are hard because all factors combine: a) perceptual
factors (Gestalt), b) knowledge factors, c) search factors. A limitation of these studies is that they change the problem space, giving participants part of the solution
and so making it easier.
Testing the Effects of Knowledge
• Testing representational change theory: matchstick arithmetic: move one matchstick to make the sum work. The data show the first type is much easier than the second,
e.g. turning a plus into a minus versus producing the tautology III = III = III, which most people who aren’t mathematicians would not think to do. The scope of prior knowledge imposes constraints on
problem solving and makes it difficult.
Testing the Criterion for Satisfactory Progress theory: the 9-dot problem with the first line given, either within the
square or extending outside a dot. Every other theory predicted the line extending outside would be the most
helpful; criterion theory argues the line within the square would be best for problem solving.
Criterion theory argues that people aren’t really using insight to go outside the square: they’re trying to
make progress and monitoring how much progress they’re making, and if they make none, that triggers a
search for alternatives.
This was proved right: more people solved it with the line within the square than with the other. People realize
they’re failing early and look for alternatives; typical late moves show people starting to go outside the square,
finding alternatives they can adapt.
Theories testing the effects of knowledge and strategy: the 8-coin problem. The task is to transform an arrangement of 8 coins so
that each coin touches exactly three others, moving only two coins. The insight: to solve this you need to
use 3 dimensions and stack the coins. The experiment tested how much a visual hint would help
participants solve it. On the criterion account, people are trying to make progress and assume
progress means moving a coin to where it touches others; when no such moves are available, people
experience criterion failure, and failing triggers a search for alternative ideas. In the moves-available condition,
it was argued, people make less progress because they do not fail early enough to experience criterion failure.
Verbal hints reduced the problem state space: first “the solution requires the creation of two groups of coins”,
and later “the solution requires the use of three dimensions”. The data show a small but non-significant
effect of the visual hint; the two-groups hint was much more useful in the no-moves-available visual hint
condition. Irrespective of the visual hint, performance was best in the no-moves-available conditions.
This shows that at different stages of problem solving both prior knowledge and search have an effect. Ultimately,
the search strategy determines performance; for a small, select sample visual hints help, but most
people need a hint on how to use the hint.
Enhancing Insight: Analogy
• Analogical learning is fundamental for learning, but it is rarely spontaneous: you have to be told to use it.
• Analogy and insight problem solving: Gick & Holyoak (1980), Expt. 4; the FORTRESS problem and the
RADIATION problem.
• They presented people with the story that a doctor needs to destroy a tumour with a ray, but the ray is
too strong and will kill the patient. They presented an analogous problem with a solution: you’re a general
attacking a fortress, and your army can’t march down to the fortress in force because the roads are mined, but you can send down
small groups. The general splits the army up into groups who synchronise watches to converge at the same
time. 99% of participants solved the problem with this analogy, as opposed to 25% without the analogy.
Analogy is a powerful mechanism that people don’t use.
• Analogy to solve the 9-ball problem (Ormerod, T. & MacGregor, J.N. (2017). Enabling spontaneous
analogy through heuristic change. Cognitive Psychology, 99, 1-16). Hill-climbing theory says people will
solve problems by making as much progress as they can. A 7-ball variation of this problem is easier,
as maximising progress just solves it. By telling some participants that each weighing costs 1 pound
and that they only have 8 pounds, people experience early criterion failure and become aware that
making progress too quickly is not the best idea. Results found that when people failed earlier, they solved
it more quickly: the 7-ball problem was easier than the 9-ball; adding money to the 7-ball problem made it harder; and adding money to the
9-ball problem made it easier.
• This was then used as training for a further analogous problem: a nuclear reactor is in danger of exploding. There are eight plutonium rods in the
reactor core, and one of them has a fault, causing excess heat generation. There is a device that can test for the fault. To operate, you load a number of
rods into each of two ‘bins’, and the device measures differences in heat production between the two bins. Unfortunately, you only have time for two
tests before the reactor turns critical. (Instead of balls you have rods here; if the 9-ball problem was done first with the money manipulation, people solved this faster.) So
people can show spontaneous analogy: they get implicit information about the dangers of maximisation, are more likely to avoid making as much progress as they
can, and so look for alternatives.
Enhancing Insight: Incubation (a delay before returning to a problem)
• Divergent thinking: any kind of incubation helps.
• Linguistic insight: facilitation only with low cognitive load (doing an easy task allows the unhelpful knowledge to become deactivated).
• Visual insight: facilitation only after a lengthy preparation period (you only get a positive effect if people fail; incubation allows time to seek
alternative solutions).
Enhancing Insight: Sleep. Sio, U.N., Monaghan, P., & Ormerod, T.C. (2013). Sleep on it, but only if it is difficult: Effects of sleep on problem solving.
• The study used the Remote Associates Test (RAT): What word goes with cottage, Swiss, cake? What word goes with board, mail, magic? It found that
for easy problems, incubation was the most effective aid to problem solving, whereas difficult problems benefited more from sleep. So different
mechanisms impact on the linguistic search of knowledge.
3) Proof
• Instructions are full of logical connectives (“a healthy pig has a healthy snout and a moist eye”) that we use to try to make an argument and
provide evidence.
• “A sick or cold pig”: “or” expresses uncertainty and is known as disjunction.
• Overview: types of inference; theories of inference; effects of structure and knowledge.
• Proof: evidence or argument establishing a fact or the truth of a statement = drawing an inference.
• Other tasks related to proof: explanation, diagnosis, prediction, imagination.
Types of Inference
• Deduction (specific inference): the act of drawing a specific inference or conclusion from general statements; the inference is the result
of this.
• Induction (general inference): contrasts with deduction; drawing a general inference from lots of specific instances (“all swans must be
white”). Deduction is more secure than induction: the problem with induction is that once you believe a generalisation, you’re unlikely to test it further. Karl
Popper argued science should proceed through falsification.
• Abduction: a mix of both, inference to the best explanation available.
• How does the mind undertake deduction? We use the structure of the sentence (form), the semantics (meaning), and statistical
information (e.g. frequency); we use all of them.
Inference as Logical Reasoning
• Assumption: individuals draw conclusions from premises by applying stored rules of logic to
derive a single valid inference.
• Types of inference:
• Classical syllogisms: All artists are beekeepers, some beekeepers are chemists,
therefore…?
• Conditional inferences: If I work hard, then I will get a pay rise. I didn’t get a pay
rise. Therefore…?
• Transitive inferences: John is faster than Mike, John is slower than Bill,
therefore…?
The Structural View
• Formal logic: the use of the syntactic structure of a sentence (its form) to determine the
validity of an argument.
• Piaget’s stage of ‘formal operational thinking’: being able to think about abstract ideas
marks a transition to formal logic.
• Braine & O’Brien (1994); Rips (1983): natural deduction.
• Direct inferences: when “p or q” and “not p” are held in memory, the conclusion
“q” follows.
• Indirect inferences: when “if p then q” and “not q” are held in memory, “not p” is
inferred by applying inference rules.
• Formal logic isn’t always how people reason.
The Statistical View of how humans reason

You can predict the probability of a hypothesis H given a set of circumstances E using Bayes' rule: P(H|E) = P(E|H) × P(H) / P(E).

Example: 1% of women under 40 have births complicated by Down syndrome. A blood test is 90% reliable. What is the probability that a positive test outcome for a woman under 40 will predict a Down syndrome birth?
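The example can be worked through numerically. A minimal sketch, assuming "90% reliable" means both a 90% true-positive rate (sensitivity) and a 90% true-negative rate (specificity):

```python
def posterior(prior, sensitivity, specificity):
    """P(hypothesis | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# 1% base rate; "90% reliable" read as 90% sensitivity and 90% specificity.
p = posterior(prior=0.01, sensitivity=0.90, specificity=0.90)
print(round(p, 3))  # 0.083
```

Despite the "90% reliable" test, the posterior probability is only about 8%, because the condition is rare; ignoring this is base-rate neglect.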
Information Gain view of how humans reason (Oaksford & Chater, 1994; 2007)

Information = reduction in uncertainty.

Reasoning is about expected information gain ("what if…") → utility.

Rarity: most events/things are rare compared with the number of instances where they do not occur, but they are believed more.

Example: a Ford Fiesta is common, but rare compared with the times when you don't see one.
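"Information = reduction in uncertainty" can be made concrete with Shannon entropy; this is standard information theory, not a formula given in these notes:

```python
import math

def entropy(probs):
    # Shannon entropy in bits: the expected surprisal of a distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uncertainty about a lopsided event (e.g. p = 0.9 that you see no Ford
# Fiesta on a given glance) is lower than about a 50/50 event, but the
# rare outcome itself carries a lot of surprisal (-log2 p).
print(entropy([0.5, 0.5]))   # 1.0 bit: maximal uncertainty
print(entropy([0.9, 0.1]))   # ~0.469 bits
print(-math.log2(0.1))       # ~3.32 bits of surprisal for the rare outcome
```

On this view, a reasoner seeking expected information gain should prefer tests whose outcomes reduce entropy the most.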
Dual Systems Accounts

We are capable of reasoning both with frequencies and statistics and with abstract thought. We have two systems for how we reason to make decisions.
4) Choice


Overview: Types of choice: 1) Normative models of decision-making, 2) Descriptive accounts of decision-making, 3) Phenomena of choices.
People choose if they are told to choose! Nisbett & Wilson (1977): rather than say "I don't know", participants chose which of several pairs of tights was better, even though they were all identical, and tried to give rational explanations for their choice.
Types of Choices

Reducing uncertainty: Diagnostic hypothesis testing, Predicting outcomes of choice. (evidence to justify decisions)

Choosing between alternatives: Rational vs. irrational decision-making.
Normative/Prescriptive Models
• The idea that we are rational and will make optimal selections; economics was built on this idea of rational choice.
• What gets optimised? Two normative models:
• Expected value: making a decision based on what gives the highest resource reward (e.g., monetary); value = objective value × probability.
• Expected utility: the highest psychological value (e.g., in reducing risk or uncertainty); utility = subjective utility × probability.
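The two normative models can be contrasted with a toy gamble; the square-root utility function below is an illustrative stand-in for any concave (risk-averse) subjective utility, not something specified in the notes:

```python
import math

def expected_value(outcomes):
    """Sum of objective value x probability."""
    return sum(v * p for v, p in outcomes)

def expected_utility(outcomes, u=math.sqrt):
    """Sum of subjective utility x probability; sqrt stands in for
    a concave (risk-averse) utility function."""
    return sum(u(v) * p for v, p in outcomes)

gamble = [(100, 0.5), (0, 0.5)]   # 50% chance of 100, else nothing
sure_thing = [(45, 1.0)]          # a certain 45

print(expected_value(gamble), expected_value(sure_thing))      # 50.0 45.0
print(expected_utility(gamble), expected_utility(sure_thing))  # 5.0 ~6.7
```

Expected value favours the gamble (50 > 45), while concave expected utility favours the sure thing, which is why the two models can prescribe different choices.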
Violations of Expected Utility Theory as a model of rational choice: certainty and framing

Framing makes a huge difference to the psychological influence on our choices.
Dual thinking: fast and slow.

An alternative account of how people make decisions: Prospect theory (Kahneman & Tversky, 1979), a descriptive theory. It describes how people actually come to decisions, not what they should do (descriptive, not normative). It models decision-making in two stages:

Stage 1: Editing: selecting desired outcomes against a reference point via heuristics ("rules of thumb"):
• Availability (using available information)
• Anchoring (finding a place to start from)
• Representativeness (using the method that is most familiar)

Stage 2: Evaluation: we evaluate the outcomes via a value judgement based on a calculation of anticipated utilities × probabilities.

Loss aversion: we are less worried about gains than we are about losses, and we prefer certainty.
• Faced with a risky choice leading to gains, individuals are risk-averse, preferring solutions that lead to a lower expected utility but with a higher certainty.
• Faced with a risky choice leading to losses, individuals are risk-seeking, preferring solutions that lead to a lower expected utility as long as they have the potential to avoid losses.

Probability weighting: people attribute excessive weight to events with low probabilities and insufficient weight to events with high probability. Rare events cause irrational panic.








Representativeness: people choose B even though it is not rational; A has to be true for B to be true, so why pick the less probable option? This is the conjunction fallacy: we think that things that co-occur are more likely to occur. A bias is the outcome of applying a heuristic in a particular way, based on experience.
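The conjunction fallacy violates a basic law of probability: a conjunction can never be more probable than either of its conjuncts. A quick simulated check (the attributes and probabilities below are arbitrary illustrations, in the spirit of Kahneman and Tversky's "Linda" problem):

```python
import random

random.seed(1)

# Simulate people with two independent binary attributes,
# A = "is a bank teller", B = "is a feminist".
people = [(random.random() < 0.3, random.random() < 0.6)
          for _ in range(10_000)]

p_a = sum(a for a, b in people) / len(people)
p_a_and_b = sum(a and b for a, b in people) / len(people)

# Whatever the joint distribution, the conjunction can never be more
# frequent than either conjunct alone; judging it more likely is the fallacy.
print(p_a_and_b <= p_a)  # True
```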
Effect of anchoring: on credit card statements, the minimum payment needed influences how much people repay. When choosing how much to repay, anything better than the minimum makes us feel better: the anchor is the reference point against which we make our decision.
Availability: the rational thing to do would be to find alternative theories that contradict your view, yet we tend to gather more information that agrees with our decision.
One study looked at how investigators investigate false benefit claims. When experts and students were asked to decide whether to investigate a claim given six pieces of information, experts were highly diagnostic when the case was complicated, but were much more likely to pursue a single idea when the case was easy.
So the availability heuristic shows that the strategy we adopt depends on difficulty: when the task is easy we are irrational; when it is hard we are much more rational, because we face a bigger risk.
Testing prospect theory: which class would you sign up for? Choices split roughly 50/50, but when told they have to drop one, people drop the class they initially preferred: a preference reversal when the task changes.
Which gamble do you like best? Typically people choose the first because they are risk-averse, but when selling they put a higher price on the other one: a preference reversal. Why choose something you know has a lower value? People switch between ownership and value; anchoring places the value.
1) Thinking: Short Summary of the book Thinking, Fast and Slow
Introduction
The book is about research into cognitive bias, prospect theory and happiness. Its aim is to provide a language for analysing errors of judgment. Kahneman describes two systems of thought and the biases that stem from them, such as framing effects or substitution. The overall conclusion is that people are too confident in human judgment: we assume certain things automatically without having thought them through carefully. These heuristic assumptions (thinking errors) lead to muddled thinking; the effects are named and listed below.
Part 1: Two Systems
Kahneman starts off with an explanation of two modes of thought, intuitive versus intentional thought, and the errors that derive from this dichotomy.
Chapter 1- The Two Systems
On every occasion your brain shifts between fast thinking (System 1) and slow thinking (System 2). System 1 is an intuitive response to your surroundings and sensory input based on mental
conventions both learned and natural, and cannot be turned off. System 2 however is a deliberate effortful thought process that normally runs in low priority mode, is able to make limited
computations, and monitors your behavior.
System 1 triggers System 2, while System 2 is able to program System 1 with a task set of following certain instructions. Conflicts may arise when System 2 programs System 1 but the tasks of both
systems are contradicting. System 1 operates on heuristics (mental shortcuts) that may not be accurate. System 2 requires effort evaluating those heuristics and is prone to error.
Chapter 3 - The Lazy Controller
System 2 has a low natural speed, and physical activity drains the ability for complex thought. The law of least effort: we have a propensity for intuitive thought, and coherent thinking requires discipline and self-control. When System 2 is busy, this leads to temptation: cognitive load weakens your self-control, unless you are in a state of flow of effortless deep concentration. Cognitive, emotional and physical effort all draw from the same energy source (glucose), hence a lack of energy makes people prone to let System 1 take over.
Chapter 4 - The Associative Machine
System 1 input from observing your environment triggers memories and physical responses; it subconsciously sparks associative and emotional responses to make sense of the world and provide a context for future events. The consequence of such a network of ideas is called priming
(Heuristic #1): associations in memory effectuate thoughts, actions and emotions. Reciprocal priming: for example, if you act calm and nice, you become even more calm and nice.
Chapter 5 - Cognitive Ease
Heuristic #2: Cognitive ease: things that are easier to compute, more familiar, and easier to read seem more true than things that require hard thought, are novel, or are hard to see. Judgements based on impressions of cognitive ease lead to illusions.
Illusions of Remembering: recognizing memory leads to a false sense of familiarity.
Illusions of Truth: when a statement is repeated often one accepts it as truth. A statement can be made more persuasive by maximizing legibility, using simple language, repetition, memorable
illustrations and easily pronounced sources.
Chapter 7 - A machine for jumping to conclusions
Heuristic #3: Confirmation bias: tendency to search for and find confirming evidence for a belief while overlooking counter examples. Jumping to conclusions is efficient if the conclusions are likely
to be correct and the costs of an occasional mistake acceptable, and if the jump saves much time and effort, according to System 1. When System 1 makes a mistake System 2 jumps in to slow us
down and consider alternative explanations. We are prone to over-estimate the probability of unlikely events (irrational fears) and accept uncritically every suggestion (credulity).
Heuristic #4: Halo effect: the tendency to like or dislike everything about a person, including things you have not observed. It enables exaggerated emotional coherence: first impressions colour later information. System 1 does not allow for absent information ("what you see is all there is") and has trouble staying objective. The resolution is to seek independent judgments of observations.
Part 2: Heuristics and biases
Chapter 11 - Anchors
Heuristic #5: Anchoring effect: subconscious phenomenon of making incorrect estimates due to previously heard quantities. When we are presented with a particular value (anchor) for an unknown
value we stay closer to that first value when actually estimating the unknown value. Anchoring evolves from an adjustment process (premature conclusion) by System 2 and a priming effect in
System 1. The anchoring effect can be measured by calculating the ratio of the differences between two anchors and estimates respectively.
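The measurement the chapter describes can be sketched as an anchoring index: the spread in estimates divided by the spread in anchors. The numbers below are illustrative only, not figures taken from the book:

```python
def anchoring_index(low_anchor, high_anchor, est_after_low, est_after_high):
    """Ratio of the spread in estimates to the spread in anchors:
    0 means no anchoring, 1 means estimates move one-for-one with anchors."""
    return (est_after_high - est_after_low) / (high_anchor - low_anchor)

# Illustrative numbers: two groups estimate an unknown quantity after
# seeing a low (180) or high (1200) anchor.
idx = anchoring_index(180, 1200, est_after_low=282, est_after_high=844)
print(idx)  # ~0.55: estimates moved about 55% of the way with the anchors
```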
Chapter 12 - The Science of Availability
Heuristic #6: Availability heuristic: under- or over-estimating the frequency of an event based on ease of retrieval rather than statistical calculation. Answers are easier to retrieve when we have had an emotional personal experience, and we are prone to give bigger answers to questions that are easier to retrieve. The reverse occurs when it is difficult to retrieve answers or instances to support our judgement: the first few instances often come easily, but fluency decreases steeply. Since the availability heuristic is about causes and expectations, System 1 handles it, but System 2 may reset expectations.
Chapter 14 - Tom W’s Specialty
Heuristic #7: Representativeness: the intuitive leap to make judgments based on how similar something is to something we like (similar to profiling or stereotyping). An automatic activity of System
1 is to activate any association with a stereotype, even in the face of contradictory odds. This representativeness heuristic tends to neglect common statistics and the quality of any evidence
provided. Enhanced System 2 activity aids to increase predictive accuracy to overcome the automatic process of System 1.
Chapter 16 - Causes Trump Statistics
Heuristic #8: Overlooking statistics: When given purely statistical data we generally make accurate inferences (statistical base rate). But when given statistical data and an individual story we tend
to go with the story rather than statistics (causal base rate). Statistical base rates are underweighted or neglected in the face of causal base rates. It may lead to stereotyping and profiling.
Chapter 17 - Regression to the Mean
Heuristic #9: Overlooking luck: attaching causal interpretations to the fluctuations of random processes. Random fluctuations in variables such as outliers typically regress to the mean, which means
they naturally approach the average. Regression toward a mean is the phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on its second
measurement. These regressions to the mean- statistical regularities - are explanations but not causes. Whenever the correlation between two scores is imperfect, there will be regression to the
mean. But our System 2 finds that difficult to accept, partly because of System 1’s insistent demand for causal interpretations.
Chapter 18 - Taming Intuitive Predictions
Heuristic #10: Intuitive predictions: Conclusions we draw with strong intuition (System 1) feed overconfidence. Just because a thing “feels right” (intuitive) does not make it right. We need System 2
to slow down and examine our intuition. The bias could be partially resolved by calculating the discrepancy between your intuition and a base rate and adjust your estimate depending on your
expectation of the correlation.
Part 4: Choices
Chapter 26 - Prospect Theory
People tend to be risk-averse when betting for extra gains as opposed to taking a lower sure gain; for losses this works the other way round. This is a flaw in Bernoulli's model: it is missing a reference point from which gains and losses are evaluated. Kahneman distinguishes three factors in his prospect theory:
- Evaluation is relative to a neutral reference point; the value of money is less important than the subjective experience of changes in one's wealth.
- We experience diminished sensitivity to changes in wealth: losing $100 hurts more if you start with $200 than if you start with $1,000.
- Loss aversion: when directly compared or weighted against each other, losses loom larger than gains.
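The three factors can be summarised in a prospect-theory value function. The functional form and parameter values below are the estimates reported by Tversky & Kahneman (1992), used here only for illustration:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains (diminishing
    sensitivity), convex and steeper for losses (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Losses loom larger than gains: a $100 loss hurts more than a $100 gain pleases.
print(value(100))                     # ~57.5
print(value(-100))                    # ~-129.5
print(abs(value(-100)) > value(100))  # True: loss aversion
```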
Chapter 28 - Bad Events
Heuristic #11: Loss aversion: people dislike losing more than they like winning, so they will work harder to avoid losses than to achieve gains. The brain processes threats and bad news faster. Attitudes towards loss and gain are asymmetrical.
Chapter 29 - Fourfold Pattern
When evaluating an object, people assign weights to its characteristics. Weights are related to the probabilities, but are not identical.
Heuristic #12: expectation principle: decision weights that people assign to outcomes are not identical to the probabilities of these outcomes. Two types:
- Possibility effect: When highly unlikely outcomes are weighted disproportionately more than they deserve (buying lottery tickets).
- Certainty effect: Outcomes that are almost certain are given less weight than their probability justifies (lawyers offering a less than perfect settlement before trial which would result in an “almost certain victory”).
The fourfold model is presented, which shows that people attach values to gains and losses rather than to wealth and the decision weights that they assign to outcomes are different from probabilities.
When people face a very bad option, they take desperate gambles. Risk taking of this kind often turns manageable failures into disasters. This is where businesses that are losing ground to a superior technology waste their remaining
assets in futile attempts to catch up.
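The possibility and certainty effects both fall out of a probability-weighting function. The functional form and γ = 0.61 below are from Tversky & Kahneman (1992), used here illustratively:

```python
def weight(p, gamma=0.61):
    """Probability-weighting function: overweights small probabilities
    and underweights large ones (an inverse-S curve)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(weight(0.01) > 0.01)  # True: possibility effect (buying lottery tickets)
print(weight(0.99) < 0.99)  # True: certainty effect (settling before trial)
```

Combined with the value function's gain/loss asymmetry, this weighting yields the fourfold pattern of risk attitudes described above.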
7) Memory: Advances in Memory Research.
Three Key Areas
1) Working Memory: Visuospatial bootstrapping.
2) Amnesia: impairments in LTM & working memory.
3) Encoding-retrieval match versus diagnostic value: when we encode memories, if there is a match between the encoding situation and the situation when we retrieve the information, then we are better at retrieving it.
1) Working Memory Systems: Visuospatial Bootstrapping Task
Working Memory: involved with keeping information in memory as we process it (a short-term memory system, with three component systems).

The working memory model comprises systems that help you remember; the original model did not include the episodic buffer.
1) Phonological Loop (sound properties)
• A limited-capacity slave system within the working memory model (extremely limited capacity).
• Involves specific processes and storage of verbal information.
• A limited phonological store: stores auditory information.
• Process performed: articulatory rehearsal (talking in your head) converts visual information (writing) into auditory information and maintains auditory information via rehearsal (vocalisation or sub-vocalisation).
• Once information is in spoken form it can be stored within the phonological store.
• Hearing/speaking → phonological store.
• Written information is converted for the phonological store through articulatory rehearsal (talking in your head).
• Articulatory rehearsal converts and maintains information as we process and work with it.
Evidence for the phonological loop comes from:
1. Phonological similarity effect (Baddeley, 1966): sequences of similar-sounding letters or words are remembered less well than sequences of letters or words that sound different, e.g. BPC is remembered less well than ZFQ. So information is stored phonologically; if it were stored visually, there should be a difference between words that look similar and words that don't, but there isn't.
2. Irrelevant speech effect (Colle & Welsh, 1976; Salamé & Baddeley, 1982): memory for visually presented consonants and digits is impaired by the simultaneous presentation of speech. Playing the sound of someone talking while letters are presented makes the visually presented letters much harder to remember. So verbal information is stored phonologically: if the visual input were not converted to a phonological form, the speech you hear simultaneously would not be able to interfere with your ability to process it.
3. Word length effect (Baddeley, Thomson & Buchanan, 1975): memory for short one-syllable words is much better than memory for long five-syllable words, so our capacity is influenced by a speech trait. Articulatory rehearsal has a limited capacity linked to the duration it takes to say something (like a recording tape: you can only fit a certain amount on it).
4. Articulatory suppression (Baddeley, Thomson & Buchanan, 1975): if you have to say the same word over and over again while trying to remember some digits, having to repeat the word impairs your ability to remember what is being presented to you visually. Once again, you must be converting the visually written information into an auditory form, otherwise there would be no interference from having to speak at the same time: saying a word over and over prevents you from sub-vocalising the visual information you are seeing.
Strong evidence, then, to support the idea of a phonological loop consisting of a limited-capacity phonological store and a process called articulatory rehearsal that converts visual verbal information into phonological information for storage.
2) Visuo-Spatial Sketchpad (or Scratchpad)
• A limited-capacity slave system for storing visual and spatial information.
3) Episodic Buffer (integration)
Evidence for the visuo-spatial sketchpad comes from:
1. Spatial: Corsi block-tapping test (Milner, 1971). The participant sees a series of randomly positioned blocks; the experimenter taps a sequence on those blocks, and the longer that sequence gets, the less able the participant is to accurately repeat it. So we have a limited-capacity system for spatial information (if it were not capacity-limited, the experimenter could tap really long sequences and we would still get them right).
2. Visual: Phillips matrix task (1974). Participants are given a matrix of squares that gets increasingly big; some of the squares are coloured black and some white, and the participant has to remember the pattern of black and white squares to recreate it. The bigger the matrix and the more complicated the pattern, the less able they are to reproduce it: a limited-capacity system for visual information.
3. Double dissociation between spatial and visual span in brain-damaged patients (Della Sala, Gray, Baddeley, Allamano & Wilson, 1999): some brain-damaged patients can perform spatial tasks but not visual tasks, and vice versa. This shows there are distinct systems for processing visual and spatial information within the visuo-spatial sketchpad.
Strong evidence, then, for limited-capacity systems within working memory that deal with spatial information and visual information.

The episodic buffer:
• Limited-capacity storage for "chunks" or episodes.
• Integrates information from different modalities and LTM (it integrates phonological information with visual and spatial information and also with long-term storage). It was added to the model later for a number of reasons:
1. Articulatory suppression (an effect supporting the phonological loop) reduces but does not eliminate digit span for visual information (Baddeley, Lewis & Vallar, 1984). When we stop people from being able to sub-vocalise, we reduce their memory (digit span falls to about 5) but do not eliminate it completely, so some other processing must be going on somewhere else that allows us to remember the information we do remember.
2. Some amnesiacs (with damage to long-term memory) show immediate recall for complex information beyond the limited capacity of the existing storage systems (e.g. prose). It cannot be long-term memory (they have amnesia) or the phonological loop (it is too limited) processing that information, so we need another system.
3. It is unclear from the tripartite model (the original model) how information from different modalities and from LTM is bound and stored (e.g. Morey, 2009). Imagine a spider twerking: you have never seen the thing you imagined, but you managed to integrate knowledge from LTM (what a spider is and what twerking is), follow a verbal instruction (you read the sentence, or heard the instruction to visualise it, and converted it to phonological information), AND come up with visual imagery. Combining LTM + verbal information + visual-spatial information was done in short-term/working memory, and the episodic buffer explains how all that information is integrated.
4. It is unclear how rehearsal operates outside of articulatory rehearsal: how do you rehearse something visual, especially if it has many properties?
5. LTM has been found to influence working memory (e.g. Brown, Forbes, & McConnell, 2006; Baddeley, Hitch, & Allen, 2009). Information like sentence structure can enhance our working memory capacity: in situations where we can draw on LTM systems our working memory is better, so there is some kind of integration going on between LTM and working memory. The episodic buffer explains that.
Exploring the Episodic Buffer Experimentally
• Visuospatial Bootstrapping (Darling & Havelka, 2010)
• Participants did a digit span task: presented with a series of digits, they had to remember the sequence. The sequence was presented in one of three ways:
• Control: each number simply flashed up, with the next number appearing in the same location.
• Linear: the locations for the numbers were arranged horizontally across the screen; this condition is a verbal memory task that also includes spatial information, so it combines a verbal task and a spatial task.
• Typical: the numbers were arranged in the keypad layout we all know from phones; this combines verbal information (the digit span task) + spatial information + LTM.
Results: digit span was superior in the typical condition, when people could use verbal information, LTM information and spatial information together. This shows we are no longer limited by the capacity of the phonological loop or of the visuo-spatial sketchpad: another system with a greater capacity is combining all this information.
What the Visuospatial Bootstrapping Study Tells Us About Episodic Memory (1)
Visuospatial bootstrapping supports a multimodal system (it combines verbal & spatial processing). Evidence from Allen et al. (2015):
1. Recall in the control condition was impaired to a greater extent than recall in the keypad condition when articulatory suppression was used at the same time. When we stop someone from using the phonological loop, it has a severe effect in the control condition, which uses just verbal information, but less effect in the typical (keypad) condition, so another system is involved.
2. Recall in the keypad condition was no longer superior when a spatial task was performed concurrently. When we prevent people from using spatial information, the conditions even out, so the task must involve spatial working memory as well as other processes (an integration of information, not just a verbal/digit span task).
Visuospatial Bootstrapping and Episodic Memory (2)
Visuospatial bootstrapping supports a system that is separate from central executive processes. The central executive in working memory deals with switching tasks, allocating resources, focusing attention, etc. One argument could be that we do the keypad task better not via the phonological loop or visuo-spatial sketchpad but via the central executive; the evidence shows this is not the case, i.e. a separate system (the episodic buffer) is needed:
1. Bootstrapping does not decline with age; however, central executive performance does decline with age (Calia et al., 2015). This is evidence of two systems: the central executive and the episodic buffer, which bootstrapping relies on.
2. Visuospatial bootstrapping emerges at the same age as the episodic buffer matures (Darling et al., 2014): children at 9 years old can perform the visuospatial task, showing superior performance in the keypad condition.
Visuospatial bootstrapping also supports a system that links working memory & LTM:
3. The visuospatial bootstrapping effect is found in patients with anterograde amnesia, who cannot acquire new memories (Race et al., 2015). The superiority of the keypad condition in these patients shows some preserved link between LTM and working memory even when there is damage that prevents information from working memory entering LTM, so there is a separate system that maintains the link (the episodic buffer).
2) Amnesia: Impairment in LTM & Working Memory
• Sub-systems form the basis of LTM: the idea of fractionation. LTM is divided into a number of different sub-systems that deal with different types of information in memory.
• We distinguish between declarative (explicit) memory and non-declarative (implicit) memory: material we are not conscious of and cannot verbalise or recall to consciousness purposely or easily.
• Declarative → semantic (facts and knowledge) and episodic (memories for events that have occurred in our lifetimes).
• Non-declarative → procedural memory (knowledge of doing things, e.g. riding a bike), priming (exposure to something can unconsciously alter later performance), associative learning (Pavlov's dogs), and non-associative learning (automatic reflexes).
• The aim is to make sense of the patterns of impairment seen in amnesia and whether these patterns support the idea of LTM as a series of sub-systems.
Amnesia: distinguish retrograde from anterograde.
Temporal gradient in amnesia: within retrograde amnesia we see a temporal gradient. Retrograde amnesia often presents with better memory for older memories (remote memories, e.g. from childhood and early adulthood) than for more recent memories (Ribot, 1882): the further back a memory lies from the onset of the amnesia, the better preserved it is. This is called the Ribot effect. Why does this happen?
The evidence is not conclusive, but there are two conflicting theories:
1. Standard model of consolidation: older memories are strengthened by secondary consolidation over the years, but newer memories do not get the chance to undergo secondary consolidation when the hippocampus sustains damage, so they are lost. On this view, retrograde amnesia is really a form of anterograde amnesia: an inability to consolidate and reconsolidate memories. Memories that have already consolidated can be recalled; those that have not cannot, which produces the temporal gradient leading up to the onset of the amnesia.
2. Multiple trace model: really old memories are converted to semantic memories (they are no longer episodic memories remembered like present events; they are remembered more as facts). Because of this conversion they are no longer dependent on the hippocampus. Newer memories, however, are episodic in nature and depend on the hippocampus for retrieval. Amnesia often involves hippocampal damage, so when it is damaged we can no longer retrieve the episodic memories that depend on it.
(Now, for the list of words you were asked to remember at the beginning of the lecture: identify which of these words you saw at the beginning.)
Remember-Know paradigm: for each of the words in the list you've written, write an R, K or G:
• R = Remember: you clearly recollect the word and memories associated with it, e.g. "the word reminded me of ..." (linked to episodic memory).
• K = Know: the word seems familiar to you, but you don't have any distinct recollections of it (linked to semantic memory).
• G = Guess: you have no real idea and are just guessing.
This paradigm produces a very robust effect: across different types of task we see a differentiation between when people produce Remember responses and when they tend to produce Know responses. Remember responses, with clear recollection, correspond to episodic memory; Know responses, familiar but with no specific recollection, correspond to semantic memory.
This is important for understanding the pattern of impairment to LTM.
• The patterns tend to support the idea of fractionation and sub-systems in LTM.
Preserved
• Procedural memory tends to be preserved on tasks, e.g. the famous amnesia patient HM could do mirror-drawing tasks and learn to improve at mirror drawing over time: he could acquire some procedural memories.
• We also see preservation of repetition priming: repeatedly exposing someone to a particular stimulus enhances later performance.
• We tend to see preservation of recognition (in the Remember-Know task this would be the Know responses: a feeling of familiarity but no specific recollection).
• There also tends to be preservation of old semantic memories.
Impaired
• Episodic memory is distinctly impaired.
• There is a recall detriment where you have to freely recall without a prompt.
• There is a greater detriment to recognition responses that correspond to Remember responses within the remember-know paradigm, i.e. those associated with a specific recollection. This makes sense if we are seeing impairment to episodic memory.
BUT
• There is debate over whether people with amnesia can acquire new semantic memories. For amnesia that develops incrementally over a period of time, such as Korsakoff syndrome (linked to alcoholism), it has been found that people are able to acquire new semantic memories; however, that ability seems much more impaired than in healthy people.
• There is also controversy over the nature, extent and cause of the deficit in recognition memory: perhaps the deficit is specific to the episodic/semantic distinction, but the data conflict.
• So does amnesia distinguish between explicit and implicit memory? Explicit memories are impaired but implicit memories are intact; but then how would we explain preserved semantic memories unless they are implicit in nature? Or does amnesia really distinguish between episodic and semantic memories?
Recall Vs Recognition

We know recall deficits are greater than recognition deficits (e.g. Baddeley et al., 2001): someone with amnesia is less able to recall information than to recognise it.
But there is conflicting evidence surrounding whether:
- the hippocampus is important in recollection: some researchers argue that in tasks like the Remember-Know paradigm, recollection depends on the hippocampus (episodic memories), so when it is impaired we see a deficit in these kinds of recognition responses (Remember responses);
- para-hippocampal regions are important in familiarity-based recognition: others argue that these regions support familiarity-based recognition, so Know-type responses might be preserved in amnesia if the damage is specific to the hippocampus.
(However, it always depends on the extent of the damage in the patient; we might see impairment to both kinds of recognition responses.)
Yonelinas et al. (2002): in the remember-know paradigm, patients with amnesia show fewer Remember responses for recognised items than controls (consistent with a distinction between an impaired episodic system and an intact semantic system). Healthy participants show a split between Remember and Know responses, whereas people with amnesia show a higher proportion of Know responses, so perhaps episodic memory is impaired while semantic memory remains intact.
Impairments to Working memory
• Amnesia tends to impair LTM, while working memory remains intact.
• But there is some conflicting evidence over whether there is impairment to working memory, specifically in:
- some spatial abilities;
- relational binding (between an object and its context).
(When we perceive objects they have a number of characteristics: colour, shape, location; binding combines all those characteristics. When we acquire episodic memories we encode the characteristics of the object together with the context, so far more binding is going on across much more complex scenes.)
• So the impairment in amnesia may lie in the binding of object to context, which is important for episodic memory.
• Apparent impairments to working memory may simply be due to tasks exceeding working-memory capacity (forcing reliance on LTM); because LTM is impaired, patients cannot perform these tasks.
• Impairments may relate only to tasks involving highly precise binding (the hippocampus may be involved in high-resolution binding).
• Why does the pattern of memory impairment remain murky? Because the methodology is not consistent across studies, and there are different causal factors creating amnesia, different types of brain damage, and individual differences between brains.
3) Memory Encoding: how we acquire memories and encode them into long-term memory.
Theories of Memory Encoding
Memory retrieval can be enhanced in several different ways:
1. By more meaningful encoding (levels-of-processing theory, Craik & Lockhart, 1972): if we elaborate material with deeper detail, we are more likely to remember it (active learning in memory).
2. When the encoding context matches the retrieval context (encoding specificity principle, e.g. Godden & Baddeley, 1975): participants had to learn stimuli either underwater or on land and to retrieve them either underwater or on land; when the context in which information was learned matched the context in which it was retrieved, retrieval was enhanced. Related to the encoding-retrieval match is state-dependent encoding:
3. When a person's internal state during encoding matches their internal state during retrieval (state-dependent encoding, e.g. Eich & Metcalfe, 1989): mood, arousal level, intoxication, etc.; so revise in the same kind of state you will be in when you sit the exam.
4. When the task used for encoding matches the task used for retrieval (transfer-appropriate processing; Bransford, Franks, Morris & Stein, 1979): if we use similar tasks during encoding as during retrieval, we see enhanced retrieval; e.g. when you revise, use tasks similar to those you will face in the exam.
(High diagnostic value: cues that help us discriminate between the correct answer and other possible contenders for the correct answer.)
Responses to the retrieval findings: retrieval is more about diagnostic value than about the encoding-retrieval match.
Absolute versus diagnostic value
Nairne (2002): argued it is not the similarity between encoding context and retrieval context that is important in enhancing retrieval, but the presence of features with diagnostic value, i.e. features that can help to distinguish the correct answer from other possible contenders. E.g. if features of the retrieval context fit only the answer from the encoding context, we retrieve the answer better.
Supported by Goh & Lu (2012): participants studied pairs of words (cue and target, e.g. ear-CAT); in the test phase they had to recall the matched target when given the cue (ear?), or could be given additional clues. These clues could have high diagnostic value (meow?) or low diagnostic value (animal?); high-diagnostic-value clues produced greater retrieval. When the manipulated encoding-retrieval match enhanced diagnostic value, it also enhanced retrieval (so it wasn't the match per se that was important in enhancing retrieval, it was the diagnostic value that the context provided).


Participants studied cue-TARGET pairs during the learning phase and were presented with cues during the test phase. Greater diagnostic value of cues led to higher retrieval. When the encoding-retrieval match enhanced diagnostic value, it also enhanced retrieval.
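The diagnostic-value idea above can be illustrated with a toy sketch (hypothetical stimuli, not the actual materials or scoring from the study): a cue is more diagnostic the fewer studied targets it is compatible with.

```python
# Toy illustration of cue diagnostic value (hypothetical stimuli, not the
# actual experimental materials). A cue has high diagnostic value when it
# is compatible with only one studied target, and low diagnostic value
# when it "subsumes" many of them.
studied = {
    "CAT": {"ear", "meow", "animal"},
    "DOG": {"bone", "bark", "animal"},
    "COW": {"field", "moo", "animal"},
}

def candidates(cue: str) -> list[str]:
    """Return the studied targets a cue is compatible with."""
    return [target for target, features in studied.items() if cue in features]

print(candidates("meow"))    # ['CAT'] -> one candidate: high diagnostic value
print(candidates("animal"))  # ['CAT', 'DOG', 'COW'] -> low diagnostic value
```

Fewer candidates means the cue discriminates the correct answer from its contenders, which on Nairne's account is what drives retrieval, not the raw similarity between study and test contexts.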
Memory: two systems ("cogs") that work together for processing and normal life: LTM & STM
• Working memory/short-term memory: the memory system responsible for holding onto a small amount of information that has recently been taken in from the environment.
• It has limited capacity both in how long it can hold onto information and in how much it can hold; these limits influence how much information can be passed on to other cognitive systems for additional processing, e.g. LTM.
• The ability to simultaneously store and process information (keeping information just heard active whilst processing additional incoming information): "When you go to the shop… get milk… and chocolate…" (store & process).
• E.g. keeping a phone number you just heard in memory long enough to write it down.
• In contrast, LTM has a large capacity and contains memories of experiences and information accumulated throughout your lifetime.
• 1956, George Miller: observed that we can hold only a limited number of items in short-term memory, specifically 7 ± 2. "Item" refers to a chunk: a memory unit of things related to each other, e.g. a sequence of 7 numbers, or 7 such chunks.
• Two methods to investigate STM: 1) The Brown/Peterson & Peterson technique: participants are shown a sequence of letters to remember and then asked to count through a number sequence (e.g. 20, 22, 24) during a delay; the counting activity prevents them from rehearsing the letters (rehearsal means repeating the items silently). They are then asked to recall the letters originally shown. 2) The serial position effect refers to the U-shaped relationship between a word's position in a list and its probability of recall: a recency effect, better recall for items at the end of the list (because those items are still in STM at the time of recall), and a strong primacy effect, better recall for items at the beginning of the list (they don't need to compete with earlier items, and people rehearse them more frequently).
• Semantics (the meaning of words & sentences) also influences STM: Wickens (1976) showed that we have trouble remembering new items because of proactive interference (previous items interfering); however, if we shift the semantic category of the new item (e.g. to a shape), we experience a release from PI. The results show that when the new item is less related to earlier ones, it has less chance of being interfered with and is better remembered.
• STM depends on chunking strategies, e.g. grouping items by word meaning.
• An early model of memory was the Atkinson-Shiffrin model: memory involves a sequence of separate steps, with information transferred from one step to the next. 1) Information enters sensory memory, which records information from each of the senses; the model proposes information is stored in the sensory system for about 2 seconds and some passes to short-term memory. 2) According to this model, only a fraction of this information passes on to LTM, which has an enormous capacity. The model also proposed control processes: intentional strategies, like rehearsal, that help memory.
• Shift away from "STM" to "working memory": the focus moves from a limited-capacity store that holds information briefly to the brief, immediate memory for the limited amount of material you are actively processing. Working memory keeps items active and coordinates with your ongoing mental activities; it doesn't just store information, it actively works with that information.
• Working memory approach: Baddeley proposed that working memory is not unitary; it has multiple components, and it holds both new material and old material retrieved from LTM storage: the Phonological Loop, the Visuospatial Sketchpad, and the Episodic Buffer.
• Evidence for separate systems in working memory: spatial and rehearsal tasks don't seem to interfere with each other.
• Phonological Loop: stores information in terms of sounds; memory errors can be traced to acoustic confusion, where we confuse similar-sounding stimuli. People make more errors remembering a sequence of similar-sounding letters like C, B, V than distinct-sounding letters like Q, X, O: evidence that we convert visual letters into sound-based codes in order to rehearse them; our "inner voice" is used to rehearse what we read or hear.
• Information you can later recall has been passed from the phonological loop to LTM; you also use the phonological loop during self-instruction to remind yourself of something, and when you learn new words.
• Neuroscience research on the phonological loop: it activates part of the frontal lobe and part of the temporal lobe (left hemisphere). A TMS study targeted the left frontal lobe (activated when you rehearse verbal material) and the left parietal lobe (stores acoustic information): TMS to either region had little impact on tasks involving simple sentences, but produced many errors with long, complex sentences, so both regions are involved in the rehearsal of complex, lengthy sentences.
• Visuospatial Sketchpad: processes visual and spatial information (picture the details of an object or scene you are trying to recall). This type of working memory allows you to picture what you are remembering and is also limited in capacity: it is difficult to process separate spatial and visual demands at the same time (e.g. picturing a football game while your car drifts to the side of the road). (It is also difficult to test, because you can't stop participants from using their phonological loop instead.)
Central Executive (an executive supervisor in an organization) integrates information from the phonological loop, sketchpad and episodic buffer
and LTM, also plays a role in focusing attention, selecting strategies, transforming information, and coordinating behaviour, switching between tasks,
(but does not store information)
Episodic Buffer a temporary storehouse that can combine information from PL, VS, and LTM.  needed in model because the central executive
that does this doesn’t store information integrated information from different modalities. (e.g. I was rude to my friend, this has happened before in
the past, remember her facial expression. Etc) also has a limited capacity.
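Miller's chunking idea above can be sketched in a few lines (an illustrative toy with a made-up phone number): regrouping a string of digits into a few chunks brings the number of items to hold well under the 7 ± 2 limit.

```python
# Illustrative sketch of chunking (made-up phone number, not from the notes):
# 11 separate digits exceed Miller's 7 +/- 2 item limit, but regrouping them
# into 3 chunks makes the sequence easy to hold in short-term memory.
def chunk(digits: str, sizes: list[int]) -> list[str]:
    """Split a digit string into consecutive chunks of the given sizes."""
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return chunks

number = "07700900123"                 # 11 items if stored digit by digit
print(chunk(number, [5, 3, 3]))        # ['07700', '900', '123'] -> 3 items
```

Each chunk now counts as one "item", which is why phone numbers, postcodes and card numbers are conventionally written in groups.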
Long Term Memory: high capacity storage system that contains information for experiences and information that you have accumulated throughout your
lifetime.
• Episodic memories: memory of events that happened to you personally.
• Semantic memory: facts.
• Procedural memory: e.g. how to ride a bike.
• Encoding (processing information and representing it in your memory); retrieval (locating and accessing that information).
• Encoding specificity principle: recall is better if the context during retrieval is similar to the context during encoding.
• Retrograde amnesia: loss of memory for events that occurred prior to the damage.
• Anterograde amnesia: loss of memory for events after the damage.
Reading:
1) Visuospatial Bootstrapping: When Visuospatial and Verbal Memory Work Together - Darling
2) Classic and Recent Advances in Understanding Amnesia - Richard J. Allen
3) Testing the Myth of the Encoding-Retrieval Match - Winston Goh
The name given to a phenomenon whereby performance on visually presented verbal serial-recall tasks is better when stimuli are presented in a spatial array than in a single location. This implies communication between the systems involved in STM for verbal and visual information, alongside a connection to LTM: evidence for the episodic buffer.
Neurological amnesia has been and remains the focus of intense study, motivated by the drive to understand typical and atypical
memory function and the underlying brain basis that is involved. There is now a consensus that amnesia associated with
hippocampal (and, in many cases, broader medial temporal lobe) damage results in deficits in episodic memory, delayed recall,
and recollective experience. However, debate continues regarding the patterns of preservation and impairment across a range of
abilities, including semantic memory and learning, delayed recognition, working memory, and imagination. This brief review
highlights some of the influential and recent advances in these debates and what they may tell us about the amnesic condition and
hippocampal function.
Abstract: The view that successful memory performance depends importantly on the extent to which there is a match between the encoding and retrieval conditions is commonplace in memory research. However, Nairne (Memory, 10, 389–395, 2002) proposed that this idea about trace–cue compatibility being the driving force behind memory retention is a myth, because one cannot make unequivocal predictions about performance by appealing to the encoding–retrieval match. What matters instead is the relative diagnostic value of the match, and not the absolute match. Three experiments were carried out in which participants memorised word pairs and tried to recall target words when given retrieval cues. The diagnostic value of the cue was varied by manipulating the extent to which the cues subsumed other memorised words and the level of the encoding–retrieval match. The results supported Nairne's (Memory, 10, 389–395, 2002) assertion that the diagnostic value of retrieval cues is a better predictor of memory performance than the absolute encoding–retrieval match.