THEORIES OF PERCEPTION

In order to receive information from the environment we are equipped with sense organs eg eye,
ear, nose. Each sense organ is part of a sensory system which receives sensory inputs and
transmits sensory information to the brain. A particular problem for psychologists is to explain
the process by which the physical energy received by sense organs forms the basis of perceptual
experience. Sensory inputs are somehow converted into perceptions of desks and computers,
flowers and buildings, cars and planes; into sights, sounds, smells, taste and touch experiences.
A major theoretical issue on which psychologists are divided is the extent to which perception
relies directly on the information present in the stimulus. Some argue that perceptual processes
are not direct, but depend on the perceiver's expectations and previous knowledge as well as the
information available in the stimulus itself. This controversy is discussed with respect to Gibson
(1966) who has proposed a direct theory of perception which is a 'bottom-up' theory, and
Gregory (1970) who has proposed a constructivist (indirect) theory of perception which is a 'top-down' theory.
EXPLAINING PERCEPTION - A TOP-DOWN APPROACH
Helmholtz (1821-1894) is considered one of the founders of perceptual research. He argued that
between sensations and our conscious perception of the real world there must be intermediate
processes. Such processes would be, for example, 'inferential thinking' - which allows us to go
beyond the evidence of the senses (these inferences are at an unconscious level). Thus Helmholtz
was an early Constructivist who believed perception is more than direct registration of
sensations, but that other events intervene between stimulation and experience.
An early illustration that supports the idea of perceptions as modifiable constructions, rather than
direct responses to the pattern of stimulation, is the 'Ames Room'. This room is of an irregular
shape, with a receding rear wall, and is decorated in a special manner.
Diagram: the Ames Room seen from a viewing point at the front peephole; the corners of the rear wall are labelled A, B and C.
The true wall, AC on the diagram, is decorated so as to appear to be in the position AB. Viewed from the front peephole with one eye, the room appears to be rectangular, but a person moving from A to C will appear to shrink.
One explanation for the Ames Room illusion is that the perceiver is in a situation of having to
choose between two beliefs built up through experience - (a) rooms that look rectangular and
normal, usually are just that, (b) people are usually of 'average' size. Most observers choose (a)
and therefore consider the people to be 'odd'.
The interesting thing about the Ames Room illusion is that it does not disappear when you learn
the true shape of the room.
PERCEPTIONS AS HYPOTHESES - R L GREGORY (B 1923)
Gregory proposes that perceiving is an activity resembling hypothesis formation and testing. He
says that signals received by the sensory receptors trigger neural events, and appropriate
knowledge interacts with these inputs to enable us to make sense of the world.
Gregory has presented evidence in support of his theory, some of which is outlined below:
1. 'Perception allows behaviour to be generally appropriate to non-sensed object
characteristics'.
For example, we respond to certain objects as though they are doors even though we can only see
a long narrow rectangle as the door is ajar.
How do we know from this stimulus alone that this is a door? Gregory argues that surely, to do this, we must be using more than just sensory inputs.
2. 'Perceptions can be ambiguous'
The Necker cube is a good example of this. When you stare at the crosses on the cube the
orientation can suddenly change, or 'flip'. It becomes unstable, and a single physical pattern can
produce two perceptions.
3. 'Highly unlikely objects tend to be mistaken for likely objects'.
Gregory has demonstrated this with a hollow mask of a face. Such a mask is generally seen as
normal, even when one knows and feels the real mask. There seems to be an overwhelming need
to reconstruct the face, similar to Helmholtz's description of 'unconscious inference'.
What we have seen so far would seem to confirm that indeed we do interpret the information that
we receive; in other words, perception is a top-down process. However:
EVALUATION OF THE TOP-DOWN APPROACH TO PERCEPTION
1. The Nature of Perceptual Hypotheses
If perceptions make use of hypothesis testing the question can be asked 'what kind of hypotheses
are they?' Scientists modify a hypothesis according to the support they find for it so are we as
perceivers also able to modify our hypotheses? In some cases it would seem the answer is yes.
For example, look at the figure below:
This probably looks like a random arrangement of black shapes. In fact there is a hidden face in
there, can you see it? The face is looking straight ahead and is in the top half of the picture in the
centre. Now can you see it? The figure is strongly lit from the side and has long hair and a beard.
Once the face is discovered, very rapid perceptual learning takes place and the ambiguous
picture now obviously contains a face each time we look at it. We have learned to perceive the
stimulus in a different way.
Although in some cases, as in the ambiguous face picture, there is a direct relationship between
modifying hypotheses and perception, in other cases this is not so evident. For example, illusions
persist even when we have full knowledge of them (e.g. the inverted face, Gregory 1974). One
would expect that the knowledge we have learned (from, say, touching the face and confirming
that it is not 'normal') would modify our hypotheses in an adaptive manner. The current
hypothesis testing theories cannot explain this lack of a relationship between learning and
perception.
2. Perceptual Development
A perplexing question for the constructivists who propose perception is essentially top-down in
nature is 'how can the neonate ever perceive?' If we all have to construct our own worlds based
on past experiences why are our perceptions so similar, even across cultures? Relying on
individual constructs for making sense of the world makes perception a very individual and
chancy process.
The constructivist approach stresses the role of knowledge in perception and therefore is against
the nativist approach to perceptual development. However, a substantial body of evidence has
been accrued favouring the nativist approach, for example:
Newborn infants show shape constancy (Slater & Morison, 1985); they prefer their mother's
voice to other voices (De Casper & Fifer, 1980); and it has been established that they prefer
normal features to scrambled features as early as 5 minutes after birth.
3. Sensory Evidence
Perhaps the major criticism of the constructivists is that they have underestimated the richness of
sensory evidence available to perceivers in the real world (as opposed to the laboratory where
much of the constructivists' evidence has come from).
Constructivists like Gregory frequently use the example of size constancy to support their
explanations. That is, we correctly perceive the size of an object even though the retinal image of
an object shrinks as the object recedes. They propose that sensory evidence from other sources
must be available for us to be able to do this.
However, in the real world, retinal images are rarely seen in isolation (as is possible in the
laboratory). There is a rich array of sensory information including other objects, background, the
distant horizon and movement. This rich source of sensory information is important to the
second approach to explaining perception that we will examine, namely the direct approach to
perception as proposed by Gibson.
A DIRECT APPROACH TO PERCEPTION - GIBSON 1966
Gibson claimed that perception is, in an important sense, direct. He worked during World War II
on problems of pilot selection and testing, and it was in this early work on aviation that he
discovered what he called 'optic flow patterns'. When pilots approach a landing strip, the point
towards which the pilot is moving appears motionless, with the rest of the visual environment
apparently moving away from that point.
The outflow of the optic array in a landing glide.
According to Gibson such optic flow patterns can provide pilots with unambiguous information
about their direction, speed and altitude.
Three important components of Gibson's Theory are 1. Optic Flow Patterns; 2. Invariant
Features; and 3. Affordances. These are now discussed.
1. Light and the Environment - Optic Flow Patterns
Changes in the flow of the optic array contain important information about what type of
movement is taking place. For example:
1. Any flow in the optic array means that the perceiver is moving; if there is no flow, the perceiver is static.
2. The flow of the optic array will either be coming from a particular point or moving towards one. The centre of that movement indicates the direction in which the perceiver is moving. If the flow seems to be coming out from a particular point, the perceiver is moving towards that point; if the flow seems to be moving towards that point, the perceiver is moving away from it. (The landing-glide figure above shows movement towards a point; the figure below shows movement away.)
Diagram: the optic flow pattern for a person looking out of the back of a train.
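To make the 'flow radiating from a point means you are approaching it' idea concrete, here is a minimal sketch (not Gibson's own formalisation; the function name, coordinates and flow values are invented for illustration) that classifies a set of flow vectors as expansion or contraction about a candidate point:

# Hedged sketch: classifying optic flow as expansion (moving towards a point)
# or contraction (moving away). All values are made up for illustration.

def flow_direction(points, vectors, focus):
    """Return 'towards' if flow radiates out from `focus`, 'away' if it converges."""
    score = 0.0
    for (x, y), (vx, vy) in zip(points, vectors):
        # Direction from the candidate focus of expansion to this point
        dx, dy = x - focus[0], y - focus[1]
        # Positive dot product = vector points outwards (expansion)
        score += dx * vx + dy * vy
    return "towards" if score > 0 else "away"

# Flow sampled at four image positions, all pointing away from (0, 0):
points = [(1, 0), (0, 1), (-1, 0), (0, -1)]
vectors = [(2, 0), (0, 2), (-2, 0), (0, -2)]
print(flow_direction(points, vectors, focus=(0, 0)))   # -> towards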
2. The role of Invariants in perception
We rarely see a static view of an object or scene. When we move our head and eyes or walk
around our environment, things move in and out of our viewing fields. Textures expand as you
approach an object and contract as you move away. There is a pattern or structure available in
such texture gradients which provides a source of information about the environment. This flow
of texture is INVARIANT, ie it always occurs in the same way as we move around our
environment and, according to Gibson, is an important direct cue to depth. Two good examples
of invariants are texture and linear perspective.
Texture gradient gives the appearance of depth. Linear perspective: parallel lines, e.g. railway tracks, appear to converge as they recede into the distance.
3. Affordances
Affordances are, in short, cues in the environment that aid perception. Important cues in the environment include:
OPTICAL ARRAY: the patterns of light that reach the eye from the environment.
RELATIVE BRIGHTNESS: objects with brighter, clearer images are perceived as closer.
TEXTURE GRADIENT: the grain of texture gets smaller as the object recedes, giving the impression of surfaces receding into the distance.
RELATIVE SIZE: when an object moves further away from the eye the image gets smaller; objects with smaller images are seen as more distant.
SUPERIMPOSITION: if the image of one object blocks the image of another, the first object is seen as closer.
HEIGHT IN THE VISUAL FIELD: objects further away are generally higher in the visual field.
EVALUATION OF GIBSON'S DIRECT APPROACH TO PERCEPTION
Visual Illusions
Gibson's emphasis on DIRECT perception provides an explanation for the (generally) fast and
accurate perception of the environment. However, his theory cannot explain why perceptions are
sometimes inaccurate, eg in illusions. He claimed the illusions used in experimental work
constituted extremely artificial perceptual situations unlikely to be encountered in the real world,
however this dismissal cannot realistically be applied to all illusions.
For example, Gibson's theory cannot account for perceptual errors
like the general tendency for people to overestimate vertical
extents relative to horizontal ones.
Neither can Gibson's theory explain naturally occurring illusions. For example, if you stare for
some time at a waterfall and then transfer your gaze to a stationary object, the object appears to
move in the opposite direction.
Bottom-up or Top-down Processing?
Neither direct nor constructivist theories of perception seem capable of explaining all perception
all of the time. Gibson's theory appears to be based on perceivers operating under ideal viewing
conditions, where stimulus information is plentiful and is available for a suitable length of time.
Constructivist theories, like Gregory's, have typically involved viewing under less than ideal
conditions.
Research by Tulving et al manipulated both the clarity of the stimulus input and the impact of the
perceptual context in a word identification task. As clarity of the stimulus (through exposure
duration) and the amount of context increased, so did the likelihood of correct identification.
However, as the exposure duration increased, so the impact of context was reduced, suggesting
that if stimulus information is high, then the need to use other sources of information is reduced.
One theory that explains how top-down and bottom-up processes may be seen as interacting with
each other to produce the best interpretation of the stimulus was proposed by Neisser (1976) and is known as the 'Perceptual Cycle'.
PERCEPTUAL SET
The concept of perceptual set is important to the active process of perception. Allport, 1955
defined perceptual set as:
"a perceptual bias or predisposition or readiness to perceive
particular features of a stimulus".
Perceptual set is a tendency to perceive or notice some aspects of the available sensory data and
ignore others. According to Vernon, 1955 set works in two ways: (1) The perceiver has certain
expectations and focuses attention on particular aspects of the sensory data: This he calls a
'Selector'. (2) The perceiver knows how to classify, understand and name selected data and what
inferences to draw from it. This he calls an 'Interpreter'.
It has been found that a number of variables, or factors, influence set, and set in turn influences
perception. The factors include:
• Expectations
• Emotion
• Motivation
• Culture
1. EXPECTATION
(a) Bruner & Minturn, 1955 illustrated how expectation could influence set by showing
participants an ambiguous figure '13' set in the context of letters or numbers, e.g. in the series 'A, 13, C' or in the series '12, 13, 14'.
The physical stimulus '13' is the same in each case but is perceived differently because of the
influence of the context in which it appears. We EXPECT to see a letter in the context of other
letters of the alphabet, whereas we EXPECT to see numbers in the context of other numbers.
(b) We may fail to notice printing/writing errors for the same reason. For example:
1. 'The Cat Sat on the Map and Licked its Whiskers'.
2.
Once
in a
a lifetime
(a) and (b) are examples of interaction between expectation and past experience.
(c) A study by Bugelski and Alampay, 1961 using the 'rat-man' ambiguous figure also
demonstrated the importance of expectation in inducing set. Participants were shown either a
series of animal pictures or neutral pictures prior to exposure to the ambiguous picture. They
found participants were significantly more likely to perceive the ambiguous picture as a rat if
they had had prior exposure to animal pictures.
2. MOTIVATION AND EMOTION
Allport, 1955 has distinguished 6 types of motivational-emotional influence on perception:
(i) bodily needs (eg physiological needs)
(ii) reward and punishment
(iii) emotional connotation
(iv) individual values
(v) personality
(vi) the value of objects.
(a) Sandford, 1936 deprived participants of food for varying lengths of time, up to 4 hours, and
then showed them ambiguous pictures. Participants were more likely to interpret the pictures as
something to do with food if they had been deprived of food for a longer period of time.
Similarly Gilchrist & Nesberg, 1952, found participants who had gone without food for the
longest periods were more likely to rate pictures of food as brighter. This effect did not occur
with non-food pictures.
(b) A more recent study into the effect of emotion on perception was carried out by Kunst-Wilson & Zajonc, 1980. Participants were repeatedly presented with geometric figures, but at
levels of exposure too brief to permit recognition. Then, on each of a series of test trials,
participants were presented a pair of geometric forms, one of which had previously been
presented and one of which was brand new. For each pair, participants had to answer two
questions: (a) Which of the 2 had previously been presented? ( A recognition test); and (b)
Which of the two was most attractive? (A feeling test).
The hypothesis for this study was based on a well-known finding that the more we are exposed
to a stimulus, the more familiar we become with it and the more we like it. Results showed no
discrimination on the recognition test - they were completely unable to tell old forms from new
ones, but participants could discriminate on the feeling test, as they consistently favoured old
forms over new ones. Thus information that is unavailable for conscious recognition seems to be
available to an unconscious system that is linked to affect and emotion.
3. CULTURE
(a) Deregowski, 1972 investigated whether pictures are seen and understood in the same way in
different cultures. His findings suggest that perceiving perspective in drawings is in fact a
specific cultural skill, which is learned rather than automatic. He found people from several
cultures prefer drawings which don't show perspective, but instead are split so as to show
both sides of an object at the same time. In one study he found a fairly consistent preference
among African children and adults for split-type drawings over perspective drawings. Split-type drawings show all the important features of an object which could not normally be seen
at once from that perspective. Perspective drawings give just one view of an object.
Deregowski argued that this split-style representation is universal and is found in European
children before they are taught differently.
Elephant drawing split-view and top-view perspective. The split elephant drawing was generally
preferred by African children and adults.
(b) Hudson, 1960 noted difficulties among South African Bantu workers in interpreting depth
cues in pictures. Such cues are important because they convey information about the spatial
relationships among the objects in pictures. A person using depth cues will extract a different
meaning from a picture than a person not using such cues.
Hudson tested pictorial depth perception by showing participants a picture like the one below. A
correct interpretation is that the hunter is trying to spear the antelope, which is nearer to him than
the elephant. An incorrect interpretation is that the elephant is nearer and about to be speared.
The picture contains two depth cues: overlapping objects and known size of objects. Questions
were asked in the participants' native language, such as:
‘What do you see?’
‘Which is nearer, the antelope or the elephant?’
‘What is the man doing?’
The results indicated that both children and adults found it difficult to perceive depth in the
pictures.
The cross-cultural studies seem to indicate that history and culture play an important part in how
we perceive our environment. Perceptual set is concerned with the active nature of perceptual
processes and clearly there may be a difference cross-culturally in the kinds of factors that affect
perceptual set and the nature of the effect.
VISUAL CONSTANCIES
Perceptual constancies involve seeing visual objects accurately, regardless of their distance away
from us, or other factors that distort the retinal image. For example, the door is 'seen' as a
rectangular shape even when open and the retinal image is of a trapezium.
1. SIZE CONSTANCY
When we observe an object, the light falling on the retina is known as the 'retinal image'. Light
rays enter through the lens in the front of the eye and are focused on a particular area of the
retina at the back of the eye. The intriguing question is 'how does the brain interpret the light
image on the retina to arrive at an accurate perception?' This is particularly interesting when we
try to explain how objects are perceived as the same size even when seen at a distance. This can
be demonstrated:
Hold both hands in front of you, your left hand at arm's length and your right hand about half
way to your face. Both hands are 'perceived' as the same size, but the retinal image of the right
hand will be much larger. Now move your right hand so that it overlaps the left hand, still with
the left at arm's length and the right hand halfway towards your face. You should find that the
right hand 'swamps' the left - this is because the two hands are actually stimulating different
sized retinal images. The diagram below shows the different sizes of retinal images projected by
the same sized hand at different distances.
Diagram: hands near and far, projected through the lens onto the retina. Note that the nearer hand has a much larger image on the retina.
How is it that we 'perceive' the hand as being the same size in spite of these differences in the
retinal image? One explanation for this (ie for 'size constancy') is that the brain receives
information both about size of the retinal image and distance of the object. The visual system
seems to automatically make allowances for distance. For example, even though the retinal
image of the left hand is small, distance cues inform the brain that it is further away than the
right hand, and this can explain the smaller retinal image. Taking account of both size and
distance, the brain would probably conclude that the hands are the same size.
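The 'taking distance into account' step can be written down with the standard size-distance relation (a textbook geometric approximation, not a formula given in these notes; the sizes and distances below are invented): angular size shrinks with distance, and scaling it back up by perceived distance recovers a constant physical size.

import math

# Hedged sketch of size constancy: retinal (angular) size shrinks with distance,
# but scaling it by perceived distance recovers a constant physical size.
# Numbers are illustrative only.

def retinal_angle(physical_size_m, distance_m):
    """Visual angle (radians) subtended by an object of a given size at a given distance."""
    return 2 * math.atan(physical_size_m / (2 * distance_m))

def perceived_size(retinal_angle_rad, perceived_distance_m):
    """Size-distance scaling: angular size multiplied back up by assumed distance."""
    return 2 * perceived_distance_m * math.tan(retinal_angle_rad / 2)

hand = 0.18                      # an 18 cm hand
near, far = 0.35, 0.70           # roughly half arm's length and full arm's length

for d in (near, far):
    angle = retinal_angle(hand, d)
    print(f"distance {d} m: retinal angle {math.degrees(angle):.1f} deg, "
          f"perceived size {perceived_size(angle, d):.2f} m")
# The retinal angle roughly halves at double the distance,
# but the scaled perceived size stays at 0.18 m in both cases.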
Several distance cues have been identified which could aid the process of constancy scaling, and
two of these are:
A. RETINAL DISPARITY
The retina in each eye receives a slightly different image. To demonstrate this: close one eye and
line your finger up with the corner of the room. Now open that eye and close the other - the
finger appears to move. Normally, when using both eyes, the visual system calculates how far
away the finger is by combining information from the differences between the two images on the
retinas.
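A minimal sketch of that geometry, assuming the standard simplified stereo relation depth = focal length × eye separation / disparity (an idealised pinhole approximation, not taken from these notes; the numbers are illustrative):

# Hedged sketch of depth from retinal (binocular) disparity, using the standard
# simplified stereo relation  depth = focal_length * eye_separation / disparity.
# All numbers are illustrative, not physiological measurements.

def depth_from_disparity(focal_length_mm, eye_separation_mm, disparity_mm):
    if disparity_mm == 0:
        return float("inf")        # zero disparity = effectively at infinity
    return focal_length_mm * eye_separation_mm / disparity_mm

focal = 17.0        # rough optical length of the eye, mm
baseline = 63.0     # typical interocular distance, mm

for disparity in (2.0, 1.0, 0.5):                # larger disparity = nearer object
    d = depth_from_disparity(focal, baseline, disparity)
    print(f"disparity {disparity} mm -> depth about {d / 1000:.2f} m")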
B. MOTION PARALLAX
To a moving observer distant objects appear to move more slowly than near objects. For
example, when you look out of the window of a moving train nearer objects like telegraph poles
flash by faster than distant telegraph poles. The visual system can use this information to
calculate how far away the telegraph poles are.
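The same point can be made numerically for motion parallax (again a rough approximation, not from these notes): for an object off to the side of a moving observer, angular speed is roughly the observer's speed divided by the object's distance, so near poles sweep past much faster than far ones.

import math

# Hedged sketch of motion parallax: for an object at right angles to the direction
# of travel, angular velocity is roughly observer_speed / distance, so near objects
# appear to move faster than far ones. Numbers are illustrative.

def angular_speed_deg_per_s(observer_speed_ms, distance_m):
    return math.degrees(observer_speed_ms / distance_m)

train_speed = 30.0   # metres per second
for distance in (5.0, 50.0, 500.0):   # near pole, mid-distance pole, distant pole
    print(f"object at {distance:5.0f} m sweeps past at "
          f"{angular_speed_deg_per_s(train_speed, distance):6.1f} deg/s")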
2. SHAPE CONSTANCY
Knowledge of the 'real' shape of an object means that it is still perceived as being the same
regardless of the angle from which it is viewed. For example, I 'perceive' the wall clock as
circular even though from the angle I am now looking at it the retinal image is of an elliptical
shape.
3. COLOUR AND BRIGHTNESS CONSTANCY
This is where familiar objects retain their colour (or hue) in a variety of lighting conditions.
Knowledge of the 'real' colour of the object means that it is still perceived as being that colour,
regardless of the actual colour wavelength of the light that reaches the eye. Thus at night we still
perceive our red car as 'red', even under dim night-time lighting.
4. LOCATION CONSTANCY
Knowledge that objects don't generally move means that things are seen as remaining in the
same place even when the observer moves around and the retinal image changes. As we move
around the environment we produce a constantly changing pattern of retinal images yet we do
not perceive the world as spinning; this is due to kinaesthetic feedback. The brain subtracts the
eye movement commands from the resulting changes on the retina and this helps to keep us and
the environment stable.
VISUAL ILLUSIONS
Constancy scaling seems to happen automatically; it doesn't require us to think about it.
Normally the visual system receives accurate information about the size and distance of objects
eg by the use of distance cues. Psychologists have been particularly interested in instances where
the visual system makes errors as it does when conflicting information is received. Look at the
Ponzo Illusion:
The Ponzo Illusion: which horizontal line looks longer?
In the Ponzo Illusion the top horizontal line looks longer than the line below it despite the fact
that they are the same size and therefore must have the same retinal image. One explanation for
why we perceive the top line as longer (paradoxically, since we usually perceive the nearest line as
longer) is that these types of illusion contain false depth cues which trigger the size constancy
mechanism inappropriately. In the case of the Ponzo Illusion the converging lines are the false
depth cues which suggest the top line is further away than the bottom line. The eye is tricked by
the depth cues in the converging lines into 'thinking' the top line is further away. An object which
is further away produces a smaller retinal image. The size constancy mechanism therefore
expands the perceived size of the top line. In most cases automatic triggering of the size
constancy mechanism by a simple depth cue would result in an accurate perception. It is when
perception goes wrong that psychologists have been given insight into how the automatic scaling
mechanism might operate.
ILLUSIONS IN THE NATURAL WORLD
Illusions are relatively rare in the natural world and so there has been no evolutionary pressure to
produce a perceptual system that overcomes this. One illusion that does occur in the natural
world is the 'Moon Illusion'.
The moon (and sun) appears larger when low down on the horizon than when high in the sky.
The size of the retinal image does not change. You can 'black out' the moon by holding a 1/4
inch disc at arm's length, whether the moon is high or low in the sky. Why then does it appear to
be much larger when it is near the horizon? One explanation is constancy scaling. When the
moon is high in the sky, there is no depth/distance information visible, so you see the moon at its
correct (ie retinal image) size. However, when the moon is low down, near the horizon, depth
cues operate. The horizon is as far away as it is possible to see so constancy scaling
automatically increases the size. If the moon (whose retinal image remains the same) appears to
be further away when it is closer to the horizon then we conclude it must be larger.
FOUR TYPES OF ILLUSIONS
Gregory, 1983 has identified 4 types of illusions:
Distortions (eg Ponzo Illusion) where we make a perceptual mistake;
Ambiguous figures (eg the Necker Cube) where the same input results in different perceptions;
Paradoxical Figures (eg the Penrose Trident) where we assume we are looking at a 3-dimensional object which could not in fact exist;
Fictions (eg the Kanizsa triangle) where we see what is not in the stimulus, ie a second triangle.
ATTENTION
"Everyone knows what attention is. It is taking possession by the mind, in clear and vivid form,
of one of what seems several simultaneously possible objects or trains of thought.... it implies
withdrawal from some things in order to deal effectively with others "
W James 1890.
Do we attend simultaneously to everything in our environment or do we attend selectively to
certain types of information at any one time? The topics of perception and attention merge into
each other, since both are concerned with the question of what we become aware of in our
environment. We can only perceive things we are attending to, and we can only attend to things we
perceive. Therefore some of the same questions are at issue, notably the question of whether
attention is governed by 'bottom-up' sensory processes (as proposed by the information
processing models) or whether 'top-down' processes like memory/expectations etc play an
important part in attention.
HUMANS AS INFORMATION PROCESSORS
When we are selectively attending to one activity, we tend to ignore other stimulation, although
our attention can be distracted by something else, like the telephone ringing or someone using
our name. Psychologists are interested in what makes us attend to one thing rather than another
(selective attention); why we sometimes switch our attention to something that was previously
unattended (e.g. Cocktail Party Syndrome), and how many things we can attend to at the same
time (attentional capacity).
One way of conceptualising attention is to think of humans as information processors who can
only process a limited amount of information at a time without becoming overloaded. Broadbent
and others in the 1950's adopted a model of the brain as a limited capacity information
processing system, through which external input is transmitted.
INFORMATION PROCESSING SYSTEM
STIMULUS → Input processes → Storage processes → Output processes → RESPONSE
Information processing models consist of a series of stages, or boxes, which represent stages of
processing. Arrows indicate the flow of information from one stage to the next.
Input processes are concerned with the analysis of the stimuli.
Storage processes cover everything that happens to stimuli internally in the brain and can
include coding and manipulation of the stimuli.
Output processes are responsible for preparing an appropriate response to a stimulus.
ATTENTION THEORIES are concerned with how information is selected from incoming
stimuli for further processing in the system - therefore operate at the input processes end of the
model.
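A minimal sketch of the box-and-arrow idea (the stage names follow the diagram above; the toy 'processing' inside each stage is invented purely for illustration): each stage is a function, and information flows from one to the next in series.

# Hedged sketch of a serial stage model: stimulus -> input -> storage -> output -> response.
# The stage names follow the diagram above; the processing itself is a toy placeholder.

def input_processes(stimulus):
    """Analyse the raw stimulus (here: just break it into features/words)."""
    return stimulus.lower().split()

def storage_processes(features):
    """Code and manipulate the analysed stimulus internally."""
    return {"coded": features, "count": len(features)}

def output_processes(coded):
    """Prepare an appropriate response from the stored representation."""
    return f"responding to {coded['count']} items: {' '.join(coded['coded'])}"

def information_processing_system(stimulus):
    return output_processes(storage_processes(input_processes(stimulus)))

print(information_processing_system("The Telephone Is Ringing"))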
Basic Assumptions of the Information Processing Approach to Cognitive
Processes
The information processing approach is based on a number of assumptions, including:
(1) information made available by the environment is processed by a series of processing
systems (eg attention, perception, short-term memory);
(2) these processing systems transform or alter the information in systematic ways;
(3) the aim of research is to specify the processes and structures that underlie cognitive
performance;
(4) information processing in humans resembles that in computers.
A number of Models of attention within the Information Processing framework have been
proposed including:
Broadbent's Filter Model (1958), Treisman's Attenuation Model (1964) and Deutsch and Deutsch's
Late Selection Model (1963);
these will be outlined and evaluated. However, there are a number of evaluative points to
bear in mind when studying these models, and the information processing approach in general.
These include:
1. The information processing models assume serial processing of stimulus inputs.
Serial processing effectively means one process has to be completed before the next starts.
Parallel processing assumes some or all processes involved in a cognitive task(s) occur at the
same time.
Diagram: attention divides into selective attention (processing only one input - e.g. Broadbent and Treisman, studied through auditory dichotic listening and shadowing tasks, visual tasks, and the fate of unattended stimuli) and divided attention (processing all inputs - e.g. Kahneman's dual-task experiments, where performance depends on task similarity, practice and its effects on automaticity, and task difficulty).
There is evidence from dual-task experiments (examples are given later) that parallel processing
is possible. It is difficult to determine whether a particular task is processed in a serial or parallel
fashion as it probably depends (a) on the processes required to solve a task, and (b) the amount
of practice on a task. Parallel processing is probably more frequent when someone is highly
skilled; for example a skilled typist thinks several letters ahead, a novice focuses on just 1 letter
at a time.
2. The analogy between human cognition and computer functioning adopted by the information
processing approach is limited. Computers can be regarded as information processing systems
insofar as they:
(i) combine information presented with stored information to provide solutions to a
variety of problems, and
(ii) most computers have a central processor of limited capacity and it is usually assumed
that capacity limitations affect the human attentional system.
BUT (i) the human brain has the capacity for extensive parallel processing and computers
often rely on serial processing;
(ii) humans are influenced in their cognitions by a number of conflicting emotional and
motivational factors.
3. The evidence for the theories/models of attention which come under the information
processing approach is largely based on experiments under controlled, scientific conditions.
Most laboratory studies are artificial and could be said to lack ecological validity. In everyday
life, cognitive processes are often linked to a goal (eg you pay attention in class because you
want to pass the examination), whereas in the laboratory the experiments are carried out in
isolation from other cognitive and motivational factors. Although these laboratory experiments
are easy to interpret, the data may not be applicable to the real world outside the laboratory.
More recent ecologically valid approaches to cognition have been proposed (eg the Perceptual
Cycle, Neisser, 1976).
Attention has been studied largely in isolation from other cognitive processes, although
clearly it operates as an interdependent system with the related cognitive processes of perception
and memory. The more successful we become at examining part of the cognitive system in
isolation, the less our data are likely to tell us about cognition in everyday life.
4. The Models proposed by Broadbent and Treisman are 'bottom-up' or ‘stimulus driven’
models of attention. Although it is agreed that stimulus driven information in cognition is
important, what the individual brings to the task in terms of expectations/past experiences are
also important. These influences are known as 'top-down' or 'conceptually-driven' processes. For
example, read the triangle below:
Paris
in the
the Spring
Expectation (top-down processing) often over-rides information actually available in the
stimulus (bottom-up) which we are, supposedly, attending to. How did you read the text in the
triangle above?
MODELS OF ATTENTION
BOTTLENECK MODELS OF ATTENTION
A bottleneck restricts the rate of flow, as, say, in the narrow neck of a milk bottle. The narrower
the bottleneck, the lower the rate of flow. Broadbent's, Treisman's and Deutsch and Deutsch
Models of Attention are all bottleneck models because they predict we cannot consciously attend
to all of our sensory input at the same time. This limited capacity for paying attention is therefore
a bottleneck and the models each try to explain how the material that passes through the
bottleneck is selected.
BROADBENT’S FILTER MODEL
Donald Broadbent is recognised as one of the major contributors to the information processing
approach, which started with his work with air traffic controllers during the war. In that situation
a number of competing messages from departing and incoming aircraft are arriving continuously,
all requiring attention. The air traffic controller finds s/he can deal effectively with only one
message at a time and so has to decide which is the most important. Broadbent designed an
experiment (dichotic listening) to investigate the processes involved in switching attention which
are presumed to be going on internally in our heads.
Broadbent argued that information from all of the stimuli presented at any given time enters a
sensory buffer. One of the inputs is then selected on the basis of its physical characteristics for
further processing by being allowed to pass through a filter. Because we have only a limited
capacity to process information, this filter is designed to prevent the information-processing
system from becoming overloaded. The inputs not initially selected by the filter remain briefly in
the sensory buffer, and if they are not processed they decay rapidly. Broadbent assumed that the
filter rejected the non-shadowed or unattended message at an early stage of processing.
Broadbent (1958) looked at air-traffic control type problems in a laboratory.
Broadbent wanted to see how people were able to focus their attention (selectively attend), and to
do this he deliberately overloaded them with stimuli - they had too many signals, too much
information to process at the same time. One of the ways Broadbent achieved this was by
simultaneously sending one message (a 3-digit number) to a person's right ear and a different
message (a different 3-digit number) to their left ear. Participants were asked to listen to both
messages at the same time and repeat what they heard. This is known as a 'dichotic listening task'.
Right ear: 7, 5, 6    Left ear: 4, 8, 3
(i) Order of presentation (pair by pair): 74, 58, 63
(ii) Ear by ear: 756, 483
In the example above the participant hears 3 digits in their right ear (7, 5, 6) and 3 digits in their
left ear (4, 8, 3). Broadbent was interested in how these would be repeated back. Would the
participant repeat the digits back in the order that they were heard (order of presentation), or
repeat back what was heard in one ear followed by the other ear (ear by ear)? He found
that people made fewer mistakes repeating back ear by ear and would usually repeat back this
way.
SINGLE CHANNEL MODEL
Results from this research led Broadbent to produce his 'filter' model of how selective attention
operates. Broadbent concluded that we can pay attention to only one channel at a time - so his is
a single channel model.
In the dichotic listening task each ear is a channel. We can listen either to the right ear (that's one
channel) or the left ear (that's another channel). Broadbent also discovered that it is difficult to
switch channels more than twice a second. So you can only pay attention to the message in one
ear at a time - the message in the other ear is lost, though you may be able to repeat back a few
items from the unattended ear. This could be explained by the short-term memory store which
holds onto information in the unattended ear for a short time.
Broadbent thought that the filter, which selects one channel for attention, does this only on the
basis of PHYSICAL CHARACTERISTICS of the information coming in: for example, which
particular ear the information was coming to, or the type of voice. According to Broadbent the
meaning of any of the messages is not taken into account at all by the filter. All SEMANTIC
PROCESSING (processing the information to decode the meaning, in other words understand
what is said) is carried out after the filter has selected the channel to pay attention to. So
whatever message is sent to the unattended ear is not understood.
BROADBENT’S FILTER MODEL: senses (e.g. eye, ear) → input channels → short-term memory (sensory buffer) store → FILTER, which selects one input for attention on the basis of physical characteristics only → selected input for further processing.
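A toy sketch of the single-channel idea (the channel names and digit messages echo the dichotic listening example; this is an illustration, not Broadbent's own formalisation): the filter selects one channel purely on a physical characteristic, and only the selected channel ever reaches semantic processing.

# Hedged sketch of Broadbent's early filter: selection by a physical characteristic
# (which ear) happens BEFORE any analysis of meaning. Messages are invented examples.

sensory_buffer = {
    "right_ear": ["7", "5", "6"],
    "left_ear":  ["4", "8", "3"],
}

def physical_filter(buffer, attended_channel):
    """Select one channel on physical characteristics only; the rest decays unanalysed."""
    return buffer[attended_channel]

def semantic_processing(items):
    """Meaning is only extracted AFTER the filter, for the selected channel."""
    return f"understood: {' '.join(items)}"

selected = physical_filter(sensory_buffer, attended_channel="right_ear")
print(semantic_processing(selected))      # only the right-ear digits are understood
# The left-ear message never reaches semantic processing - which is why the model
# struggles to explain hearing your own name in an unattended conversation.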
EVALUATION OF BROADBENT'S MODEL
(1) Broadbent's dichotic listening experiments have been criticised because:
(a) The early studies all used people who were unfamiliar with shadowing and so found it very
difficult and demanding. Eysenck & Keane (1990) claim that the inability of naive participants to
shadow successfully is due to their unfamiliarity with the shadowing task rather than an inability
of the attentional system.
(b) Participants reported after the entire message had been played - it is possible that the
unattended message is analysed thoroughly but participants forget.
(c) Analysis of the unattended message might occur below the level of conscious awareness. For
example, research by von Wright et al (1975) indicated analysis of the unattended message in a
shadowing task. A word was first presented to participants with a mild electric shock. When the
same word was later presented to the unattended channel, participants registered an increase in
GSR (indicative of emotional arousal and analysis of the word in the unattended channel).
More recent research has indicated the above points are important. For example,
Moray (1969) studied the effects of practice: naive subjects could only detect 8% of digits
appearing in either the shadowed or non-shadowed message, whereas Moray (an experienced 'shadower')
detected 67%.
2. Broadbent's theory predicts that hearing your name when you are not paying attention should
be impossible, because unattended messages are filtered out before you process the meaning; thus the model cannot account for the 'Cocktail Party Phenomenon'.
3. Other researchers have demonstrated the 'cocktail party effect' under experimental conditions
and have discovered occasions when information heard in the unattended ear 'broke through' to
interfere with information participants are paying attention to in the other ear. For example, Gray
& Wedderburn (1960) found that students could put material from both ears together so that it
made sense. This implies some analysis of meaning of stimuli must have occurred prior to the
selection of channels. In Broadbent's model the filter is based solely on sensory analysis of the
physical characteristics of the stimuli.
GRAY & WEDDERBURN'S EXPERIMENT
Gray & Wedderburn found that participants were able to give a category by category response
(ie one that made sense of the material heard, as shown in the diagram below). Broadbent's Filter
Model predicts this would not be possible. It is now clear that the unattended message can be
processed far more thoroughly than was allowed for in Broadbent's theory.
Right ear: Jack, 2, Jill    Left ear: 1, and, 3
Possible responses -
Ear by ear: 'Jack 2 Jill', '1 and 3'
Order of presentation: 'Jack 1', '2 and', 'Jill 3'
Category by category: '1 2 3', 'Jack and Jill'
ANNE TREISMAN’S (1964) ATTENUATION MODEL
Selective attention requires that stimuli are filtered so that attention is directed. Broadbent's
model suggests that the selection of material to attend to (that is, the filtering) is made early,
before semantic analysis. Treisman's model retains this early filter which works on physical
features of the message only. The crucial difference is that Treisman's filter ATTENUATES
rather than eliminates the unattended material. Attenuation is like turning down the volume so
that if you have 4 sources of sound in one room (TV, radio, people talking, baby crying) you can
turn down or attenuate 3 in order to attend to the fourth. The result is almost the same as turning
them off: the unattended material appears lost. But if a non-attended channel includes your
name, for example, there is a chance you will hear it because the material is still there.
TREISMAN'S ATTENUATION MODEL: senses (e.g. eye, ear) → input channels → attenuating filter → semantic analysis → selected input for attention. Inputs, including attenuated inputs, are passed on for semantic analysis.
Treisman agreed with Broadbent that there was a bottleneck, but disagreed with the location.
Treisman carried out experiments using the speech shadowing method. Typically, in this method
participants are asked to simultaneously repeat aloud speech played into one ear (called the
attended ear) whilst another message is spoken to the other ear.
In one shadowing experiment, identical messages were presented to two ears but with a slight
delay between them. If this delay was too long, then participants did not notice that the same
material was played to both ears. When the unattended message was ahead of the shadowed
message by up to 2 seconds, participants noticed the similarity. If it is assumed the unattended
material is held in a temporary buffer store, then these results would indicate that the duration of
material held in sensory buffer store is about 2 seconds.
In an experiment with bilingual participants, Treisman presented the attended message in English
and the unattended message in a French translation. When the French version lagged only
slightly behind the English version, participants could report that both messages had the same
meaning. Clearly, then, the unattended message was being processed for meaning, and
Broadbent's Filter Model, in which the filter selects on the basis of physical characteristics only,
could not explain these findings. The evidence suggests that Broadbent's Filter Model is not
adequate: it does not allow for meaning being taken into account.
Treisman's ATTENUATION THEORY, in which the unattended message is processed less
thoroughly than the attended one, suggests processing of the unattended message is attenuated or
reduced to a greater or lesser extent depending on the demands on the limited capacity
processing system. Treisman suggested messages are processed in a systematic way, beginning
with analysis of physical characteristics, syllabic pattern, and individual words. After that,
grammatical structure and meaning are processed. It will often happen that there is insufficient
processing capacity to permit a full analysis of unattended stimuli. In that case, later analyses
will be omitted. This theory neatly predicts that it will usually be the physical characteristics of
unattended inputs which are remembered rather than their meaning. To be analysed, items have
to reach a certain threshold of intensity. All the attended/selected material will reach this
threshold but only some of the attenuated items. Some items will retain a permanently reduced
threshold, for example your own name or words/phrases like 'help' and 'fire'. Other items will
have a reduced threshold at a particular moment if they have some relevance to the main
attended message.
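A toy sketch of the attenuation idea (the signal strengths, thresholds and word list are invented for illustration; this is not Treisman's own formalisation): unattended input is passed on at reduced strength rather than discarded, and items with permanently lowered thresholds, such as your own name, can still break through.

# Hedged sketch of Treisman's attenuation model. Signal strengths, thresholds and
# the 'permanently lowered threshold' word list are invented for illustration.

DEFAULT_THRESHOLD = 0.5
LOW_THRESHOLD_WORDS = {"fire", "help"}          # plus your own name, e.g. "anne"
OWN_NAME = "anne"

def attenuate(strength, attended):
    """Attended input passes at full strength; unattended input is turned down, not off."""
    return strength if attended else strength * 0.3

def recognised(word, strength, attended):
    threshold = 0.1 if word in LOW_THRESHOLD_WORDS | {OWN_NAME} else DEFAULT_THRESHOLD
    return attenuate(strength, attended) >= threshold

stream = [("dinner", 1.0, True), ("weather", 1.0, False), ("anne", 1.0, False)]
for word, strength, attended in stream:
    print(word, "->", "recognised" if recognised(word, strength, attended) else "lost")
# 'dinner' (attended) and 'anne' (unattended, but low threshold) get through;
# 'weather' is attenuated below its threshold and is lost.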
EVALUATION OF TREISMAN'S ATTENUATION MODEL
1. Treisman's Model overcomes some of the problems associated with Broadbent's Filter Model,
e.g. the Attenuation Model can account for the 'Cocktail Party Syndrome'.
2. Treisman's model does not explain how exactly semantic analysis works.
3. The nature of the attenuation process has never been precisely specified.
4. A problem with all dichotic listening experiments is that you can never be sure that the
participants have not actually switched attention to the so called unattended channel.
EARLY VS LATE SELECTION MODELS OF ATTENTION
When does selectivity occur? Does it happen in the early stages of recognition - when
constructing a description of the input - or during the later stages, when comparing the input's
descriptions to those of stored objects? The issue is important because it concerns whether we
can selectively ignore something before we know what it means - EARLY SELECTION - or
only after we know its meaning - LATE SELECTION.
Broadbent and Treisman agree that selection of a single channel occurs at an early stage before
recognition processes begin and so their models are called EARLY SELECTION MODELS.
An alternative view is that information from all channels is transmitted to the semantic analysis
recognition stage and it is only after this that a selection is made. The general framework for a
late selection theory of this kind was first proposed by Deutsch and Deutsch (1963) and was later
elaborated by Norman (1968).
DEUTSCH AND DEUTSCH’S LATE SELECTION MODEL (1963)
DEUTSCH AND DEUTSCH (1963) solved the problems posed by the Broadbent model in a
different way to Treisman. Their model suggests that all inputs are subject to high level semantic
analysis before a filter selects material for conscious attention. Selection is therefore later,
because it occurs after items have been recognised rather than before, as in Broadbent's model.
Selection is also 'top-down', as opposed to Broadbent's and Treisman's models, which are
'bottom-up': an item which has relevance to you (your name, for example) or which fits the
context is likely to be selected. Material is identified or recognised, its relevance, value and
importance weighed, and the most relevant is passed upwards for conscious attention.
DEUTSCH AND DEUTSCH'S PERTINENCE MODEL (1963): senses (e.g. eye, ear) → input channels → semantic analysis of all inputs (unconscious), weighted by top-down factors → conscious attention → output.
DEUTSCH AND DEUTSCH (1963) proposed a more radical departure from Broadbent's
position in their claim that all inputs are fully analysed before any selection occurs. The
bottleneck or filter is thus placed later in the information processing system, immediately before
a response is made. Selection at that late stage is based on the relative importance of the inputs.
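For contrast with the early-selection sketch given under Broadbent's model, a toy late-selection version might look like this (the pertinence weights and messages are invented for illustration): every input is semantically analysed first, and only then is the most pertinent item passed on for conscious attention.

# Hedged sketch of late selection: ALL inputs are analysed for meaning first,
# then one is selected on pertinence (relevance). Pertinence weights are invented.

PERTINENCE = {"own name": 0.9, "fire": 0.95, "weather talk": 0.2, "dinner plans": 0.6}

def semantic_analysis(message):
    """Every channel is fully analysed, whether or not it will be attended."""
    return {"content": message, "pertinence": PERTINENCE.get(message, 0.1)}

def late_selection(messages):
    analysed = [semantic_analysis(m) for m in messages]       # analysis before selection
    return max(analysed, key=lambda a: a["pertinence"])["content"]

inputs = ["weather talk", "dinner plans", "own name"]
print("reaches conscious attention:", late_selection(inputs))   # -> own name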
EVALUATION OF THE DEUTSCH AND DEUTSCH (1963) MODEL
FOR THE MODEL
1. Some support for a late selection model is offered by research which shows an unattended
message in a dichotic listening task can affect behaviour even though the listener has no
conscious awareness of hearing the unattended message. For example, Moray (1969) paired an
electric shock with a word over several trials so that the person became conditioned to produce a
detectable change in GSR (Galvanic Skin Response) when the word was spoken. He found that
several of his participants produced a change in GSR when the word occurred in an unattended
message even though they were not aware of hearing it.
2. McKay (1973), using ambiguous words like 'bark', instructed participants to shadow an
ambiguous sentence while, in the unattended ear, a word was played which could clarify the
meaning of the sentence. Later, participants who were quite unaware at a conscious level of the
word in their unattended ear, chose meanings for the ambiguous sentence they had shadowed
which were in line with the unattended word.
Right ear (attended and shadowed): "the bark was not like anything she was familiar with". Left ear (unattended): either (a) 'tree' or (b) 'dog'.
3. More recent studies have also shown that under some circumstances unattended material may
receive some degree of analysis. For example, Wexler (1988) found that a GSR response varied
not only according to ear of presentation but also according to the personality of the listener,
indicating that processing of unattended material is more complicated than the early models
suggest.
AGAINST THE MODEL
1. However, the assumption made by Deutsch and Deutsch that all stimuli are analysed
completely, but that most of the analysed information is lost immediately, seems rather
uneconomical.
2. The research to support the Deutsch and Deutsch Model can also be explained by Treisman's
model. A word in the unattended ear could have a reduced threshold because of its relevance.
Physiological evidence also supports Treisman. When a measure of brain-wave activity known
as the evoked potential is recorded, it typically shows that the initial response to the unattended
message is much weaker than the response to the attended message, suggesting attenuated
processing of the unattended message.
3. Treisman and Geffen (1967) asked participants to shadow one of two simultaneous messages,
and at the same time monitor BOTH messages in order to detect target words. Detection was
indicated by tapping. According to Treisman's theory, detection on the unattended message
should be less than the shadowed message, whereas Deutsch and Deutsch's Model would predict
no difference (as both messages would be fully analysed). As Treisman's Model predicts,
detection was significantly higher on the shadowed message.
Perhaps on grounds of economy, and of its power to explain the available experimental
data, Treisman's is the most appropriate model at present.
AUTOMATIC PROCESSING
Researchers interested in attention have suggested a distinction between AUTOMATIC and
CONTROLLED processing (Posner & Snyder, 1975). The basic idea is that some mental and
physical processes are under an individual's conscious control, while others tend to occur
automatically, without conscious awareness or intention.
A frequently quoted example of this is learning to drive a car. When you first learn to drive, such
things as steering, braking and changing gear, all require a great deal of concentration. Problems
often arise for the learner driver when they are required to do two or more things at once eg
brake and change down gear. Also, as any driving instructor will tell you, learner drivers can
become so engrossed in such things as changing gear that they fail to attend to what is happening
on the road in front of them! Yet to drive competently frequently requires a driver to do two or
more things virtually simultaneously.
How does the transformation from learner to expert occur? The concepts of AUTOMATIC and
CONTROLLED processing have been used to explain this transformation. The basic idea is
simple - with practice, skills which initially required a considerable amount of attention become
virtually automatic. The development of automatic processing has a major advantage in that it
reduces the number and amount of things that we have to attend to consciously. Thus the scarce
resource of conscious attention is released for other tasks.
However, psychologists such as Gleitman (1981) have pointed out that automatic processing can
produce interference which actually lowers performance on certain tasks. A classic example of
this is the STROOP EFFECT, named after JR Stroop (1935) who devised a colour naming
experiment. The experiment involved participants naming colours as quickly as possible. In one
condition participants named patches of colour, in a second condition participants had to name
the ink colour in which words were printed, but the words themselves were colour names. For
example, participants would see the word BLUE but it would be written in red ink and their task
would be to say 'RED'. Stroop found that participants were much slower at naming the ink
colours when the stimuli were themselves colour words.
One explanation for the Stroop Effect is that we automatically process the meaning of words.
Thus when a participant sees the word BLUE but is supposed to respond to the ink colour and
say RED, the name of the word is automatically processed. This interferes with the participants'
ability to process and name the ink colour (RED), thus delaying their response. In particular it
has been suggested that the Stroop effect produces a 'mental race' between the two processes
involved in naming colours; the reading response wins the race and slows the colour naming.
The difficulty experienced in naming the ink colour of the colour words is therefore the
consequence of an overlearned skill, and cannot be brought under conscious control.
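A small sketch of the stimulus arrangement Stroop used (the colour list and trial structure here are an invented illustration, not Stroop's materials): on incongruent trials the word and its ink colour conflict, and it is the ink colour that must be named.

import random

# Hedged sketch of Stroop-style stimuli: congruent trials (word matches ink colour)
# versus incongruent trials (word and ink conflict). Colour lists are illustrative.

COLOURS = ["red", "blue", "green", "yellow"]

def make_trial(congruent):
    ink = random.choice(COLOURS)
    word = ink if congruent else random.choice([c for c in COLOURS if c != ink])
    return {"word": word.upper(), "ink": ink, "correct_response": ink}

for congruent in (True, False):
    trial = make_trial(congruent)
    kind = "congruent" if congruent else "incongruent"
    print(f"{kind}: the word {trial['word']} printed in {trial['ink']} ink "
          f"-> say '{trial['correct_response'].upper()}'")
# Stroop found naming the ink colour is much slower on incongruent trials,
# because reading the word is automatic and interferes with colour naming.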
DIVIDED ATTENTION
Can we do two things at once?
TASK DIFFICULTY
An obvious factor determining how well we can perform two tasks together is their level of
difficulty. However, a task that is difficult for one person may be straightforward for another (eg
when we first learn to drive). We should also consider the difficulty of each of the tasks
separately.
PRACTICE
While experienced drivers can converse and drive at the same time, learner drivers have
difficulty doing the 2 tasks. Spelke et al (1976) demonstrated the value of practice with 2 subjects
(Diane and John) who were given approximately 90 hours of training on a variety of tasks. The
students were first of all asked to read short stories for comprehension while writing down words
at dictation. To begin with their reading speed and their handwriting during dictation both
suffered substantially. After 30 hours of practice, however, their reading speed and
comprehension had both improved up to the levels they displayed when not taking dictation and
their handwriting was also better quality.
TASK SIMILARITY
It may well be that the inability to report much about the non-shadowed message in the
shadowing situation is due to the great similarity between the 2 inputs - both English prose
passages presented in an auditory fashion.
Allport et al 1972 found when 2 shadowing tasks were dissimilar - for example, the standard
shadowing task and the task of learning pictorial information - 90% of the pictures were
recognised.
The extent to which 2 tasks can be performed successfully together seems to depend on a number
of factors:
1. two dissimilar, highly practised and simple tasks can typically be performed well together,
whereas
2. two similar, novel and complicated tasks cannot.
Dual task experiments imply that some well-learnt skills are virtually automatic. Once a decision
has been made to drive somewhere, the actual driving of the car goes into 'autopilot'. And yet
some form of unconscious monitoring of environmental requirements must be going on to enable
us to deal with sudden emergencies.
KAHNEMAN'S CAPACITY THEORY OF ATTENTION
Kahneman (1973) proposed that there is a certain amount of ATTENTIONAL CAPACITY
available which has to be allocated among the various demands made on it. On the capacity side,
when someone is aroused and alert, they have more attentional resources available than when
they are lethargic. On the demand side, the attention demanded by a particular activity is defined
in terms of MENTAL EFFORT; the more skilled an individual the less mental effort is required,
and so less attention needs to be allocated to that activity. If a person is both motivated (which
increases attentional capacity) and skilled (which decreases the amount of attention needed), he
or she will have some attentional capacity left over.
People can attend to more than one thing at a time as long as the total mental effort required does
not exceed the total capacity available. In Kahneman's model allocation of attentional resources
depends on a CENTRAL ALLOCATION POLICY for dividing available attention between
competing demands.
Once a task has become automatic it requires little mental effort and therefore we can attend to
more than one automatic task at any one time, e.g. driving and talking.
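A toy version of the capacity idea (all the effort, skill and capacity numbers are invented for illustration): arousal sets the capacity available, practice lowers the effort a task demands, and two tasks can be combined only while the total demand stays within capacity.

# Hedged sketch of Kahneman's capacity view: tasks can be combined as long as the
# total mental effort demanded does not exceed available capacity. Numbers invented.

def available_capacity(arousal):
    """More aroused and alert -> more attentional resources (0-1 arousal scale)."""
    return 0.5 + 0.5 * arousal

def effort_required(base_demand, skill):
    """The more skilled (practised) the performer, the less effort a task needs."""
    return base_demand * (1.0 - skill)

def can_do_together(tasks, arousal):
    total = sum(effort_required(demand, skill) for demand, skill in tasks)
    return total <= available_capacity(arousal)

driving_expert = (0.9, 0.8)     # high base demand, highly practised
driving_novice = (0.9, 0.1)     # same task, little practice
talking = (0.4, 0.5)

print(can_do_together([driving_expert, talking], arousal=0.7))   # True
print(can_do_together([driving_novice, talking], arousal=0.7))   # False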
Kahneman's Capacity Model of Attention
(1) Attention is a central dynamic process rather than the result of automatic filtering of
perceptual input.
(2) Attention is largely a top-down process, as opposed to the Filter Models which suggest a
bottom-up process.
(3) The focus of interest is the way the central allocation policy is operated so as to share
appropriate amounts of attention between skilled automatic tasks and more difficult tasks which
require a lot of mental effort.
(4) Rather than a one-way flow of information from input through to responses, attention
involves constant perceptual evaluation of the demands required to produce appropriate
responses.
EVALUATION OF KAHNEMAN'S MODEL
1. Cheng (1985) points out that when tasks have been learnt we change the way we process and
organise them, but this is not necessarily 'automaticity'. For example, if asked to add ten two's
you could add 2 and 2 to make 4, add 4 and 2 to make 6, add 6 and 2 to make 8, add 8 and 2 to
make 10 etc. Indeed young children when first learning arithmetic would do just this. When we
have more arithmetical knowledge and realise that adding ten two's is the same as multiplying
2 x 10, the solution can be produced in one step. The answer is quicker because we have
processed the information differently, using different operations, not because we have added ten
two's 'automatically'.
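Cheng's point can be shown in a few lines (a sketch only, with the code standing in for the learner's reasoning): the step-by-step route performs ten separate additions, the restructured route a single multiplication, yet both give the same answer.

# Hedged illustration of Cheng's restructuring point: repeated addition (the novice,
# step-by-step route) versus a single multiplication (the restructured route).
total = 0
for _ in range(10):        # add 2, ten times: 2, 4, 6, ... 20
    total += 2
print(total, 2 * 10)       # 20 20 - same answer, produced by different operations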
2. A MAJOR PROBLEM with Kahneman's theory is that it does not explain how the allocation
system decides on policies for allocating attentional resources to tasks.
The need for a homunculus to make decisions is a weakness of a psychological theory: what
makes the homunculus make its decisions - another little person inside it?