Psyc236 Lectures

Lecture 1 – Intro

Sensation: Sensation refers to how our senses (e.g. sight) transform physical properties of the environment (e.g. patterns of light) into electrical nerve signals, which are then relayed to the brain. Sensation is the stimulation of our sensory organs (eyes, ears, skin, nose, tongue).

Perception: Perception is the process of turning these neuronal signals into a meaningful experience, like a colour or a face. Perception is the "selection, organization and interpretation of sensory input". Perception is conscious, so when you are sleeping there may be sensation, but typically no perception.

Cognition: Cognition refers to "all the processes by which sensory input is transformed, reduced, elaborated, stored, recovered and used" (Neisser, 1967). This includes, for example, attention, memory and thought.

Perception and cognition: There is obviously some overlap between what is considered perception and what is considered cognition. For example, face perception cannot be perfectly separated from face recognition; the latter requires memory.

We have five senses – smell, taste, touch, hearing and sight. The typical focus is on visual perception. Approximate cortical areas:
- 27% vision = 250 cm²
- 8% auditory = 75 cm²
- 7% somatosensory
- 6% motor
- 0.5% olfactory

Visual perception: Light (e.g. from the sun) shines on an object in the environment; part of the light is reflected and falls into our eyes. There, photoreceptors detect/measure the light and transform it into electrical neuronal signals, which are sent to our brain (sensation). The visual cortex then transforms and interprets these signals in very specific ways, resulting in perception of the world surrounding us.

Perception, "I perceive; I see …": We intuitively tend to understand perception as an objective, somewhat passive process in which we neutrally observe our surroundings. Nothing could be further from the truth: there is little passive about perception (and less objectivity than we like to believe). Interestingly, we say "I perceive …", semantically acknowledging the active role of the 'I'. Indeed, we even have a vivid perception of 'self'. We take our perception as ground truth.

Unified experience / parallel processing: Probably because our brain is slow and we are under time pressure, visual perception consists of several parallel processes. Determining colour, texture, motion and position, even faces or object categories, occurs in parallel, often independently and even at different speeds. Yet we have a uniform percept: the binding problem. When a tree is falling you can't wait until you have identified the tree before you take evasive action. Visual perception is understood as very accurate, solving complex problems; and if it is inaccurate we typically ignore that, because we don't know any better.

Some difficulties:
- We have 2 eyes, but see 1 world
- The retina is 2-D, but we see the world as 3-D
- We move our head, but the world seems stable
- We only get partial views of objects, but see complete objects
- We blink, but see continuous vision

Psychophysics: Making physical measurements of behaviour. Can be combined with eye movement recordings. (Sebastien Millet, Harold Hill, me, Simone Favelle, Steven Palmisano – actually many more in some form.)

EEG: A broadly similar setup, plus an EEG recording system. Instead of measuring behaviour, it measures brain activity. (Bob Barry, Adam Clarke, Nadia Solowij – though often not so much measuring perception and cognition per se.)

fMRI: Measures brain activity, in that sense like EEG, but obviously a very different kind of measurement.
Lecture 2

Cones:
- Three types (the basis of colour vision)
- Up to two bipolar cells per receptor (divergence)
- Low sensitivity (need plenty of light)
- 6–7 million
- Photopic vision
- Mostly within the central ~20 degrees

Rods:
- One type only
- Many receptor cells per bipolar cell (convergence)
- High sensitivity (good for night vision)
- 120 million
- Scotopic vision
- None in the fovea centralis (0.5 degree)

Anatomy of the retina: a 0.4 mm thick layered structure – 3 dark layers of cell bodies and 3 light layers of axons and synapses.

Fovea:
- High spatial resolution
- Two bipolar cells per receptor (divergence)
- Low sensitivity (needs much light)
- 25% of ganglion cells (= axons to the brain); another 25% for the parafovea
- 1% of receptor cells
- Very high receptor density
- Small receptors
- Only cones

Periphery:
- Low spatial resolution
- Many receptor cells per bipolar cell (convergence)
- High sensitivity (good for night vision)
- 50% of ganglion cells
- 99% of all receptors
- Lower receptor density
- Larger receptors (more sensitive)
- Mostly rods

While there are more rods than cones in the periphery, and almost no rods (or none at all in the very foveola) in the centre, that does not mean peripheral vision relies on rods only. During daylight, perception relies on cones – even in the periphery. Defining a clear boundary between fovea and periphery is difficult, as there are a number of steps/boundaries at different eccentricities. Acuity decreases gradually, without noticeable steps. Approximately:
- 0–2 degrees: fovea
- 2–5 degrees: parafovea
- 5–16 degrees: near periphery
- 60 to 90 degrees: far periphery

Visual span – approximately 180 degrees.

Types of ganglion cells / pathways: Three types of ganglion cells in the retina form part of three pathways – the parvo system houses the midget ganglion cells, the magno system the parasol ganglion cells, and the konio system the bistratified ganglion cells.
- P-path (parvo): fairly small cells (midget ganglion cells, and parvocellular cells in the LGN). About 80%; small receptive fields; high spatial resolution.
- M-path (magno): large cells (parasol ganglion cells, and magnocellular cells in the LGN). About 10%; large receptive fields; blind to colour; low spatial resolution but fast; respond to low contrast.
- K-path (konio): small cells (bistratified ganglion cells, and koniocellular cells in the LGN). 8–10%; large receptive fields; blue-on (S-cones are rare), red/green-off receptive fields.

Week 2 – Colour perception: White light is a mixture of light of different frequencies.

Week 4 – Lecture 1: Size perception

Delboeuf illusion; Alice in Wonderland Syndrome, which involves distortion of perceived size as well as of distance and of one's own body size. Imagine how disturbing misperceiving size might be and, conversely, how being able to explain that this occurs might help reduce anxiety.

Keywords – Size: visual angle, spatial frequency, perceived size. Size constancy: size–distance scaling, Emmert's law. Illusions: Ponzo, Titchener, Ames room. Theories: constructivist, direct.

The problem: "The problem" is that the size of the image arriving at the retina is a function of both the size of the object we are looking at and its distance from us. A small image could be of a small object relatively close or of a much larger object further away.
Geometrical optics allows us to determine the size of the image if we know the size of the object and its distance; what the brain has to do is the opposite: determine the size and distance of the object knowing only the size of the image. This is like having one known but two unknowns. The size of the object is the distal stimulus, but the visual system only has retinal image size, the proximal stimulus.

The geometry of image formation (geometrical optics) means that the image of an object that has a constant size in the world will vary as a function of viewing distance, as shown below. Image size is inversely proportional to viewing distance: double the distance and you halve the image size. The two trees are the same size (above), but the sizes of their images are very different – the image of the one twice as far away is half the size. Yet, despite this, we experience size constancy: objects do not appear to change in size as a function of viewing distance. A tree does not appear to grow as we approach it, even if we are aware that it takes over more and more of our field of view.

Geometry – visual angle: Visual angle is "a measure of image size on the retina, corresponding to the number of degrees the image subtends from its extremes to the focal point of the eye" (Palmer, 1999, Vision Science: Photons to Phenomenology; italics in original). This is illustrated where the visual angle subtended by an object (the bar) increases as its distance from the eye decreases, although the size of the bar itself does not change. The angle referred to is the angle between the lines where they cross. We use visual angle to describe the size of the proximal stimulus on the retina. 1° of visual angle corresponds to 0.288 mm on the retina in humans. The width of your thumb at arm's length subtends about 2° and can be used to measure the approximate visual angle of objects.

Visual angle equation: A = arctan(H / D), where A = visual angle, H = height of the object, D = distance between object and eye, and arctan is the inverse of the trigonometric tan function. (This is an estimate that holds well for small angles, < 10 degrees; the exact form is A = 2·arctan(H / 2D).) A worked example appears a little further below.

Physiology: The receptive fields of receptor cells are (retinal image) size specific – they only respond to light from a limited area on the retina. This is also true of ganglion cells, including ones with centre–surround organisation: the centre and the surround have clearly defined sizes. Receptive fields are thus well suited to encoding retinal image size – their response to a stimulus will vary as a function of the size of the stimulus (a circle of light, above). However, this does not solve "the problem", as the same cells are agnostic with respect to (do not know about) distance and, as we have seen, retinal image size is a function of viewing distance as well as object size. V1 receptive fields come in a range of sizes (as illustrated above) and are thus well suited to encoding a range of retinal image sizes (but they still do not encode viewing distance: they respond best to an image of a certain size regardless of how far away the object projecting that image is).

After-effects – the psychologist's micro-electrode: After-effects can be used to investigate how physical properties are encoded by the visual system.

Spatial frequency: Counting "the number of bars" in a given area is more formally referred to as spatial frequency, defined as the number of cycles per degree of visual angle (c/deg). See STT Figure 4.2.
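A short worked example of the visual angle and spatial frequency ideas above. This is only an illustrative sketch: the thumb width, arm length and bar spacing are assumed numbers, not values from the lecture.

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle subtended by an object, using the exact form A = 2*arctan(H / 2D)."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# A thumb ~2 cm wide held at arm's length (~57 cm) subtends about 2 degrees,
# matching the rule of thumb quoted in the notes.
print(visual_angle_deg(2.0, 57.0))        # ~2.0 degrees

# Double the viewing distance and the visual angle roughly halves (small angles).
print(visual_angle_deg(2.0, 114.0))       # ~1.0 degree

# A grating whose bars repeat every 0.5 cm, viewed from 57 cm, has a spatial
# frequency of about 1 cycle per half-degree, i.e. ~2 cycles per degree.
print(1 / visual_angle_deg(0.5, 57.0))    # ~2 c/deg
```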
Formal experiments (e.g. Blakemore and Campbell, 1969) have used adaptation to characterise the spatial frequency response of the human visual system. For example, the data below show the effect of adapting to a 7.1 cycle/degree grating. The results show that sensitivity (left) is reduced most at the adapting frequency, with nearby frequencies affected to a lesser extent.

Perceived size – what do we perceive? STT Figure 4.25 (below) shows "objects" A and B that are the same size, and object C that is half the size. Viewed from the same distance, object C would have twice the spatial frequency of objects A and B. However, when object B is presented twice as far away, its spatial frequency (and retinal image size) will now match that of object C. The responses of cells tuned to spatial frequency would now be the same to B and C but different to A, although "in the world" B is the same size as A, not C. Is that what we see? The size of images on the retina is not important; the size of objects in the world is. When asked to judge the spatial frequency (size) of objects presented at the same distance, people can easily judge either object or image size (because size is all that is changing). However, when the objects are presented at different distances, people find it easier to judge whether object size matches than whether image size matches. Put another way, we see A and B above as the same size (which they are!) even though the retinal images of B and C are more similar. This demonstrates that humans show size constancy, the "ability to perceive the real size of objects regardless of their distance from us" (STT p. 122).

Size constancy: Size constancy, the "ability to perceive the real size of objects regardless of their distance from us" (STT p. 122), can usefully be considered the goal of size perception. To successfully interact with the world we need to know about the size of objects, not the size of images.

Holway & Boring (1941) conducted a classic psychology experiment to test size constancy. The observer's task was to adjust the size of a comparison stimulus (Sc) presented at a constant distance (Dc) of 10 feet (~3 m) until it matched the size of a standard stimulus (Ss), which was presented at a variety of distances (Ds). The observer (O) was seated at the corner of a corridor and so could choose to look at either the comparison or the standard (not both) at any one time. The standard stimuli were designed so that they always projected the same visual angle (1°), i.e. the ones further away were bigger than the closer ones (10' indicates 10 feet).

Holway and Boring predicted two possible patterns of results, depending on whether people responded on the basis of visual angle or of object size. Responding on the basis of visual angle should mean that the comparison circle was always adjusted to the same size (as the visual angle of the standard stimulus did not change). If the observers responded on the basis of object size, they should adjust the comparison to be bigger for the more distant stimuli (as these were bigger!). They refer to these competing predictions as the law of visual angle and the law of size constancy respectively. As well as varying the distance and size of the standard stimuli, Holway and Boring also varied the visual information about depth available, on the basis that accurate perception of depth may be necessary for size constancy. Most depth information was available with both eyes open, and least with one eye looking through an artificial pupil (1.8 mm pinhole) and a reduction tunnel (poster tube?!) that blocks out environmental cues to depth.
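To get a feel for the stimuli, here is a minimal sketch of how large a disc must be to subtend 1° at different distances. Only the 1° visual angle and the 10-foot comparison distance come from the notes above; the list of standard distances is illustrative.

```python
import math

def size_for_angle(angle_deg, distance):
    """Physical size needed to subtend a given visual angle at a given distance."""
    return 2 * distance * math.tan(math.radians(angle_deg) / 2)

# Comparison stimulus: always 10 ft away, initially ~1 degree of visual angle.
print(size_for_angle(1.0, 10))   # ~0.17 ft (about 2 inches)

# Standard stimuli: all subtend 1 degree but sit at increasing (illustrative)
# distances, so the more distant standards had to be physically larger.
for d in (10, 20, 40, 120):
    print(d, round(size_for_angle(1.0, d), 2))  # 0.17, 0.35, 0.7, 2.09 ft
```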
The results are shown below. The take-home message is that when multiple sources of information about depth were available binocularly (both eyes open), observers were pretty good at matching the comparison stimulus to the actual (distal) size of the standard, i.e. they showed size constancy. However, when little information about depth was available and all they could see was the circle itself, their adjustments were more in line with matching visual angle. The results suggest that size constancy can be achieved, but that it depends on information about depth being available.

Size–distance scaling: The Holway and Boring (1941) results suggest that the perception of distance is critical to being able to perceive object size and achieve size constancy. One possibility is that the brain in effect does inverse optics/geometry. As we saw, the size of an object and its distance determine visual angle. If distance is known (a big if!) then this relationship can be reversed and visual angle can be used to determine size.

Emmert's law: for a given retinal image size, perceived size is proportional to perceived distance. Equation: S = K × R × D, where S = perceived size, K = a constant, R = retinal image size and D = perceived distance. This effectively reverses the geometrical rule of image formation that, for a given object size, image size is inversely proportional to distance. (A short numerical sketch of this scaling as applied to the Ponzo illusion appears further below.)

The Ponzo illusion: The Ponzo illusion is a classical geometric illusion first published by Ponzo in 1911. As you probably know, the two horizontal lines are the same length, but people normally report experiencing the upper one as longer. Although they are actually the same distance from you on the screen, the top line appears to be further away thanks to the converging "parallels". Size–distance scaling would explain this as the size of the upper line being scaled up by its greater perceived distance. The simple geometrical illusion on the left can be thought of as a reductionist stimulus where only the key elements remain (if you got rid of the converging lines the illusion should no longer work). A more complicated scene, like the one on the right, can work as well or better. With regard to physiology, there is evidence that the V1 response reflects perceived size, not just image size, at least when the illusion is attended to, suggesting top-down influences (Fang, Boyaci, Kersten et al., 2008). V1 involvement is also suggested by findings where the inducing components (the converging lines) are presented to one eye and the test components (the bars) to the other eye (Song et al., 2011). The responses of single cells in macaque V1 are also claimed to reflect perceived size (Ni, Murray, and Horwitz, 2014). Macaques, and all the other species that have been tested (including rats, pigeons and horses), appear susceptible to the Ponzo illusion. This is also the case for human infants from around 7 months old (Yonas, Granrud, Arterberry & Hanson, 1986). Children who have their sight restored, having been blind from birth, are reported to be immediately susceptible to the Ponzo illusion (Gandhi, Kali, Ganesh and Sinha, 2015).

Ponzo – theoretical explanations: There are a number of explanations of the Ponzo illusion, which suggests that none of them is entirely right.
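A minimal numerical sketch of Emmert's law / size–distance scaling as it might apply to the Ponzo figure. The retinal size and the perceived distances are invented for illustration; only the relation S = K × R × D comes from the notes.

```python
def perceived_size(retinal_size, perceived_distance, k=1.0):
    """Emmert's law: S = K * R * D (perceived size scales with perceived distance)."""
    return k * retinal_size * perceived_distance

# Both Ponzo bars project the same retinal size...
retinal = 1.0
# ...but the converging lines make the upper bar seem further away (made-up distances).
lower = perceived_size(retinal, perceived_distance=5.0)
upper = perceived_size(retinal, perceived_distance=7.0)
print(lower, upper)  # 5.0 vs 7.0: the "further" bar is scaled up and so looks longer
```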
While misapplied constancy scaling would appear to explain the classical version of the illusion, and relates nicely to accounts of size constancy that involve scaling retinal image size by perceived distance, misapplied size–distance scaling does not appear to explain many variants of the Ponzo illusion.

Constructivist theories like Gregory's start from the assumption that the retinal image is inadequate and ambiguous. As retinal image size is a function of both distance and object size, the image alone is not sufficient to support or explain perception. Gregory viewed perception as very much like science, with the brain testing hypotheses against the available data (the image). Sideways rules, such as scaling perceived size by perceived distance, and "top-down" knowledge based on experience are also required to interpret the image.

Alternative explanations emphasise "bottom-up" effects of local properties of the image over a global, scene-based interpretation. For example, if the upper bar gets assimilated to (joined with) the oblique lines due to its proximity, that would simply explain why it appears bigger. Other explanations invoke size scaling by local background information, not unlike the Titchener or Ebbinghaus illusion (STT Figure 4.14). It is not immediately clear how this can be applied to the simplest variation of the Ponzo illusion, unless the gap to the oblique lines provides the context, more like the concentric circles of the Delboeuf illusion. A role of the background in scaling apparent size would certainly be possible when the bars are shown.

The Ames Room: A reduced change in apparent size has been claimed for people, particularly females, viewing their significant other (Dion & Dion, 1976). Labelled the Honi effect (Honey effect?!), this may not be entirely reliable (Ong, Luck & Olson, 1980), limiting its possible application as a love-meter.

Direct perception of size: As explained briefly during the lecture drop-in, a direct perceptionist in the tradition of J. J. Gibson would argue that there are invariants in the structure of the optic array (the pattern of light reaching a point) that directly specify size. This avoids any need to take distance into account when inferring size. Two such invariants are the number of texture elements occluded and the horizon ratio. These are outlined and illustrated next.

Do all the red disks ("checkers") below look the same size? Hopefully the two lower ones in each image do, but how about the two higher ones? Does the one in the first or the second image look bigger? According to the Gibsonians, the checker that covers (occludes) the most checks (texture elements) should look biggest, i.e. the one higher up in the second image. In terms of retinal image size (or size on the screen), the one higher up on the left is smaller than the other three. Clearly we do not live on a checkerboard, but statistical properties of ground coverings like grass (e.g. density of blades) would perform the same function IRL (in real life). The constructivists might argue that it looks bigger because it looks further away and size–distance scaling has been applied by the brain. More on Gibsonians and constructivists in the final lecture, but, basically, these are two theories of visual perception. Misapplied constancy scaling is a constructivist approach, while the number of occluded texture elements is a Gibsonian invariant directly available from the light reaching the eye.
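A minimal sketch of the occluded-texture-elements invariant just described. The checker counts and element size are made-up numbers; the point is that size can be read off the texture without knowing distance.

```python
def object_size_from_texture(elements_covered, element_size):
    """If ground texture elements are roughly equal in size, an object's size can be
    read off from how many elements it covers, with no need to know its distance."""
    return elements_covered * element_size

# The checker that occludes more checks should look bigger, regardless of its
# retinal image size (assumed 0.3 m checks).
print(object_size_from_texture(4, 0.3))  # covers 4 checks -> 1.2 m
print(object_size_from_texture(6, 0.3))  # covers 6 checks -> 1.8 m (looks bigger)
```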
Horizon ratio: The horizon ratio (Sedgwick, 1973) is another Gibsonian invariant that might allow us to perceive size directly. It is illustrated below. Hopefully the two cylinders look about the same size in the world, though one is clearly smaller in the image. The Gibsonians would argue that this is because the horizon ratio is the same in both cases. The horizon ratio is the amount the cylinder sticks up above the horizon compared to the amount that is below the horizon (the ratio of those two heights). This is labelled below: the ratio of the two red arrows should be the same in each case, indicating that the cylinders are the same height. The fact that a cylinder cuts the horizon also tells you that the cylinder is bigger than you (so don't mess with it!). This is because the horizon is at your eye height, as illustrated below: the line of sight that goes to the horizon is approximately horizontal, i.e. at your eye height.

Ponzo variant: Do the men all look the same size? Can you explain this variant of the Ponzo illusion in terms of misapplied constancy scaling? Can you explain it in terms of texture element occlusion and/or horizon ratio? Remember the vanishing point, where parallel lines converge to, would be on the horizon.

Week 5: Lecture 1 – Depth perception

Types of distance and depth cues:
1. Oculomotor cues
2. Pictorial cues
3. Stereoscopic cues
4. Motion cues

Texture gradients: Equally spaced texture elements of equal size (blades of grass) will appear to be packed closer together as distance increases. These texture elements might be used as a scale to judge both distance and size. Properties of texture gradients: an object of equal size will cover an equal number of texture elements; an object that is twice as far away will have twice as many texture elements between it and the observer (e.g. the further cylinder is twice as far away as the near cylinder, and both are the same size). Does foreshortening tell you about depth? Orientation is the first derivative of depth (the rate of change of depth); curvature is the second derivative of depth (the rate of change of orientation). General point: depth perception is closely related to the perception of 3-D shape.

Pictorial cue – familiar size: Under certain conditions, knowledge of an object's true size can influence our perception of its distance from us. Epstein (1965): observers were presented with equal-sized photos of a dime, a quarter and a 50-cent coin in a darkened room. These photos were physically positioned at the same distance from the observers and illuminated with a spot of light. When viewed with one eye, the dime was perceived to be nearer than the quarter, which was perceived to be nearer than the 50-cent coin. Familiar size can potentially provide absolute depth – whether it does or not is an "empirical question". Familiar size has been used historically as a range finder for artillery (Morgan, 2003): at what distance from the observer does a model soldier of known size exactly match the real soldier?
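A minimal sketch of the model-soldier range finder idea, using similar triangles. The model height, viewing distance and soldier height are assumed values for illustration.

```python
def target_distance(model_height, model_distance, true_height):
    """Similar triangles: the model exactly covers the real soldier when
    true_height / target_distance == model_height / model_distance."""
    return true_height * model_distance / model_height

# A 5 cm model held at 50 cm exactly matches a 1.8 m soldier when he is 18 m away.
print(target_distance(0.05, 0.5, 1.8))  # 18.0 m
```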
Relative and familiar size and opacity: Nearer objects will occlude further ones. Perspective is a combination of relative size and foreshortening. Familiar size is a potential cue to absolute distance, but the Ames room demonstrates that it can be overridden.

Position relative to the horizon: Height in the visual field is a cue to depth order and relative depth – objects below the horizon appear to be further away when they are higher; objects above the horizon appear to be further away when they are lower. The horizon can be used as a reference point to determine distance. There are three types of horizon: the geometrical horizon, where horizontal lines converge; the true horizon, which takes into account the curvature of the earth; and the visible horizon, the most distant visible boundary in the scene.

Atmospheric perspective: Not all pictorial cues are based on geometric effects of the distance between viewer and object/environment. For example, distant objects often look less clear, due to particles in the air. This may also be partially carried by differences in clarity of detail (spatial frequency).

Shading and lighting: The variation in light coming from a surface as a function of its angle with respect to the light source. For a Lambertian/matte surface this is independent of the viewing angle. Shading is ambiguous: equivalent convex and concave (hollow) surfaces lit from opposite directions give rise to identical images. Assumptions: light comes from above, and objects are convex.

Stereoscopic cues: Stereoscopic cues are used in 3-D movies etc. to make objects appear in front of or behind the screen. We have two horizontally separated eyes with overlapping visual fields. As a result, the left eye has a slightly different view of the same scene to the right eye.

Motion cues: Perception is almost never static; either the observer or the object (or both) are moving. This motion provides information about the 3-D layout of the environment and the shape of objects in it. Kinetic occlusion cue: accretion/deletion of texture at occluding edges.

Motion parallax: As your head moves relative to objects, nearer objects appear to move faster than objects further away. The relative speed of the objects' movement provides a powerful cue to their relative distance. The direction of their perceived movement also changes with fixation: objects further than the fixation point move in the same direction as the observer's head, while objects closer than the fixation point move in the opposite direction to the observer's head. So points closer than the point of fixation move in the opposite direction to the observer's movement, and as distance (in depth) from the point of fixation increases, image movement speed increases. (A small numerical sketch of this geometry follows at the end of these depth notes.)

Cue combination: No one cue dominates; cues complement and compensate for each other and provide different types of information based on different evidence. The more cues, the better the impression of depth. Palmer, S. (1999), Vision Science, classifies cues by the kind of depth information they provide: absolute + quantitative = absolute egocentric depth; relative + quantitative = relative depth; relative + qualitative = ordinal depth, or 3-D shape/slant and curvature. Occlusion, for example, provides only ordinal (relative, qualitative) information.
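A small numerical sketch of the motion parallax geometry described above. The approximation used (angular speed ≈ v·(1/d − 1/f) for a point viewed roughly side-on while fixating at distance f) is a standard piece of geometry rather than something given in the lecture, and the speeds and distances are invented.

```python
import math

def parallax_deg_per_s(observer_speed, distance, fixation_distance):
    """Approximate angular velocity of a point viewed side-on while the head
    translates at observer_speed and the eyes track a point at fixation_distance.
    Positive = moves opposite to the head, negative = moves with the head."""
    return math.degrees(observer_speed * (1 / distance - 1 / fixation_distance))

v = 1.0          # head moving sideways at 1 m/s
fixate = 4.0     # fixating an object 4 m away
for d in (1, 2, 4, 8, 16):
    print(d, round(parallax_deg_per_s(v, d, fixate), 1))
# Points nearer than fixation move fast and against the head; the fixated point
# does not move; farther points move with the head, and in both directions the
# retinal speed grows with the depth separation from the fixation point.
```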
Lecture 2 – Object recognition

Detection of a target picture: Humans are able to recognize target objects in pictures quickly and reliably, e.g. CAPTCHA quizzes.

What's the problem?
- Scene segmentation: what is object and what is background?
- Viewing conditions: viewpoint, lighting.
- Partial occlusion: part of the object is obscured, so the visual input is incomplete (e.g. wearing sunglasses partially occludes a face).

Visual agnosias:
- Simultanagnosia (or simultagnosia): a rare neurological disorder characterized by the inability to perceive more than a single object at a time.
- Apperceptive vs associative agnosia: apperceptive agnosia is a failure of recognition due to deficits in the early stages of perceptual processing; associative agnosia is a failure of recognition despite no deficit in perception. Associative agnosia patients can typically draw, match or copy objects, while apperceptive agnosia patients cannot. Associative: can copy, describe and match, but not name, shapes. Apperceptive: cannot copy, describe or match shapes, but can draw from memory and can recognize objects by touch.
- Prosopagnosia (face blindness): an inability to recognise people's faces. Face blindness often affects people from birth and is usually a problem a person has for most or all of their life. It can have a severe impact on everyday life.
- Object agnosia (sometimes selective for animate vs inanimate objects).
- Alexia: an inability to recognize or read written words or letters, typically as a result of brain damage.

What is the aim? "To know what is where by looking" (Marr, 1982). "What" (ventral) and "where" (dorsal) pathways. Shape constancy: the ability to perceive an object as having the same shape despite changes in the retinal image of that object.

Wolfe (2021) – the path of perception.

2-D image-based approaches: naïve template matching; invariant features (Bruce, Green & Georgeson, 2003, Visual Perception: Physiology, Psychology and Ecology, Chapter 9). Soviet tanks all have a "T" in the name and share the invariant feature of a rounded rear turret (O'Kane, Biederman, Cooper & Nystrom, 1997).

Pandemonium model: a method for combining simple visual features; a classic model that can predict likely mistakes (shared features). Problems with this account: it does not code relations between features or the number of features. (A toy sketch of the idea is given at the end of these object recognition notes.)

Recognition by components: a parts-based recognition scheme.

Non-accidental properties: non-accidental image properties (NAPs) are image properties that are invariant over orientation and depth (Lowe, 1985): co-linearity, curvilinearity, symmetry, parallelism, co-termination. Lowe (1984): edges associated with depth and orientation discontinuities are the same in the image and in the world, for all non-accidental viewpoints.

Relations between parts – geon structural descriptions (non-accidental relations):
- Relative size (G1 > G2, G1 = G2, G1 < G2)
- Verticality (G1 above/below/beside G2)
- Centering (end-to-end, end-to-side; centered/not centered)
- Relative size of joined surface (longer/shorter)

Evidence against: viewpoint dependence.
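A toy sketch in the spirit of the Pandemonium model mentioned above: feature "demons" report simple features, letter "demons" respond in proportion to how many of their features are present, and the loudest wins. The letters and feature lists are invented; the point is that shared features predict confusable responses.

```python
# Toy Pandemonium-style letter recogniser: each letter demon counts how many of
# its (invented) defining features the feature demons have detected.
LETTER_FEATURES = {
    "E": {"top_bar", "mid_bar", "bottom_bar", "vertical"},
    "F": {"top_bar", "mid_bar", "vertical"},
    "L": {"bottom_bar", "vertical"},
    "T": {"top_bar", "vertical"},
}

def recognise(detected_features):
    shouts = {letter: len(features & detected_features)
              for letter, features in LETTER_FEATURES.items()}
    return max(shouts, key=shouts.get), shouts

# Present the features of an F: E and F "shout" equally loudly, so the model
# predicts E/F confusions - the shared-feature mistakes mentioned in the notes.
print(recognise({"top_bar", "mid_bar", "vertical"}))
```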
Week 6

Week 7

Week 8: Lecture 1 – Cognition: How do we know the outside world?

Cognition: the acquisition, storage, retrieval and use of knowledge. Includes perception, attention, memory, language, problem solving, imagining and reasoning. Cognitive psychology is the study of the mental processes we use to make sense of our environments.

Brief history of cognitive psychology: looking inward (considering the inner workings of the mind); rejecting inner processes as acceptable objects of study; rejecting the rejection (taking up the challenge to scientifically explore mental processes). Assumptions of the scientific approach:
- Data are observable (and quantifiable)
- The brain is the seat of cognitive processes
- Functional
- Not a tabula rasa

Structuralism: understanding the elements of consciousness and the structure of the mind. Wilhelm Wundt: first psychology lab (late 1870s); application of the scientific method to psychology. Method of introspection: systematically vary a stimulus and observe the effects; describe experience in basic terms (thoughts, images, feelings).

Functionalism: psychologists should examine the processes of the mind rather than its contents. William James (1890): function and pragmatism; the study of consciousness, the "stream of thought"; learning through association; psychology as a natural science.

Behaviourism: emerges in 1913, dominant from the 1920s to the 1950s. John B. Watson and B. F. Skinner. Only observable behaviour can be studied; no role for mental representations; all behaviour is learned through conditioning and explainable as chains of stimulus–response connections. Investigating behaviour requires environmental control and directly observing the organism's responses. Behaviour is shaped by the presence/absence of rewards/punishments: contiguity, frequency and reinforcement.

Counters to behaviourism: reinforcement is not needed for learning to occur; the specific example of human language; information processing post-WWII. Unshaped behaviour: learning is not always constrained by the environment or by S–R relationships. Chomsky's challenge: Chomsky (1959) wrote a "scathing review of Skinner's ideas" – behaviourist principles cannot explain human language abilities (language is creative; language is not constrained by reinforcement contingencies).

The information age: post-WWII information transmission (e.g. the development of computer technology, information theory).

Information processing: Miller (1956): 7 ± 2 as the capacity of short-term memory; mental processes demonstrated information-processing qualities. Broadbent (1958) described a human information-processing system. Neisser (1967), Cognitive Psychology: behaviourism is inadequate because it does not yield any insight into how people think; cognitive psychology is "…all the processes by which the sensory input is transformed, reduced, elaborated, stored, recovered and used."

Neuroimaging: the cognitive neuroscience approach to human cognition: PET, EEG/ERP, magnetic resonance imaging and fMRI, TMS.

Magnetic resonance imaging (MRI): maps brain anatomy/structure. A powerful magnetic field aligns protons, which are then disrupted with an RF pulse; the scanner records the energy signal of the protons returning to alignment.

Functional MRI (fMRI): detects changes in blood flow over time (i.e. during task performance) via the BOLD (blood-oxygen-level dependent) effect. Good spatial and temporal resolution.

Transcranial magnetic stimulation (TMS): a large magnetic pulse creates "virtual lesions", allowing causal inferences; can be used to map cortical activity; limited spatial resolution.

Advantages of functional neuroimaging: can help localise function in healthy controls; has revealed activity in areas previously thought to be uninvolved in cognition (e.g. the cerebellum); can be combined with other methods to provide converging evidence.
So why not rely on imaging alone? Imaging provides descriptive information: how does neural activity result in mental phenomena? Imaging reveals associations, not causality – fMRI has been called "phrenology with magnets". There are also questions about the ecological validity of the tasks, and measurement issues.

Cognitive neuropsychology: aims to relate theories of cognitive function to knowledge of brain structure and function. Uses clinical and normal (or "typically developed") populations. Aims to localize where and when cognitive processes occur within brain structures. Has implications for model confirmation/development and provides constraints for theory.

Assumptions of cognitive neuropsychology:
- Modularity: a large number of fairly independent processing modules; functional specialisation (e.g. an AV entertainment system).
- Neurological specificity (isomorphism): there is a correspondence between the organisation of the mind and the organisation of the brain. Both modularity and specificity lead to the locality assumption.
- Transparency: observable behaviour will indicate which module is dysfunctional.
- Subtractivity: performance reflects the total cognitive system minus the impaired module(s).
- Universality: there are no individual differences in the organisation of cognitive modules.

Associations: an association implies a link or connection between two phenomena – between two cognitive deficits, or between a cognitive deficit and a lesion site. Problems: determining causality (there are nearly always exceptions), and a patient may have damage to more than one process.

Dissociations: Patient A's performance on task X is impaired, but performance on task Y is intact. The implication is that the tasks are handled by different sets of cognitive processes. But it could be argued that tasks X and Y involve one process (e.g. recognition of "something") and that one is simply a much harder task than the other.

Double dissociations: Patient B's performance on task X is intact, but performance on task Y is impaired. The performance of patients A and B together provides a double dissociation: strong evidence that there are cognitive processes involved in task X that are not involved in task Y, and vice versa. This supports modularity.

Lecture 2 – Models of cognition: information processing and neural networks

Learning outcomes: describe and compare the information processing (IP) and neural network (also known as connectionist) models of cognition; provide an example of an IP and of a neural network model of cognition; discuss the advantages and limitations of IP and neural network models.

Information processing approach (computational approach: Marr, 1982): mental processes can be understood as information-processing events (abiding by a set of laws); components cannot be understood in isolation (they are part of a system). Fundamental components of an IP system: a representation is an internal model of the external world; processes are the active parts which transform or operate on information, changing one representation to the next.
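To make the representation/process distinction concrete, here is a minimal sketch of a stage-style information-processing pipeline. The stage names are loosely inspired by the object recognition model discussed next, but the whole thing is illustrative rather than the actual Riddoch & Humphreys model.

```python
# Each process transforms one representation into the next representation.
def extract_features(image):              return {"edges": "...", "colour": "..."}
def build_structural_description(feats):  return {"shape": "cylinder on a handle"}
def access_semantics(shape):              return {"category": "mug", "used_for": "drinking"}
def retrieve_name(meaning):               return "mug"

def recognise(image):
    representation = image
    for process in (extract_features, build_structural_description,
                    access_semantics, retrieve_name):
        representation = process(representation)  # representation -> process -> new representation
    return representation

print(recognise("raw retinal image"))  # 'mug'
```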
Information-processing model of object recognition: a stage (hierarchical) model of object recognition (Riddoch & Humphreys, 2001), specified as a structure of representations and processes and supported by neurophysiological and clinical evidence.

Neurophysiological evidence: feature analysis – analysing certain characteristics of an object in order to identify it; Hubel & Wiesel (1962), single-cell recording in the visual cortex of cats.

Clinical evidence: visual agnosia – characterised by an inability to visually recognize objects despite having intact knowledge of the objects' characteristics. The impairment is perceptual and cognitive, not sensory, and there is no loss of intelligence – you don't have to be "dumb" for this to happen. (A small summary sketch of the disorders described below is given at the end of this subsection.)

Different types of agnosia: agnosics may have difficulty recognizing the geometric features of an object (apperceptive agnosia), or they may be able to perceive the geometric features but not know what the object is used for (associative agnosia).

Apperceptive agnosia: CAN name colours, navigate, distinguish areas of brightness, detect the edges of a shape. CANNOT recognise objects, copy simple shapes, or match shapes. When patients are able to identify objects, they do so based on inferences using colour, size, texture and/or reflectance cues to piece it together; e.g. they fail to discriminate between objects despite clear differences in shape and surface features.

Herpes simplex encephalitis patients: problems with recognising or describing objects, typically natural/living objects. Case study Giulietta (Sartori et al., 1993): can distinguish and match overlapping figures; unable to draw from memory or to match parts to whole objects; difficulty with verbal descriptions of visual form; can make semantic decisions.

Associative agnosia: perception is intact – patients can match and copy objects. The impairment is in the association of the percept with its meaning: perception without meaning. E.g. patient JB: poor at associative matching tasks with objects (but not with words), indicating that the associative knowledge itself is intact and the deficit lies in accessing it from vision.

Anomia: intact visual and semantic knowledge; can match and copy objects, do object decision tasks, describe objects, etc. The impairment is an inability to name things, people and places; anomic patients cannot reliably find and use nouns in conversation, and make frequent use of "thing" and "stuff".
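As a compact summary of the clinical evidence above, a small sketch mapping each disorder to the stage that is thought to be impaired. The stage descriptions are simplified paraphrases of the notes, not clinical definitions.

```python
# Which processing stage is impaired in each disorder, and what remains intact,
# summarised (in simplified form) from the notes above.
DISORDERS = {
    "apperceptive agnosia": {
        "impaired": "building a stable percept / shape description",
        "intact": ["colour naming", "brightness", "drawing from memory", "recognition by touch"],
    },
    "associative agnosia": {
        "impaired": "linking the percept to stored meaning",
        "intact": ["copying", "matching shapes", "describing what is seen"],
    },
    "anomia": {
        "impaired": "retrieving the name",
        "intact": ["copying", "matching", "object decision", "describing objects"],
    },
}

for disorder, profile in DISORDERS.items():
    print(f"{disorder}: impaired = {profile['impaired']}; intact = {', '.join(profile['intact'])}")
```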
The connectionist model: neurally inspired; based on the assumption that cognition depends on the millions of interconnected neurons in the brain ("neural networks"); biologically plausible. The timing issue: processing would take too long if it were done in sequential steps.

Connectionist models and brain structure: units = neurons; activation = firing rate; connections = synapses; connection weight = synaptic strength (excitatory or inhibitory).

Parallel distributed processing (PDP): a large number of small, independent, yet highly interconnected units performing simple functions in parallel. All processing is assumed to be parallel; parts are processed simultaneously with the whole (interactive).

Localized vs distributed representation: concepts are represented either by single units or by distributed patterns of activation. Localized coding: 000000001000 – 12 units can represent 12 different concepts. Distributed coding: 010001100101 – the same 12 units can represent 2^12 = 4096 different concepts. (A minimal sketch of this, and of a single unit's activation, is given at the very end of these notes.)

Localized representation: each unit or node represents a property or proposition (word nodes, sound nodes, feature nodes); e.g. the Jets and Sharks example (West Side Story), McClelland (1981).

Distributed representation: representations are stored as a pattern of activation across a set of units. Two classes of distributed representation: units may represent conceptual primitives, or units may have no meaning as individual elements.

Advantages of distributed representations:
- Economy: provides high information capacity over few units.
- Generalisation: capitalises on similarity when dealing with new stimuli (general and specific information).
- Learning: can explain how the "adult" system came about.
- Graceful degradation: if the system is damaged it does not collapse completely, but shows impairment in function directly related to the magnitude of the damage.

Disadvantages of distributed representations:
- It can be difficult to decipher exactly what is going on in the network.
- The resemblance to the brain is fairly superficial (e.g. there are many different types of neurons).
- Fails to capture the full scope of cognitive phenomena (e.g. emotion, social dimensions).

Week 9
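Returning to the localized vs distributed coding comparison in the connectionist notes above, a minimal sketch of the capacity difference over the same 12 units, plus a one-line "unit" whose activation is a weighted sum of its inputs (the weights and inputs are invented numbers).

```python
# Capacity: 12 units used one-at-a-time (localized) vs as a binary pattern (distributed).
units = 12
print(units)        # localized coding: 12 distinct concepts
print(2 ** units)   # distributed coding: 2^12 = 4096 distinct patterns

# A single connectionist unit: activation is the weighted sum of its inputs, with
# positive weights playing the role of excitatory synapses and negative weights
# inhibitory ones.
def unit_activation(inputs, weights):
    return sum(i * w for i, w in zip(inputs, weights))

print(unit_activation([1.0, 0.0, 1.0], [0.8, 0.5, -0.3]))  # 0.5
```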