EDUC 5233 - Educational Psychology II

Text: Sousa, D. A. (Ed.). (2010). Mind, brain, & education: Neuroscience implications for the classroom.

Table of Contents:
Chpt. One: How science met pedagogy
Chpt. Two: Neuroimaging tools and the evolution of educational neuroscience
Chpt. Three: The current impact of neuroscience on teaching and learning
Chpt. Four: The role of emotion and skilled intuition in learning
Chpt. Five: The speaking brain
Chpt. Six: The reading brain
Chpt. Seven: Constructing a reading brain
Chpt. Eight: The mathematical brain
Chpt. Nine: The calculating brain
Chpt. Ten: The computing brain
Chpt. Eleven: The creative-artistic brain
Chpt. Twelve: The future of educational neuroscience

Course Topics

Introduction - Educational neuroscience is a legitimate scientific area of study that overlaps psychology, neuroscience, and pedagogy. [See diagram – p. 2] [See Fig. 1.2 and Fig. 1.3, Diagrams of the Brain]

Chpt 1 – How Science Met Pedagogy
A. Technology used to study the living brain:
i. CAT scan – 1970s
ii. MRI – 1980s
iii. PET scan – 1970s
iv. fMRI – 1990s
B. Professional Development
i. Learning styles – Dunn & Dunn, 1970s
ii. Multiple intelligences – Gardner, 1980s
iii. Triarchic theory of intelligence – Sternberg, 1980s
iv. Emotional intelligence – Goleman, 1990s
v. Neurogenesis – Kempermann & Gage, 2000s
vi. Neuroplasticity – Shaywitz, 2000s; Simos et al., 2000s
vii. Memory levels and learning – Sousa, 2000s

Chpt 2 - Neuroimaging – Evolution of Educational Neuroscience
A. We now understand that the executive functions of the brain 'connect' activities among the various areas that deal with specific activities.
B. We now realize that at an early age (6-10 months), during language development, phoneme discrimination becomes more language-specific (e.g., English vs. Mandarin). This affects a child's capacity to learn a second language. NOTE: It is an interesting finding that human social interactions affect phoneme discrimination differently than audio-video recordings do.
C.
Although most of the neural networks are common to all people, their efficiency varies, partly due to genetic variations. The expression of these genetic variations is influenced by experience. There is evidence that aspects of the culture in which children are raised can influence the way genes shape neural networks – ultimately influencing child behaviour. Example: The major development of the executive function occurs between four and seven years of age. Training in conflict management during that period produced better conflict resolution skills than other training techniques did. Similarly, working-memory training tasks and meditation improved students' attention in the classrooms where they were provided. This means that this type of attention training could benefit students who have poorer initial efficiency [i.e., some forms of attention can be taught].
D. Studies in early education show that, with practice, the connectivity between brain areas is strengthened and tasks can be carried out more efficiently.

Activity: Choose one of these statements and examine its potential impact on the teaching-learning process in your classroom.

Chpt 3 - Impact of Neuroscience on Teaching and Learning
A. Proper learning behaviour is no longer defined by students sitting quietly, doing exactly what they are told without question or discussion, and reporting back memorized facts on tests. The work of Vygotsky on the zone of proximal development (ZPD), the work of Krashen on reducing the negative effects of stress on learning, and the practice of individualizing instruction are supported by our current understanding of how the brain operates during learning experiences.
B. We can learn a lot about motivation, intrinsic rewards, and the ZPD from popular computer games.
C. Krashen's 'affective filter' helps us understand the neurological impact of stress and emotion on brain functions during the learning process.
D.
We cannot learn anything that is not recognized by our brains. The role of the reticular activating system (RAS) is the basis upon which all our lessons should be planned.
E. Dopamine is a learning-friendly neurotransmitter. It promotes motivation, enhances memory, and provides focus, as well as making us feel good. Dopamine production can be activated by certain environmental influences and teaching strategies.
F. A dopamine 'drop' occurs when a student experiences the negative emotions related to making a mistake. Effective and frequent formative assessments can reduce the fear of making mistakes.
G. Neuroplasticity and pattern-based memory provide us with the basis for choosing effective teaching strategies. Understanding these two concepts is fundamental when planning lessons that will enhance student learning.
H. Intelligence is not a fixed capacity – it can be increased by making our brains' neural networks stronger, more efficient, accessible, and durable. Teachers who collaborate in 'learning communities' can help students become more intelligent.

Activity: Choose one of these statements and examine its potential impact on the teaching-learning process in your classroom.

Chpt 4 - Role of Emotion and Skilled Intuition in Learning
A. The message from social and affective neuroscience is clear: no longer can we think of learning as separate from or disrupted by emotion… building academic knowledge involves integrating emotion and cognition in social context.
B. Learners' emotional reactions to the outcomes of their efforts consciously or nonconsciously shape their future behaviour. Efficient learners therefore build useful and relevant intuitions that guide their thinking and decision making. These intuitions are not randomly generated; they are shaped and organized by experience, and are specific and relevant to the particular context in which they were learned.
C.
Relevant intuitions and emotional learning are enhanced when teachers foster an emotional connection to the material students are learning. One way is to offer students a choice of how they will learn the material (writing/performing a play, doing a research report, designing a model). Another is to assign open-ended problems that create space for emotional reactions.
D. Intuition can be understood as the incorporation of nonconscious emotional signals into knowledge acquisition. Building curricular opportunities for students to develop skilled intuitions is therefore a meta-learning process.
E. We must actively manage the social and emotional climate of the classroom. Students will allow themselves to experience failure only if they can do so within an atmosphere of trust and respect.
F. Critical thinking requires students to use intuition and emotional signals to know how, when, and why to use the new knowledge they have acquired.

Chpt 5 - The Speaking Brain

Much of what we believe to be true about how the brain enables human speech was discovered in the mid-1800s using negative reasoning (i.e., if a part of the brain was injured and the patient lost an ability or function, then that part of the brain was held responsible for that function).

Pierre Broca – 1865: Broca's area enables us to speak.
Carl Wernicke – 1875: Wernicke's area enables us to understand language.

Also on the left side of the brain we find the arcuate fasciculus, a large bundle of nerves that connects Broca's and Wernicke's areas and makes it possible for us to communicate using language. The direct and synchronized connection between these two areas makes rapid back-and-forth conversation possible. Broca's area also seems to be involved in some semantic and working memory processes – providing coordination and integration of neural information from other language-processing areas of the brain (see Fig. 1).

Figure 1
– Process Functions and Locations of the Brain

Brain Area | Process
Right middle and superior temporal | Understanding semantics
Bilateral dorsolateral prefrontal | Monitoring coherence
Left inferior frontal-left anterior temporal | Integrating text
Bilateral medial frontal/posterior right temporal/parietal | Interpreting the perspective of the agent or actor
Left dominant, bilateral intraparietal sulcus | Imaging spatial information
Prefrontal cortex, parietal lobes | Working memory – holding language while other processes are performed
Medial temporal lobe, prefrontal regions, parietal lobe | Episodic memory – recalling an experience
Temporal and frontal lobes | Word processing & grammatical processing

Lateral dominance of the brain's left hemisphere for language processing has been supported by modern structural and functional imaging studies. For most people the left side of the brain controls language (96% of right-handed and 76% of left-handed people).

The knowledge of and competence for human language is acquired through various means and modality types. Linguists regard speaking, signing, and language comprehension as primary faculties of language, i.e., innate or inherent and biologically determined, whereas they regard reading and writing as secondary abilities. Indeed, the native or first language (L1) is acquired during the first years of life through such primary faculties while children are rapidly expanding their linguistic knowledge. In contrast, reading and writing are learned with much conscious effort and repetition, usually at school.

Speech in infants develops from babbling at around 6 to 8 months of age, to the one-word stage at 10 to 12 months, and then to the two-word stage around 2 years. Note that sign systems are spontaneously acquired by both deaf and hearing infants in a similar developmental course. Speech perception and even grammatical knowledge develop much earlier, within the first months after birth.
(Sakai, 2005)1

At school age, typically developing children were assumed to have a fully developed spoken-language system that could serve as the basis for learning to read and write. That said, speech and language processing are only part of the cognitive demands placed on the brain when children are in school. If cognitive resources are being spent on other competing processes, children may not have optimal capacity to understand and produce spoken language (e.g., listening to the teacher and making notes at the same time, or processing feelings of fear or sadness while being expected to speak).

Language and the Right Hemisphere

Although the left hemisphere is the dominant one for language, the right hemisphere is also involved. It has long been seen as responsible for understanding and producing prosody, the intonational and emotional aspects of spoken language. While prosodic elements contribute essential information to verbal communication, they are not considered to be language in the same sense as the phonological (related to the sounds of speech), semantic (related to the meaning of words), and syntactical (related to the grammatical arrangement of words) elements of language.

The right hemisphere is also involved in interpreting humor and metaphors, making inferences, understanding sarcasm or irony, and comprehending discourse. It may also assist with more demanding semantic tasks, such as when words are only distantly related, are used to imply a meaning other than the literal one, or have two very different meanings. This makes the right hemisphere essential when we draw inferences from our experiences. While language is predominantly a left hemisphere function, both hemispheres are necessary for a fully functioning, flexible spoken-language system.

Our ability to examine the brain using fMRI technology is also challenging previously held beliefs about how speech and language develop.

1 Sakai, K.
L. (2005). Language acquisition and brain development. Science, 310 (4 November 2005). Downloaded on Dec 22, 2014, from www.sciencemag.org

Speech and Language Development

Behavioural information gathered in the past led us to believe that, by school age, children were able to clearly produce most speech sounds, had mastered the basic grammatical aspects of spoken language, and had acquired a sufficient lexicon to talk about various concrete and abstract experiences. Having learned language, they were now ready to use language as a vehicle to expand their learning across the curriculum.

We now know that behavioural measures are not exact measures of brain function, and may even lead to incorrect conclusions about it. We know that not all children come to school with fully developed speech and language, ready to use these skills for reading, writing, and oral expression. Even older children vary in their capacity to understand and use higher-order or more abstract language.

During the school years children are exposed to experiences that change the brain's structures and functions. The brain develops in response to the environmental input it receives. Although genetics plays an important role, there are differences in the rate at which brain processes develop. We also know that even when children behaviourally perform in the same way as adults, they have different neural patterns that may reflect the use of different cognitive strategies. Children's brains work harder than adults' brains to accomplish the same behavioural result. This is especially true of boys, because boys appear not to convert sensory information to language as easily as girls do. Not all performance is gender based, though; language skills are determined by a child's genetic make-up and by the amount of time and effort spent on practice and development of those skills.
Not all children are capable of the same level of verbal expression; some are verbally fluent while others struggle to put their thoughts into words. Differences in language skills are not related to innate intelligence or motivation; rather, they are related to individual differences in brain development. Being slower does not necessarily mean that a child knows or understands less; it simply means that the child needs more time to express what he or she knows.

Bilingualism

Mastering two languages has traditionally been seen as an age-related issue. Behavioural studies showed that young children who were exposed to two languages before the age of seven developed proficiency in both. Brain imaging studies now show that bilingual adults who were exposed to two languages before age five actually process their languages in overlapping brain areas – the same areas that monolingual children use. Bilinguals who learn their second language later appear to use different strategies. Their brains function in a more bilateral manner, with more distributed activation in areas of the frontal lobe thought to support working memory and inhibitory processes. This pattern of activation is thought to be consistent with greater cognitive effort and less automatic processing. It seems that the most efficient use of neural resources occurs when language learning happens early.

Neuroplasticity

Previous beliefs about the brain being fully formed at birth have been proven incorrect. Both the structures and the functions of brain cells change during one's life. This plasticity is greatest in the earliest years of life. Even in adolescence, language networks interact with other cortical resources, such as memory, and thereby change the brain's structures and functions. New neurons are created in the hippocampus, a process that affects the formation of memories.
The frontal lobe continues to develop through early adulthood, making it possible for adolescents to develop metacognitive and metalinguistic skills. This enables adolescents to think more abstractly and to communicate and think more flexibly and creatively. Adolescence is when sophisticated forms of communication and language use can develop.

Research Findings to Consider
- The acquisition and refinement of speech and language is ongoing until early adulthood. School-age children do not have fully developed language systems.
- Children are less efficient language processors than adults.
- Gender differences in language processing have been observed. Boys may have more difficulty with verbal expression, and how information is presented may make a greater difference in their ability to learn.
- The brain learns a second language most easily before school age.
- During the school years, children's brains continue to mature and develop with both age and new experiences with language.
- Children may not be able to coordinate 'listening to language' and 'writing language'.
- Simply because a child can behaviourally perform a task does not mean that the brain is efficiently performing that task.
- Language skills can vary widely in groups of same-age children.
- Spoken language is not the only means to determine whether a child understands a concept.
- Language is not simply a tool that children apply to the learning process. It is a growing, changing skill.

Chpt 6 - The Reading Brain

In today's world reading is our most powerful portal to knowledge. Formal education reinforces this, first by focusing on children's need to 'learn to read' and then upon their need to use 'reading to learn'. In many ways the learning-to-read/reading-to-learn link sets the stage for most measures of children's success for the remainder of their lives. Unlike hearing, speaking, and basic motor skills, reading is a relatively recent cultural invention.
Most historians and archeologists date the origins of written records to about 4000 BCE, with the creation of syllabaries around 2600 BCE and, finally, the first alphabets around 1800 BCE. It wasn't until Socrates (469-399 BCE) begrudgingly accepted the written word as a necessary supplement to the oral tradition of teaching that learning from texts became well established. Even then he argued 'that a dependence on the technology of the alphabet will alter a person's mind, and not for the better. By substituting outer symbols for inner memories, writing threatens to make us shallower thinkers, preventing us from achieving the intellectual depth that leads to wisdom and true happiness.' Nicholas Carr (2010)2

As we contemplate Socrates' prediction, and Carr's fear about how the internet exacerbates our dependence upon 'reading to learn', we can take some solace in the fact that neuroscience offers us better ways to understand the impact that the alphabet has had on altering our brains. The fact that the human brain did not evolve to read may explain why reading is not a naturally acquired skill and therefore must be taught explicitly and formally in schools. Once again we see that the combination of behavioural and neurological studies provides us with knowledge that guides our teaching.

Two Routes for Reading

We begin this task with an understanding that reading and writing are extensions of language, which is hard-wired in our brains. We know that language is mainly a left hemisphere function that is guided by right hemisphere support. To this we add our understanding of how the brain uses short-term memory to convert written symbols into the sounds and meanings that enable communication and learning. Reading begins with the visual input from the 'page', which triggers the left posterior portion of the brain known as the visual word form area.
This area relates the visual (occipital lobe) and language (temporal lobe) neural systems that develop even before birth and well before learning to read. It does so by transforming visual input into letters and words. These letters and words are then transformed into meaning using two separate processes (routes).

The phonological route decodes a string of letters and translates it into a sound pattern that may match a meaningful speech pattern. This route serves words that are regular, i.e., that follow the typical correspondence between graphemes and phonemes; when a word is unfamiliar (rare or novel), we also use the phonological route to try to sound it out. This is a relatively slow, systematic route that relies on the left posterior temporoparietal brain regions.

The second process is the direct route, which bypasses the sound-pattern stage and attempts to match the printed word directly with its meaning. The words we read using the direct route are 'sight words', ones that we encounter so frequently and know so well that we can jump from sight to meaning. We also use this route for irregular words that we have memorized because regular pronunciation rules do not apply to them. This route, which lies in the left posterior occipitotemporal region, tends to be much faster – in typical adult readers it responds to a word in about 180 milliseconds, but only when the word is in a written language that one has learned.

The typical healthy reader is thought to use both routes constantly and interactively. As we learn to read, it seems logical that we would use the direct route more often as words become part of our 'sight word' complement. Thus the selective response of the visual word form area is education dependent. This could explain why, for healthy children, reading becomes easier with more practice.

2 Carr, N. (2010). The shallows: What the Internet is doing to our brains.
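The two-routes account above can be sketched as a toy program. This is only an illustrative model, not an implementation of anything in the text: the sight-word lexicon and the letter-sound rules below are invented for the example. The fast direct route is tried first (a whole-word lookup); if the word is not a known sight word, the slower phonological route decodes it grapheme by grapheme.

```python
# Toy model of the dual-route account of word reading.
# The lexicon and grapheme-phoneme rules are invented, illustrative data.

# Direct route: whole printed words mapped straight to meaning (sight words),
# including irregular words whose pronunciation rules fail (e.g., 'yacht').
SIGHT_WORDS = {"the": "definite article", "yacht": "sailing vessel"}

# Phonological route: regular grapheme-to-phoneme correspondences.
GRAPHEME_TO_PHONEME = {"c": "k", "a": "a", "t": "t", "d": "d", "o": "o", "g": "g"}

def read_word(word):
    """Return (route_used, result) for a printed word."""
    if word in SIGHT_WORDS:
        # Fast direct route: sight to meaning, no sounding out.
        return ("direct", SIGHT_WORDS[word])
    # Slower phonological route: decode letter by letter, then sound out.
    phonemes = [GRAPHEME_TO_PHONEME.get(letter, "?") for letter in word]
    return ("phonological", "-".join(phonemes))

print(read_word("yacht"))  # irregular but memorized -> direct route
print(read_word("cat"))    # regular, decodable -> phonological route
```

In this sketch, practice corresponds to moving a word into SIGHT_WORDS, which mirrors the chapter's point that frequently encountered words migrate to the faster direct route.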
Timeline of Reading in the Brain

It is important to note that visual recognition (seeing) differs based upon what we are looking at. We are genetically predisposed to perceive faces and places much more than words. This means that the areas that enable us to recognize faces and places are located in relatively fixed places in the brain, while the word form area is less precisely located. This supports the belief that specialization for perceiving faces and places is genetically guided, whereas specialization for letters and words is not. It is possible that human evolution has caused areas within the neocortex that are devoted to the perception of objects to become specialized for recognizing faces and places, and that the word form area is still evolving as a result of neuronal recycling. If so, we may not have to teach reading in a few thousand more years!

The process of making sense of a visual stimulus is sometimes referred to as the N1 response3. This process occurs whenever a person sees a face, place, or word. In typical adult readers the response is left-lateralized for words and right-lateralized for faces. When an adult sees a word, the time between perceiving the word and recognizing it is typically 400-500 msecs. For skilled adult readers this response time drops to 150-200 msecs. During these periods higher-order cognitive processing also occurs, processing that extracts sound and meaning from the printed word. Interestingly, these higher-order processes do not wait for visual analysis to be completed. Tihs may be why we can raed wrods that have all the crroect letrtes but in the worng odrer.

3 The N1 response is an event-related potential (ERP) used in cognitive neuroscience to study the physiological responses to a sensory stimulus as the brain processes information. The N means the wave response is negative and the 1 means it occurs about 100 msec after the stimulation.
Cross-Linguistic Differences

Orthographic transparency is one way of describing a language. It is a measure of how consistently a single letter or group of letters (grapheme) represents a single sound (phoneme). Italian and Spanish have nearly a 1:1 relationship, which means the spelling of a word enables its correct pronunciation: if you can spell a word, you can pronounce it, and vice versa. Languages like English have many exceptions and therefore have poor transparency. In English there are nearly thirty pronunciations for every grapheme!

Some languages, such as Japanese and Cherokee, use syllables rather than letters to form written words [e.g., in Japanese, san means three and dan means degree, so sandan means third degree. Likewise the term for foreigner, gaijin, is a combination of gai (outside) and jin (person)]. Other languages, like Chinese, represent words with symbols and are referred to as logographic systems. These languages contain many symbols, and many can be interpreted in multiple ways. For example, Chinese terms for a foreigner include 老外, from 老 (lǎo, "old, always") + 外 (wài, "outlander, foreign"), and 鬼佬4, which can mean 'foreign devil' (an insult) or 'ghost man' (alluding to the white-skinned complexion of many foreign visitors).

Development of the Reading Brain

As the reading brain develops it changes by:
1. increasing its specialization of the left hemisphere,
2. decreasing its use of the left anterior area, and
3. increasing its use of the left posterior area.

The increased specialization of the left hemisphere is likely due to the brain's increased ability to recognize a wide variety of symbols as the same letter. The multiple visuospatial patterns of the various ways of representing each letter of the alphabet are transformed into the twenty-six categories of the English alphabet. The shift from the use of the anterior to the posterior may be due to the shift from phonological decoding to automatic word recognition.
This means that the working memory processes of the left anterior area are no longer required, because the direct route is capable of processing a word quickly. As the posterior region matures it supports fluent, automatic reading.

4 Pronounced kwai lo

At the same time that these changes occur, the brain increases its capacity to respond to printed stimuli as compared to other visual inputs. As non-reading kindergartners learn to read, the N1 word response shifts from zero toward the typical 150-200 msecs of a high-functioning adult reader. The larger the difference between the N1 word response and the N1 symbol response, the faster children read. This integration between print and sound continues to develop for many years.

Reading Difficulty in the Brain – Developmental Dyslexia

Dyslexia, the most commonly identified reading disability, is defined as difficulty in reading or spelling words accurately and/or fluently despite average or higher-than-average cognitive ability. It is associated with a weakness in phonological processing skills. It is a heterogeneous disorder that may result from a variety of specific underlying difficulties that vary from child to child, including specific deficits in automaticity or in auditory and visual perception.

Dyslexic children consistently exhibit decreased or absent activation in the left posterior brain regions when performing tasks that require phonological or orthographic processing, compared to both reading-matched and age-matched readers. Readers with dyslexia often exhibit increased activation in frontal brain regions and right-hemisphere posterior regions. This may reflect compensation for weaker posterior reading networks. Frontal activations for these readers do not differ from those of reading-matched children. Adolescents with dyslexia who improve or compensate appear to do so by exploiting this atypical use of frontal-lobe regions rather than by developing the left posterior reading systems.
Children with dyslexia show less word-specific response to print (N1) and less left hemisphere lateralization than nondyslexic readers. They do not respond to words differently than to symbols.

Structural Brain Differences that Reflect Functional Brain Differences

The brain consists of two types of matter – gray matter and white matter. Gray matter is composed of neuronal cell bodies, while white matter is composed of myelinated axon tracts. Readers with dyslexia show less gray matter volume in several regions associated with reading. Even when compared to younger reading-matched children, readers with dyslexia show less gray matter in the left hemisphere temporoparietal region in which they show reduced activation. Thus there is some correspondence between functional and structural brain differences in dyslexia.

Better-organized white matter in the left posterior region is associated with better reading skill among healthy individuals. White matter tracts in the left frontal regions also reflect weaker connections in readers with dyslexia. These individuals also have greater-than-normal white matter connectivity in the corpus callosum, which connects regions of the left and right hemispheres. These findings suggest that, in dyslexia, the white matter pathways supporting reading project too weakly within the primary reading pathways of the linguistic left hemisphere but too strongly between hemispheres (which may reflect an atypical reliance on right hemisphere regions for reading).

How Intervention Affects the Struggling Reader's Brain

Reading interventions can change the structure and function of the brain. Due to brain plasticity, left-hemisphere brain regions that are typically underactivated in dyslexia exhibit a gain in activation after effective intervention (the Lindamood-Bell program for adults, Fast ForWord for children).
Children with dyslexia who had underactivated left temporoparietal and frontal brain regions showed gains in activation in those regions after effective remediation. The Lindamood Phoneme Sequencing program and Phono-Graphix interventions resulted in a shift from greater right hemisphere activation to greater left hemisphere activation, and in a normalization of white matter structure. Effective interventions with dyslexic readers can also strengthen activation in brain regions not typically engaged in reading. Effective interventions may therefore act in two ways – normalization of the brain and brain compensation. These effects can be long lasting.

Prevention or early treatment of dyslexia yields better outcomes than later treatment. Neuroscience methods have shown surprising strength in predicting future reading difficulty. A near-term goal could be the prediction and prevention of dyslexia. In general, brain imaging combined with familial information may facilitate preventive interventions that allow more children to succeed at learning to read.

Chpt 7 - Constructing a Reading Brain

Expectations of neuroscience-based, easy-to-follow recipes for classroom practice are unrealistic, but in combination with what we know from cognitive, developmental, and other learning sciences, neuroscience can provide a new perspective on education.

There is no single part of the brain that 'does' reading. The brain is simply not designed for reading. As we learn to read, we are borrowing from and building upon multiple neural systems, each with its own specialization, and actually constructing a brain that can read. Learning to read involves the development of several constituent systems and then connecting those systems so that they work in concert automatically and fluently. This process takes years, beginning before formal schooling and extending throughout the school years.
The key systems required to read include the orthographic system, which enables the visual processing of text; the phonological system, used to process the sounds of language; and the semantic system, used to connect meaning to words.

Visual Processing: Orthography

The first task for a reader is to make sense of the marks on a page. This begins with recognizing the orthographic symbols of the language (for us, the Roman alphabet) – a task that is difficult because the symbols are arbitrary, abstract, and sometimes easily confused. Distinguishing between letters requires a highly sensitive visual system. The occipital lobes in the visual cortex enable us to identify the lines, curves, angles, terminals, and junctions that form written symbols. These visual elements are combined to create the letters and words that we see on the page. In order to read we must make meaning of the patterns that we see:

ɡet – English small letters for a verb
RAY – capital letters for your instructor's first name
away – small letters for a place that is at a distance
爱 – the Chinese symbol for Ài (the English word 'love')
我, 现 – Chinese symbols for Wǒ and Xiàn(zài)5 (the English words 'me' and 'now')

As we 'read' each of these patterns, the pattern that we 'see' is processed by two separate regions of the brain. One pathway (the ventral pathway) determines 'what' is being seen and transfers the information to the temporal lobes. The ventral pathway is specialized for processing colour, form, texture, patterns, and fine detail. Learning to read appears to involve adapting and specializing the ventral visual stream through practice with printed words. The other (the dorsal pathway) determines 'where' we are seeing the symbol, which allows us to place what we see in some order (i.e., to place the parts of a symbol, letter, or word in order from left to right or top to bottom).

5 Pronounced who wa and shia si
The dorsal pathway enables a reader to move across the page in a complicated pattern of fixations (periods of relative stillness) and saccades (brief jumps across the text). The dorsal stream is involved in controlling eye movements, and dorsal stream deficits have been associated with reading difficulties; activation of the dorsal stream is much reduced in adults with dyslexia. Given that English is read from left to right and Chinese from top to bottom, we can see that both pathways are essential for making meaning of the symbols. Building a brain that reads involves developing these two pathways (streams). Therefore, children who are learning how to read are truly changing their brains. The fusiform gyrus is a structure that runs along the base of the brain and contains a sub-region that has become known as the word form area. This area is activated by stimuli that are wordlike (i.e., for us, stimuli that follow the rules of the English language). If this area is removed surgically, the individual loses the ability to read. The fusiform gyrus also enables us to recognize faces, a skill that far precedes reading skills. The current belief is that a portion of this part of the brain has become specialized for word processing. This specialization progresses over time and with experience with words: it is absent in kindergartners, present in second graders, and continues to develop through adolescence. Its development is correlated with the recognition of familiar letter patterns (decoding automaticity), which is an essential skill in reading. Adults who are not expert readers (particularly those with dyslexia) show no activation of this area during reading.

Auditory Processing: Phonology

Phonology involves the sound system of a language. Phonological processing systems related to rhymes develop early. In fact, the region associated with speech processing, the superior temporal sulcus, is sensitive to speech very early in the course of typical development.
This area is used for both spoken and written language processing, as shown by studies in which silent reading activates this region. Activities that emphasize the sound structure of language help to develop phonological awareness. This is important because phonological awareness appears to be positively correlated with reading skill throughout school. This may be explained by the 'Matthew Effect', which suggests that lack of ability causes lack of performance and participation, which in turn leads to lower performance compared to those who participate at higher levels. The connection between speech and reading is supported by the finding that learning to read, by changing the phonological processing systems, changes the way speech is analyzed and phonemes are remembered. As a result of reading, whole-word sounds are automatically broken up into their sound constituents, thereby changing language processing. The bonus is that by keeping track of phoneme constituents, a reader can remember novel word sounds more accurately.

Connectivity: Mapping Orthography to Phonology

The two best predictors of reading achievement in early elementary school are letter identification (knowing graphemes) and phonological awareness (knowing phonemes). These two sets of knowledge map onto one another. Learning these mappings is a skill we call decoding, a skill that is required for a child to learn to read. Even this step is difficult when learning to read in English because the English language has a 'deep orthography': the mapping of grapheme to phoneme is not one-to-one as it is in some languages. This is also why it is difficult to learn how to spell in English. In fact, studies show that there are two areas of the brain responsible for this task. For easy, familiar orthographic-to-phonological mappings the posterior regions are activated, while more difficult mappings rely on more anterior regions.
Other differences in mapping indicate that adults and older children activate regions associated with automaticity while younger children do not. Furthermore, children who lack activation in the posterior regions exhibit reading disability. Fortunately, this activation can be increased through the use of phonologically based interventions that focus on letter-sound mappings, and this improves reading ability. It is important to realize that the activation of individual regions is not enough to learn to read well. These regions are part of a more complex, interconnected system, and the dynamics of the system may either enable us to read well or leave us struggling with dyslexia; uncoordinated processing may be characteristic of poor reading.

Meaning Processing: Semantics

Studies of family environments provide us with valuable information for understanding and assisting poor readers. By age three, 86 to 98 percent of a child's vocabulary consists of words in his or her caregivers' vocabulary. Children who grow up in low-income households in which they are not spoken to extensively and are not exposed to a variety of words begin school with many fewer words than their peers from higher-income households. This may be related to the finding that children in professional families hear eleven million words per year while those in families receiving welfare hear only three million. Alarmingly, the vocabulary gap created by this difference remained years later (the Matthew Effect?). The development of speech results from hearing words and developing a spoken vocabulary, which we refer to as a lexicon. This lexicon enables children to make meaning not only when speaking but also when reading. If a word's graphemes activate a phoneme that is common in the spoken vocabulary, it triggers the activation of other words that the brain has connected to that phoneme. When this happens, the reader goes from decoding a word to developing a robust meaning of the word in the context of other words.
The capacity to relate written words to spoken words, as well as to other written words, depends upon the same brain regions. Since the spoken and written lexicons are interconnected, a child expands both speaking and reading at the same time. In fact, by the time a child is in third grade, there is a shift from spoken to written language as the source of most new entries in his or her lexicon. Semantics is the development of the meaning of words or phrases in our vocabulary. Understanding semantics is therefore very valuable when teaching a child to read. It appears as if the lexical information that enables semantics is stored in a distributed fashion throughout the brain, and the locations depend upon a variety of elements associated with words (action-oriented, kinesthetic, tactile, visual, auditory, orthographic, and phonological). When one of these elements is activated, it can cause the related elements to activate as well. A sound or scent or touch can activate several related aspects of an experience during which it was perceived. The same connections can be made when reading a word, thus providing a reader with a semantic context for the word. This belief about how lexical information is stored supports current beliefs about the value of multi-sensory instruction. Although it is distributed throughout the brain, vocabulary knowledge appears to be organized into semantic networks: words that are conceptually related to one another are linked.

Chpt 8 - The Mathematical Brain

Chpt 9 - The Calculating Brain

Chpt 10 - The Computing Brain

Chpt 11 - The Creative-Artistic Brain

Chpt 12 - The Future of Educational Neuroscience