2. (20 points) The following are examples of different types of human memory. For each pair in the lists A-D, compare the two types of memory by mentioning, for example, their differences in function and structure.
A. Sensory memory vs. working memory
B. Iconic memory vs. echoic memory
C. Short-term memory vs. long-term memory
D. Explicit memory vs. implicit memory
ANSWER by Umer Fareed
Sensory Memory: The sensory memory contains an exact copy of what a person sees (visual) or hears
(auditory). It lasts only a few seconds; some estimates put its duration at only about 300 milliseconds. It has unlimited capacity. Its structure is assumed to be built on stimulus persistence (something resembling the physical stimulus remains present even after the stimulus is no longer there) and information persistence
(information extracted from the presented stimulus after its removal). The ability to look at a stimulus and remember what it looked like after just a second of observation or memorization is an example of sensory memory.
Working Memory: Working memory is defined as a dedicated system that maintains and stores information in the short term, and this system underlies human thought processes. Stored information decays quickly unless actively rehearsed. Current views of working memory involve a central executive and three component processes: the phonological loop, the visuo-spatial sketchpad, and the multimodal episodic buffer. The central executive passes information to these three components. The phonological loop stores auditory information by silently rehearsing sounds or words in a continuous loop; the articulatory process (the "inner voice") continuously "speaks" the words to the phonological store (the "inner ear"). The visuo-spatial sketchpad stores visual and spatial information. It is engaged when performing spatial tasks (such as judging distances) or visual ones (such as counting the windows on a house or imagining images). The episodic buffer is dedicated to linking information across domains to form integrated units of visual, spatial, and verbal information with chronological ordering (e.g., the memory of a story or a movie scene).
Iconic Memory: Iconic memory is a type of sensory memory named by George Sperling in 1960. According to Sperling, it lasts only approximately 250 ms after the offset of a display and has a rapidly decaying behavior.
Iconic memory is thought to be the sensory store for vision. Ulric Neisser (1967) proposed the idea that iconic memory preserves an exact duplicate of the image falling on the retina. It contains all the sensory information available from the retina of the eye. Current concepts of iconic memory have refined Sperling’s original formulation; now iconic memory is viewed as a short-term sensory (visual) buffer, allowing time for sensory information to be recoded in a more permanent, categorical manner. Iconic memories are fragile, decay rapidly, and are unable to be actively maintained.
Echoic Memory: It is the auditory version of sensory memory, believed to be a brief mental echo that continues to sound after an auditory stimulus has been heard. The idea behind echoic memory is that auditory information may persist in the form of an echo that can be attended to after the original stimulus is removed.
According to Neisser (1967), echoic memory lasts for only one or two seconds. In echoic memory, the inner ear converts sounds into a train of nerve impulses that represent the frequency and amplitude of individual acoustic vibrations. Because of this short span, echoic memory is classified as a type of sensory memory: echoic traces are temporary and last only for a brief period of time. For example, when given two different sound tones, listeners were unable to match the two tones after a very short delay (300 milliseconds) but were able to match them correctly when there was no delay between the tones. The effective lifetime of echoic information can be extended if it is rehearsed in the phonological loop, which repeats verbal information in order to keep it in short-term memory.
Short-term Memory: Short-term memory (STM) refers to memory processes that retain information only temporarily, until the information is either forgotten or becomes incorporated into a more stable, potentially permanent long-term store. Estimates of short-term memory capacity vary from about 3 or 4 elements (i.e., words, digits, or letters) to about 9 elements; a commonly cited capacity is 7±2 elements. The information held in short-term memory may be either recently processed sensory input or items recently retrieved from long-term memory. According to Miller (1956), the capacity of short-term memory is limited. The duration of short-term memory is brief (Peterson and Peterson, 1959). Most definitions of short-term memory limit the duration of storage to less than a minute; no more than about 30 seconds, and in some models as little as 2 seconds. Without attention and rehearsal, information is lost rapidly from STM. In order to overcome the limitation of short-term memory,
and retain information for longer, information must be periodically repeated, or rehearsed, either by articulating it out loud or by mentally simulating such articulation. In this way, the information re-enters the short-term store and is retained for a further period. When several elements (such as digits, words, or pictures) are held in short-term memory simultaneously, their representations compete with each other for recall, or degrade each other. Thus, new content gradually pushes out older content, unless the older content is actively protected against interference by rehearsal or by directing attention to it.
Long-term Memory: Long-term memory (LTM) is memory that can last as little as a few days or as long as decades. It differs structurally and functionally from short-term memory. As long-term memory is subject to fading in the natural forgetting process, several recalls/retrievals of a memory may be needed for long-term memories to last for years, depending also on the depth of processing. Individual retrievals can take place at increasing intervals, in accordance with the principle of spaced repetition. This can happen quite naturally through reflection or deliberate recall (a.k.a. recapitulation or recollection), often depending on the perceived importance of the material. The brain stores long-term information by growing additional synapses between neurons. Since the brain has approximately 10^15 synapses, one can argue that the brain has a maximum capacity of about 100 TByte, possibly more if one synapse can store more than 1 bit of information. By no means do humans store that much information. Experiments in the mid-1980s showed that humans can store only 1-2 bits/second in their long-term memory. The cumulative amount of data stored in the brain over a 70-year lifetime is therefore only on the order of 125 MByte. Studies undertaken by Bahrick indicated that long-term memory can indeed retain certain information for almost a lifetime. However, some factors can reduce or even extinguish information completely. For example, childhood amnesia affects the duration of long-term memories: very few people can remember information or events from before the age of 3 or 4.
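The back-of-the-envelope arithmetic behind these capacity figures can be sketched as follows; the constants (synapse count, bits per synapse, retained bits per second, waking hours per day) are the rough assumptions quoted above, not measured values:

```python
# Rough capacity estimates for long-term memory, using the assumptions in the text.

SYNAPSES = 1e15            # approximate number of synapses in the brain
BITS_PER_SYNAPSE = 1       # assume each synapse stores a single bit

# Structural upper bound: every synapse used as one bit of storage.
structural_bytes = SYNAPSES * BITS_PER_SYNAPSE / 8   # 1.25e14 bytes, i.e. ~125 TByte

# Intake estimate: ~1 bit/second retained during waking hours (assume 16 h/day)
# over a 70-year lifetime.
waking_seconds = 70 * 365.25 * 16 * 3600             # ~1.47e9 seconds
intake_bytes = 1 * waking_seconds / 8                # ~1.8e8 bytes, order of 100-200 MByte

print(structural_bytes, intake_bytes)
```

With one bit per synapse the structural bound comes out near 125 TByte, the same order as the 100 TByte quoted; the lifetime-intake estimate lands in the 100-200 MByte range, matching the order of the 125 MByte figure.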
Explicit Memory: Explicit memory is memory with awareness. It is the conscious, intentional recollection of previous experiences and information. We use explicit memory throughout the day, for example when remembering the time of an appointment or recollecting an event from years ago. Remembering a specific driving lesson is an example of explicit memory. Explicit memory depends on conceptually driven, top-down processing, in which a subject reorganizes the data in order to store it. The subject makes associations with previously related stimuli or experiences. The later recall of information is thus greatly influenced by the way in which the information was originally processed. There are two kinds of explicit memory. Episodic memory, also called autobiographical memory, consists of the recollection of singular events in a person's life; it is the memory of life experiences centered on yourself. The second is semantic memory, which consists of all explicit memory that is not autobiographical. Examples of semantic memory are knowledge of historical events and figures, and the ability to recognize friends and acquaintances.
Implicit Memory: It is often referred to as memory without awareness. The key difference with respect to explicit memory is that at the time of test, subjects are unaware that the task they are performing is related to an earlier study phase. Improving your driving skills during a driving lesson is an example of implicit memory. In daily life, people rely on implicit memory every day in the form of procedural memory, the type of memory that allows people to remember how to tie their shoes or ride a bicycle without consciously thinking about these activities. Implicit memory is distinct from explicit memory and exists as its own entity, with its own processes.
3. (20 points) Explain the embedded processes model of immediate memory. How does this model differ from
Baddeley’s working memory? What are the strengths and weaknesses of the model?
ANSWER by Vengertsev Dmitry
Embedded process model of immediate memory:
The embedded processes model consists of a central executive, long-term memory, active memory (a subset of memory in a temporarily heightened state of activation), and the focus of attention, which are represented in
Figure 2. It involves all information accessible for a task: memory in the focus of attention; memory outside the focus but nevertheless temporarily activated; and inactive elements of memory with pertinent retrieval cues.
Active memory is a subset of long-term memory and the focus of attention is a subset of the active memory. The direction of the focus of attention is controlled by the central executive.
In the embedded processes model, some of the necessary information may be in the focus of attention, some may be in an especially active state, ready to enter the focus as needed, and some may simply have the appropriate contextual coding in long-term memory that allows it to be made available quickly.
It differs from Baddeley's working memory in the following respects:
1) It does not divide immediate memory into separate subsystems.
2) It specifically includes a long-term memory contribution: working memory essentially activates long-term memory, so long-term memory plays an important role in short-term memory tasks. Baddeley's model allowed only the phonological loop to play a major role.
The greatest advantage of the embedded processes model, as mentioned before, is that it introduces long-term memory. It provides a demonstration of long-term memory effects, such as those on memory span (the number of items that can be immediately recalled in order).
Another advantage of the embedded processes model is that it predicts that output time will have an effect on memory span.
[Figure 2: The central executive (directs and controls voluntary processing) directs the focus of attention. A stimulus enters through the sensory store into long-term memory; activated memory is a subset of long-term memory, and the focus of attention is a subset of activated memory.]
One of the weaknesses of the embedded processes model is its use of the term activation. To code information, the nervous system in some cases uses a change in the rate of firing, rather than activity versus no activity. Another problem relates to the assumption of deactivation. In memory studies, the idea of using decay in a connectionist network was criticized and then rejected. In memory, information appears to be lost over time, but time itself should not be given a causal role.
4. (20 points) Memory can be viewed as a process rather than a structure. Explain the theory of transfer appropriate processing (TAP). In what aspects is this theory different from the levels of processing view? What kind of data can TAP explain best? Explain also the relationship to the encoding specificity principle of Tulving.
ANSWER by Subhojit Chakladar
Transfer appropriate processing (TAP) states that memory performance is determined not only by the depth of processing but also by the relationship between how information is initially encoded and how it is later retrieved. Performance is best when the processes engaged during encoding match those engaged during retrieval. For example, if learning uses a shallow process (involving relatively low depth of processing of the given information), then a retrieval task that also involves shallow processing gives the best performance. Thus a match between how information enters the system (the brain or memory) and how it leaves the system (how the subject uses the stored information) leads to better performance than a mismatch. No one type of process is good for all tests.
The levels of processing view takes a different position. It states that recall of a stimulus is a function of the depth of processing: the more deeply a given piece of information is analyzed (how it is connected to preexisting memory, the time spent analyzing it, the cognitive effort), the more durable it is in memory. Shallow processing, using just superficial properties of the presented information, leads to a fragile memory trace that is susceptible to rapid decay. Thus, according to levels of processing, any information that was not analyzed deeply will not be remembered well. This is not true in all cases. Though a more detailed analysis is better for remembering many things, there can be deviations. For example, if we see something for a very short time, we may forget it temporarily, but a suitable cue (which may or may not be directly related to it) can elicit the memory of that object, even though it was not analyzed deeply. Clearly, this contradicts the levels of processing view. Transfer appropriate processing, on the other hand, says that memory depends on the match between the encoding and the retrieval mechanism: a match between the two in the above case allows information that was not deeply analyzed to be remembered. Thus the difference between transfer appropriate processing and the levels of processing view is the dependence of memory performance on the match between encoding and retrieval processes: the former states that memory performance depends on this match, while the latter states that it is independent of it.
Memory for rhyming material is best explained by TAP. In a study by Morris et al. (1977), subjects were given semantic and rhyme tasks. Semantic tasks involve deeper analysis of the given information, while rhyming does not. In free-recall tests, memory performance was better after semantic processing, since it involved deeper analysis. In a rhyming recognition test, however, memory was better for those who had engaged in rhyme processing than in semantic processing.
The encoding specificity principle of Tulving takes into account the importance of contextual information in memory. It states that the interaction between training (or encoding/study) and test (or retrieval) decides whether the information will be recalled or not. Similarity of context in both the situations will help the process of recall.
Memory is improved when information available during encoding is also available during retrieval. For example, the encoding specificity principle would predict that recall for information would be better if subjects were tested in the same room they had studied in versus having studied in one room and tested in a different room.
Thus, this principle is similar to TAP in that it supports the view that memory depends both on the way information gets into the brain and on how it comes out (i.e., how it is retrieved).
5. (20 points) Forgetting is an important aspect of human memory. Explain the three main theories of forgetting, i.e. consolidation theory, interference theory, and discrimination theory. What data can best be explained by each theory? What are the important characteristics of the modern view of forgetting?
ANSWER by Hjalmar Wennerstrom
Consolidation theory tries to explain the phenomenon of forgetting by pointing at problems that arise when we first try to store information. According to proponents of consolidation theory, the learning process is not finished after practice and rehearsal; there is one more step in the process, which they call perseveration. It occurs after the two initial steps, and the longer the perseveration period, the more consolidated the memory becomes. If interference occurs while perseveration is going on, the memory will be consolidated more poorly and will be more difficult to recall. For the best results there should be a period of time with no mental activity; this gives a good perseveration period and therefore the best consolidation as well. If, on the other hand, consolidation is prevented, consolidation theory says that the item should never be recalled. This, however, has been shown not to be true in all cases.
Consolidation theory is a good model for explaining the fact that memory works better when there is a period of rest between the learning phase and the recall phase. Experiments have shown that the longer perseveration continues in a resting state, the better the memory becomes. One can also show that if the time between learning and recall is very short, performance decreases.
The second model of forgetting is interference theory, and as the name suggests, it states that forgetting is due to interference from other things: we learn something, and the new information we process afterwards causes interference. There are two types of interference that can occur, retroactive and proactive.
Proactive interference is when an older, previously learned memory interferes with a more recent memory: the new memory gets distorted because of the old one, which acts forward in time, hence proactive. The opposite case is called retroactive interference, in which recall of an older memory is interfered with by a newer memory, so the old memory gets distorted because of the new one. This theory is a good model for explaining how we can remember things incorrectly when given certain triggers, even though the memory might have seemed lost before that. It is also good at explaining the difficulty of learning many very similar (but still different) things at the same time: there will simply be too much interference from previous memories.
A more modern view of forgetting is discrimination theory. In this theory it is believed that during learning we process the material to be learned and also associate it with a variety of external and internal cues. So when learning we do not only store the memory itself but also a variety of semantic connections to other memories. Forgetting then occurs when the desired memory does not distinguish itself enough from other possible memories. Forgetting can thus be seen as a performance problem that arises when sufficiently distinctive cues are not present.
Discrimination theory is good at explaining why performance on a test can improve as time elapses. It is not always true that memory becomes worse over time: with time it can become easier to discriminate between memories, and performance will increase. A consequence of this theory is that it also claims things cannot be completely forgotten. This is an important issue: can things be permanently forgotten or not? There is no good way to prove either position, and there is still debate over this issue. There have been experiments favoring the temporary-lapse view, but the results are still not conclusive enough.
6. (20 points) Implicit memory is often defined as memory without awareness. There are at least three different accounts of the implicit memory data. What are these? Compare them by discussing what their major strengths and problems are.
ANSWER by Son Suil
There are three major accounts for the implicit memory.
1. Multiple Memory Systems
2. Transfer Appropriate Processing
3. The Bias View
There is one more account, the Activation View, but it cannot explain how implicit memory can affect one's memory over a long period, so we set the activation view aside.
Each of the three accounts above has its own strengths and problems in explaining implicit memory.
First, Multiple Memory Systems. This account explains implicit memory in terms of multiple memory systems in the brain. On this view, the brain consists of several parts, each carrying out its own processing: a procedural memory system, a perceptual representation memory system, a primary memory system, a semantic memory system, and an episodic memory system. The multiple memory systems account holds that implicit memory data (indirect tests) tap parts of the memory system that involve nonconscious operation, while explicit memory data (direct tests) tap the other parts, which involve conscious operation. The strength of this explanation is that it is very simple and clear. Its weak point is that the multiple memory systems view is not yet the only theory of memory.
Second, Transfer Appropriate Processing. This explains that human memory involves several types of processing and that implicit memory data and explicit memory data require different retrieval operations (different processes).
These different processes are data driven for implicit memory and conceptually driven for explicit memory.
In some experiments, transfer appropriate processing correctly predicted that the two data-driven tests would pattern together, whereas the multiple systems view predicted that the two direct tests would. The experiments crossed the two factors, giving four combinations: 1) direct/data-driven, 2) direct/conceptually driven, 3) indirect/data-driven, 4) indirect/conceptually driven.
However, neither the multiple systems view nor transfer appropriate processing can fully explain repetition priming effects; this is a weak point of both accounts.
Third, the Bias View. This explains unconscious learning (implicit memory) as previously presented data causing an effect on the processing of current data. The bias has two aspects, cost and benefit: if the previous work is inappropriate for the current task and causes a disadvantage, the bias is a cost; if the previous work is appropriate for the current task and gives an advantage, the bias is a benefit.
This account explains quite well the repetition priming effects.
7. (20 points) One of the fundamental questions in memory research is where it is located. Two basic historical conceptions are the localized storage view and the distributed storage view. Explain each view and give arguments and experimental results for supporting the view. Is there a resolution between these two views? Give experimental evidence from amnesia research.
ANSWER by Hjalmar Wennerstrom
The localized storage view is considered the older theory of the two and originates from when scientists found specialized memory structures in the brain. Broca in 1861 found that speech production was localized to a particular part of the brain. So the theory is that the brain is divided into specialized units, each with its own defined purpose or task. The theory of the distributed storage view says, according to Lashley (1950), that "memory is not stored anywhere but rather stored everywhere". He made this argument after a study on rats in which he showed that the extent of damage to the brain correlated with the degree of memory impairment, while there was no relation between the location of the damage and the impairment. A study by Cabeza in 2000 using PET scans and MRI shows that there is no single place where memory is stored; it seems to be spread out, though with a greater concentration in the frontal lobe.
The two theories are being questioned more and more by today's scientists. The consensus today leans towards a unification of the two views: memory is distributed in the brain, but with subsystems that have some loosely defined tasks. When memory is processed, a limited number of brain systems are in use. One could make an analogy that the brain is an ocean with small islands scattered all over, instead of either one big landmass or many small rocks.
It has long been claimed that amnesia is caused by damage to the hippocampus. This would mean that memory processes are dependent on the hippocampus; there is some central part that needs to be working. There has, however, been more doubt about this statement in recent years. Buckner, Kelley and Petersen (1999) argued that when damage is limited to one hemisphere, in either the hippocampus or the frontal lobe, other regions of the brain might be able to compensate for that loss.
It has also been argued that from an evolutionary view it seems like a bad idea to have one centralized memory unit: if that were the case, all information would have to be routed through that unit and then back out again. This idea of so-called "anatomical extension cords" has also been questioned. Studies have shown that when a subject has damage to the frontal lobe, there is little evidence of damage to the hippocampus. If the hippocampus were indeed the central unit, there would have to be connections from the frontal lobe to the hippocampus, and damage to the frontal lobe should therefore also have produced hippocampus-related memory impairment.
Different studies (Crowder, 1993; Roediger and Srinivas, 1993; Toth and Hunt, 1999) conclude that the hippocampus does not have such a central role. They showed that a local lesion caused not only a loss of the local function but also a loss of memory for that function.
8. (20 points) Consider recognition memory. What phenomena can not be explained by simple single process models of recall and recognition? What findings are not explained by generate-recognize models? Explain what the modern view of recognition memory is.
ANSWER by Son Suil
Single process models : There are two theories in single process models.
First, the tagging model. According to the tagging model, when an item is experienced it is tagged, and recall and recognition both consist of finding items that carry the tag.
Second, strength theory. Strength theory holds that the more recently a particular item was experienced, the stronger or more familiar it seems.
Both of these have the limitation that they contain only a single process, which means that the same manipulation has to show the same effect regardless of the task. However, there are experimental results on recall and recognition tests that single-process models cannot explain: high-frequency words are recalled better than low-frequency words, but low-frequency words are recognized more accurately than high-frequency words.
Generate-recognize models: According to generate-recognize models, recall is made up of two processes, while recognition is made up of only one. In a recall test, subjects first generate a set of plausible candidates for recall, and second confirm whether each word is worthy of being recalled. In a recognition test, the subject does not need the generation stage, because the candidates are provided; the subject only proceeds with a confirmation, or recognition, stage.
This model can explain the word-frequency effect in recall and recognition that single-process models cannot. In a recall test, high-frequency words tend to have more associates and more pathways, so subjects can reach them by shorter routes and show good performance. Low-frequency words have fewer associates and can take longer to retrieve, which makes recall performance worse. In a recognition test, however, low-frequency words are unusual looking; they can take longer to process and lead to more item-context associations, which helps recognition.
Generate-recognize models also have another problem. The model implies that recognition is easier than recall, because recall has one more processing step, and therefore that if we can recall a word, we can recognize it. This characteristic cannot explain the phenomenon known as recognition failure of recallable words:
a word can be recalled under certain conditions even though it cannot be recognized.
Modern view of recognition memory: To overcome the recognition failure of recallable words, the modern view says that the same search and confirmation processes operate in both recognition and recall.
One recent change in recognition methodology is the distinction between 'remember' and 'know'. Recent research says that 'remember' and 'know' are different: 'remember' is influenced by conceptual and attentional factors, whereas 'know' may be based on a procedural memory system. In this account, 'remember' corresponds to recall and 'know' to recognition. In addition, a recent idea about recognition is that recognition performance is a combination of two different types of process, recollection and familiarity.
9. (20 points) Compare the characteristics of the two forms of representations, i.e. propositional and analog representations. How do we know that they exist? Design an experiment and show that both forms of representation are available. Can we also create auditory images, tactile images, and odor images?
ANSWER by Subhojit Chakladar
The propositional form of representation encodes the meaning of the information using something like a verbal code. The analog form, on the other hand, preserves the structure of the information in a more or less direct manner;
it is something like a picture or a map. In general, the propositional form is recalled faster, but the analog form helps in remembering better (the picture superiority effect). For example, the sentence "The sun rises in the east" is a propositional form of representation, while imagining the rising sun is the analog form.
The existence of these two forms is demonstrated by the dual task method: if the training (learning) and the test (performing a task) tap the same form of representation, then the response is slower.
Experiment – The subjects are given sentences and figures of letters. This is the learning phase. In the test phase, they are asked questions whose answers can be either "yes" or "no". The response is registered either verbally or by pointing to "Yes" or "No" written on the wall.
It is observed that in case of the sentence, the response is faster when the subject is asked to point to the “Yes” or “No”. In case of the figure of the alphabet, the response is faster when the subject responds verbally.
This can be explained in terms of the dual task. The sentence encoded the information in the form of verbal codes, so it was a propositional form of representation. A verbal response also taps the propositional form, so the response to questions about the sentence was slower: tapping the same form causes interference between the input and output steps, increasing the response time. The figure of the letter was encoded using the analog form of representation, while a verbal response taps the propositional form, so the response in this case is faster, since there is no interference between the input and output forms.
Thus both the forms of representation exist.
Yes, imagery need not be limited to visual imagery, though that is the most widely studied form. Auditory, tactile, and odor images help us mentally reconstruct a scene more vividly. For example, when we imagine a restaurant, apart from forming a visual picture of the place, we also imagine the sound of the cutlery and the smell of the food, making the imagination more vivid. The sound and smell are indeed auditory and odor images.
Roy Patterson and his group in Cambridge have been active in the field of auditory images.
10. (20 points) People are very likely to remember actually reading sentences that were not presented, if those sentences are likely inferences. Explain this phenomenon using the concept of schema. Can the notion of schema also explain the eyewitness memory, i.e. recollected description of a crime scene. How much we should believe the eyewitness report according to the theory of memory as a reconstructive process.
ANSWER by Son Suil
A schema is an organized knowledge structure that reflects an individual’s knowledge, experience, and expectations about some aspect of the world. Information contained within a schema is usually recruited to help recall various events. A schema has five general characteristics.
First, schemas represent knowledge.
Second, schemas can represent knowledge at all levels.
Third, schemas can be embedded within each other.
Fourth, the actual information within a schema is general.
Fifth, schemas are active, dynamic, and continually changing.
Because a schema is an organized knowledge structure, it helps subjects recall events. But because schemas store information in a general form, they can also introduce errors of detail. Therefore, when we try to remember sentences we have read, we may incorrectly judge that we saw a sentence that was not in the actual story. This is more likely to occur when the sentence is a likely inference, because an inferable sentence shares the general, schema-level characteristics of the actual story.
Many experiments on eyewitness memory have shown that subjects are very likely to misreport what they have seen. This phenomenon can also be explained by the characteristics of schemas. First, schemas affect recall. Moreover, schemas are active and continually changing: even if subjects initially have a correct memory, their schema can be altered by later stimuli, so they may later reconstruct different “facts” from the changed schema. In addition, schemas reflect individual experience, so people who have seen the same event may later report different facts about it.
According to McCloskey and Zaragoza’s eyewitness experiment, 50% of subjects had a genuinely correct memory, and a further 25% were correct by chance (the remaining 50% guessing between two alternatives). In total, therefore, 75% were correct, and this was without any interference. When misleading interference was introduced, the proportion correct would be worse. So, by the theory of memory as a reconstructive process, we should believe an eyewitness report at most about 75% of the time.
11. (20 points) Explain the following terms on human memory.
A. Forward telescoping and backward telescoping
B. Retroactive interference and proactive interference
C. Retrograde amnesia and anterograde amnesia
ANSWER by Hjalmar Wennerstrom
Telescoping is the observed effect that when people estimate when an event occurred, they estimate incorrectly. Forward telescoping is when a person believes that an event occurred more recently than it actually did. The effect is seen to increase with the time that passes: the longer ago the event occurred, the more we telescope it toward a more recent time period. Backward telescoping is the opposite effect, where events seem to have happened longer ago than they actually did; the person moves the events further back on the timeline.
Retroactive and proactive interference are the two key concepts in interference theory, which tries to explain forgetting. Retroactive interference occurs when recall of something learned earlier is impaired by something learned more recently, due to similarities between the two. The effect can be seen when testing subjects on word pairs: subjects first learn one pair of associated words and then a second pair, where one word of the second pair is the same as a word in the first pair. When subjects are then tested on the first pair, there is some retroactive interference from the later-learned pair. Proactive interference is the reverse: material learned earlier interferes with the recollection of material learned more recently, again due to similarities. In the experiment, this corresponds to switching the order of the two word pairs and testing subjects on the pair learned second.
Amnesia is a type of memory loss, and the two major categories into which it is usually divided are retrograde amnesia and anterograde amnesia. Retrograde amnesia refers to the inability to remember information learned before the event that caused the amnesia. Usually only some details are forgotten, and complete memory loss almost never occurs; retrograde amnesia is fairly common but usually not very extensive.
Anterograde amnesia, the inability to learn new information after the traumatic event, can sometimes be observed alongside retrograde amnesia following such an event.
12. (20 points) One interesting finding in memory is that most people cannot recall events that happened to them before the age of about 3 or 4 years. Give your own explanation to this “infantile amnesia” phenomenon. What are other theories to explain this?
ANSWER by Pham Tuan Minh
Infantile amnesia, the inability of people to recall specific events that happened to them during the early years of their lives, has been the topic of frequent theoretical speculation.
In my own opinion, infantile amnesia is due to neurological immaturity, as infants of 3 years or younger lack the neurological equipment necessary for memory formation and storage. It is known that in the newborn infant the hippocampus and the frontal lobes are immature. During the following two years there is intense synaptogenesis in the frontal cortex, which considerably increases its functional capacity. After those two years, the processes of myelination and increases in synaptic efficiency continue at a slower pace. It is therefore reasonable to speculate that the immaturity of the neurological system is one of the causes of infantile amnesia. Some findings support this point. In a fire alarm study carried out by Pillemer, Picariello & Pruett, there were two groups of children, 3.5-year-olds and 4.5-year-olds. Two weeks after the emergency evacuation, the 4.5-year-olds produced more coherent narratives and showed better understanding of the temporal and causal sequence of events. Seven years after the evacuation, 86% of the 4.5-year-old group could select their correct classroom location, while the younger group could only identify it at chance level. There is also an animal experiment (Campbell, Misanin, White, & Lytle, 1974) suggesting that animals that are relatively well developed in infancy show little or no infantile amnesia.
There are various explanations of infantile amnesia. Sigmund Freud, one of the first psychologists to identify infantile amnesia, explained the phenomenon by the repression of distasteful memories. According to him, early childhood memories, especially sexual ones, were so frightening and distasteful that they were filtered away. Freud also proposed a selective reconstruction model, which blames the disjunction between the earliest and later modes of processing information for infantile amnesia.
Howe & Courage (1993) argue that infantile amnesia reflects the lack of a sense of self. They supported their idea through mirror tests. Three-month-old infants are extremely attentive and positive toward mirrors, playing games such as approaching and retreating from the faces in the mirror. Nine-month-old infants move in tandem with the images in the mirror. An 18-month-old infant, however, demonstrates full visual self-recognition by trying to wipe a red blob off her own nose rather than reaching out toward the mirror.
Pillemer & White (1989) proposed that the phenomenon is due to a lack of the ability to tell stories. They further pointed out that infantile amnesia is overcome through the linguistic sharing of memories with others.
Tessler (1986) held the same view. She studied three-year-old children and their mothers on a visit to a natural history museum and found that the children could only remember a specific object they saw in the museum if they had talked about it with their mothers. Neither the mother’s talk alone nor the child’s alone resulted in remembering; creating a narrative together, however, helped strengthen the memory of the object.
Lacking a theory of mind is another speculation. Experimental paradigms have shown that very young children find it difficult to attribute to others mental states that differ from their own present state of mind. If a child is shown a box of candy (Smarties) that actually contains pencils and is asked what he thinks is in the box, he naturally replies “Smarties”. The child is then shown that he was tricked, as the box actually contains pencils, and is asked what a friend who is about to play the same game will think is in the box. A child younger than about 4 usually says that the friend will think the box contains pencils. This means that three-year-old children lack the ability to take another person’s perspective. This explanation fits well with the linguistic-ability and concept-of-self accounts, as these are related developments.
All of the above explanations suffer from one common limitation: they cannot be applied to animals. Another approach, which emphasizes biological factors, overcomes this problem. It assumes that if there is substantial biological development between infancy and adulthood, there will be infantile amnesia. Nevertheless, this speculation cannot explain why forgotten memories can sometimes still be recalled.
Another explanation is the effect of multiple retrievals. As inactive or forgotten memories are exposed to cues or reminders, the original form of the information is lost and replaced, after each retrieval, by updated and transformed memories.
To conclude, although there have been a variety of explanations of infantile amnesia, more experimental analysis is still needed on this subject to move from mere “speculation” to a more systematic theoretical “debate”.