How is knowledge stored?
Human knowledge comes in 2 varieties:
Concepts
Relations among concepts
So any theory of how knowledge is stored must explain both types. We’ll look at concepts a little later in the term. Today, it’s relations.
How are relations among concepts stored?
Rosch argued for hierarchical knowledge, that is, knowledge using the contains relation:
Animal contains mammal contains canine
She argued that this explains both the speed of knowledge retrieval and our ability to make inferences.
Retrieving knowledge
Is a mouse a mammal?
Yes. But how do I know?
How do I find this bit of information among all the many things that I know?
Making inferences
Does a mouse bear live young?
A mouse is a mammal. Mammals bear live young. Therefore, a mouse bears live young.
But in order for me to be able to reason like this, my knowledge store must connect mouse to mammal & mammal to live young.
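This chain of reasoning is easy to mimic in code. Below is a minimal sketch (not from the lecture; the ISA and PROPERTIES dictionaries and the has_property helper are illustrative) in which mouse is connected to mammal and mammal to “bears live young”, so the question can be answered even though the fact itself was never stored.

```python
# A minimal sketch of inference over "is a" connections (illustrative names).
ISA = {"mouse": "mammal", "mammal": "animal"}        # "is a" links between concepts
PROPERTIES = {"mammal": {"bears live young"}}        # facts attached to concepts

def has_property(concept, prop):
    """Walk up the 'is a' chain until the property is found or the chain runs out."""
    while concept is not None:
        if prop in PROPERTIES.get(concept, set()):
            return True
        concept = ISA.get(concept)                   # climb to the parent concept
    return False

print(has_property("mouse", "bears live young"))     # True -- inferred, not stored directly
```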
Two ways we could store knowledge
Imagine that we have lots of facts that we need to store, and each fact is written on a 3×5 card.
We are going to store these cards on tables in a large room.
How do we do this?
Storing knowledge in a list
One way would be just to start piling cards on the nearest table as we get them. We would keep piling cards onto that table until they spilled onto the floor, then move on to the next table, and continue till all the tables were full.
If you wanted a piece of information that was on one of those cards, how would you get it?
A list of problems with lists
Retrieving any particular fact becomes more difficult the more facts you learn (see the sketch below).
Lists do not capture relations between facts
(e.g., dogs display dominance by snarling; wolves display dominance by snarling).
The list structure doesn’t have a mechanism for making inferences, so our knowledge would never be greater than the sum of the items on the list.
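To make the first problem concrete, here is a rough sketch (illustrative, not from the lecture) of the pile-of-cards store as one flat list: retrieval is a linear scan, so the more facts we store, the longer the average search takes.

```python
# A flat "pile of cards": every fact is just a string in one long list.
cards = [
    "dogs display dominance by snarling",
    "wolves display dominance by snarling",
    "a mouse is a mammal",
    # ... imagine thousands more cards, in no particular order
]

def find_fact(query):
    """Scan every card until one matches; cost grows with the number of cards."""
    for i, card in enumerate(cards):
        if card == query:
            return i             # found it after inspecting i + 1 cards
    return None                  # the fact is simply not in the pile

print(find_fact("a mouse is a mammal"))
```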
Advantages of structured knowledge
Faster access to concepts
E.g., if you want farm animal information, go to the farm animal table
Going beyond knowledge based on experience, by making inferences.
Generalizing to create new knowledge.
Faster access to concepts
Continuing with the “tables” metaphor, we could assign each table to a topic (e.g., seven tables for politics, nine tables for animals, six for gardening…). The animal tables could each be used for one class (e.g., reptiles, farm animals, sea animals…).
Now, if you wanted a particular piece of information about farm animals, what would you do? The principle, of course, is organization.
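Here is a small sketch of the organization principle, with the room as a nested dictionary (the topic and class names are made up for illustration): instead of scanning every card, we go straight to the table for the topic and class we want.

```python
# Facts filed by topic and class, like tables assigned to subjects.
room = {
    "animals": {
        "farm animals": ["cows give milk", "sheep grow wool"],
        "sea animals":  ["whales breathe air"],
    },
    "gardening": {
        "vegetables": ["tomatoes need full sun"],
    },
}

def facts_about(topic, category):
    """Go straight to the right 'table'; no need to search the whole room."""
    return room.get(topic, {}).get(category, [])

print(facts_about("animals", "farm animals"))    # ['cows give milk', 'sheep grow wool']
```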
Making inferences
Example: is spelt a food? Your knowledge store tells you two things: spelt is a grain, and grains are food.
You can answer the question even if you don’t have a card that says “spelt is a food”.
Generalizing to create new knowledge
Suppose we learn that:
Tractors have large tires
Combines have large tires
We can now generalize: farm vehicles have large tires.
Do hay-balers have large tires? Yes. We can work that out even without explicitly learning it.
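One way to picture this generalization step (a sketch under illustrative assumptions, not a claim about how people actually do it): if every farm vehicle we have direct evidence about has large tires, attach the property to the category itself, and new members inherit it.

```python
# Illustrative data: we learned about tractors and combines directly.
category_members = {"farm vehicle": {"tractor", "combine", "hay-baler"}}
properties = {"tractor": {"large tires"}, "combine": {"large tires"}}

def generalize(category, prop):
    """If every member we have evidence about has the property, store it on the category."""
    observed = [m for m in category_members[category] if m in properties]
    if observed and all(prop in properties[m] for m in observed):
        properties.setdefault(category, set()).add(prop)

def has_property(member, category, prop):
    """Check direct knowledge first, then fall back on the category."""
    return prop in properties.get(member, set()) or prop in properties.get(category, set())

generalize("farm vehicle", "large tires")
print(has_property("hay-baler", "farm vehicle", "large tires"))   # True -- never learned directly
```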
What is the structure like?
We can all agree that having structure in our knowledge store offers advantages.
But what is that structure? A wall? A path? A tree?
The most widely accepted answer is a network: a semantic network.
Network models of semantic memory
Quillian (1968), Collins & Quillian (1969)
First network model of semantic memory
Collins & Loftus (1975)
Revised network model of semantic memory
Neural network models (later in the term)
Quillian’s (1968) model
Quillian was a computer scientist. He wanted to build a program that would read and ‘understand’ English text.
To do this, he had to give the program the knowledge a reader has.
Constraint: in those days, computers were slow and memory was very expensive.
Basic elements of Quillian’s model
Nodes: represent concepts. They are ‘placeholders’; they are empty.
Links: connections between nodes. Nodes send signals to each other down these links.
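A minimal sketch of these two elements in code (the Node class and link method are illustrative, not Quillian’s actual program): a node carries nothing but its label and its links.

```python
class Node:
    """A placeholder for a concept: empty apart from its label and its links."""
    def __init__(self, label):
        self.label = label
        self.links = []                    # outgoing (relation, node) pairs

    def link(self, relation, other):
        """Connect this node to another node with a labelled link."""
        self.links.append((relation, other))

animal, bird, wren = Node("animal"), Node("bird"), Node("wren")
bird.link("isa", animal)
bird.link("has", Node("wings"))
wren.link("isa", bird)

print([(relation, node.label) for relation, node in wren.links])   # [('isa', 'bird')]
```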
[Figure: a fragment of Quillian’s hierarchical network – animal at the top (breathes air); bird (isa animal; has feathers, has wings) and mammal (isa animal; bears live young) below it; wren (isa bird) at the bottom.]
Things to notice about Quillian’s model
All links are equivalent.
Structure was rigidly hierarchical.
Time to retrieve information depends on the number of links crossed (see the sketch below).
Cognitive economy – properties stored only at the highest possible level (e.g., birds have wings).
This made sense in the late 1960s, when computer memory was very expensive, so efficiency was highly valued.
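Here is a sketch of cognitive economy and link-counting, using the small network in the figure above (the dictionaries and verify function are illustrative): each property is stored once, at the highest node it applies to, and verification time grows with the number of isa links that must be crossed.

```python
ISA = {"wren": "bird", "bird": "animal", "mammal": "animal"}
PROPERTIES = {
    "animal": {"breathes air"},                      # stored once, at the top
    "bird":   {"has wings", "has feathers"},
    "mammal": {"bears live young"},
}

def verify(concept, prop):
    """Return (answer, links crossed); more links crossed ~ slower retrieval."""
    links = 0
    while concept is not None:
        if prop in PROPERTIES.get(concept, set()):
            return True, links
        concept = ISA.get(concept)                   # move one level up the hierarchy
        links += 1
    return False, links

print(verify("wren", "has wings"))       # (True, 1) -- one link up, to bird
print(verify("wren", "breathes air"))    # (True, 2) -- two links up, to animal
```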
Problems with Quillian’s model
1. How to explain typicality effect?
• Is a robin a bird?
• Is a chicken a bird?
• Easier to say ‘yes’ to robin. Why?
2. How to explain that it is easier to report that a bear is an animal than that a bear is a mammal?
3. Cognitive economy – do we learn by erasing links?
What’s new in Collins & Loftus (1975)
A. Structure
• responded to data accumulated since the original Collins & Quillian (1969) paper
• got rid of hierarchy
• got rid of cognitive economy
• allowed links to vary in length (not all equal)
[Figure: a fragment of the Collins & Loftus (1975) network – no strict hierarchy; nodes such as animal, mammal, cow, bat, bird, robin, ostrich, fly, wings, feathers, and skin, connected by links of varying length.]
What’s new in Collins & Loftus (1975)?
B. Process – Spreading Activation
• Activation – arousal level of a node
• Spreading – down links
• Mechanism used to extract information from network
• Allowed neat explanation of a very important empirical effect: Priming
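A rough sketch of spreading activation (the link strengths, decay rule, and function name are illustrative assumptions, not the model’s actual equations): activation starts at one node and spreads down its links, weakening as it goes, so closely connected concepts end up more aroused than distant ones.

```python
LINKS = {                        # node -> list of (neighbour, link strength 0..1)
    "bread":  [("butter", 0.9), ("flour", 0.7)],
    "butter": [("bread", 0.9), ("yellow", 0.5)],
    "nurse":  [("doctor", 0.9), ("hospital", 0.8)],
}

def spread(source, depth=2):
    """Send activation out from a node; each hop multiplies by the link strength."""
    activation = {source: 1.0}
    frontier = [source]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbour, strength in LINKS.get(node, []):
                a = activation[node] * strength
                if a > activation.get(neighbour, 0.0):
                    activation[neighbour] = a
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return activation

print(spread("bread"))   # "butter" ends up highly active; "nurse" gets no activation at all
```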
Priming
• An effect on the response to one stimulus (TARGET) produced by processing another stimulus immediately before it (PRIME).
• If the prime is related to the target (e.g., bread-butter), reading the prime improves the response to the target.
• Usually measured on RT, sometimes on accuracy: RT (related) < RT (unrelated).
Priming
Related trial: prime “bread” (read only) → target BUTTER (read, respond)
Unrelated trial: prime “nurse” (read only) → target BUTTER (read, respond)
Difference in RT between the two types of trials = priming effect. (Related trials have shorter RTs than unrelated trials.)
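Scoring the effect is simple arithmetic, as this small sketch shows (the response times are made-up numbers for illustration):

```python
# Hypothetical response times in milliseconds for the two trial types.
related_rts   = [520, 540, 510, 530]    # e.g., bread -> BUTTER trials
unrelated_rts = [590, 610, 575, 600]    # e.g., nurse -> BUTTER trials

def mean(values):
    return sum(values) / len(values)

priming_effect = mean(unrelated_rts) - mean(related_rts)
print(f"Priming effect: {priming_effect:.0f} ms")   # positive value = related trials are faster
```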
Why is the Priming effect important?
• The priming effect is an important observation that models of semantic memory must account for.
• Any model of semantic memory must be the kind of thing that could produce a priming effect.
• A network through which activation spreads is such a model. (Score one point for networks.)
Review
• Knowledge has structure
• Our representation of that structure makes new knowledge available (things not experienced)
• The most popular models are network models, containing links and nodes.
• Nodes are empty. They are just placeholders.
Review
• Knowledge is stored in the structure – the pattern of links, and the lengths of the links.
• The pattern of links and the lengths of links are consequences of experience (learning).
• Network models provide a handy explanation of priming effects.