Lecture 10 – Semantic Networks
Two questions about knowledge of the world
1. How are concepts stored?
• We have already considered prototype and
other models of concept representation
2. How are relations among concepts stored?
• Our knowledge is not just knowledge of
things – it is knowledge of relations among
things.
Issues to consider
A. Two things we want our knowledge store to do:
• Help us make decisions
• Help us make inferences
B. Two ways we could store knowledge
• As a list of facts
• In some sort of structure that encodes
relations among concepts
A. Two things we want our knowledge store to do
Any model of knowledge representation must
explain our ability to make decisions.
Example of decision:
Is a mouse a mammal?
Yes. But how do I know? What is the cognitive
operation that lets me get the right answer?
A. Two things we want our knowledge store to do
Any model of knowledge representation must
explain our ability to make inferences.
Example of inference:
Does a mouse bear live young?
A mouse is a mammal. Mammals bear live
young. Therefore, a mouse bears live young.
B. Two ways we could store knowledge:
1. A list
 A cuckoo is a bird
 A tractor is a farm vehicle
 Molasses is a food
Decisions could take a long time.
No attempt to relate one fact to another.
No explanation of how we make inferences.
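A minimal sketch of a list-based store (the facts and function names are my own illustrations, not part of the lecture). A decision is a linear scan of the list, and nothing beyond the listed facts can be retrieved:

```python
# Hypothetical facts; a flat list with no structure relating them.
facts = [
    ("cuckoo", "is a", "bird"),
    ("tractor", "is a", "farm vehicle"),
    ("molasses", "is a", "food"),
]

def decide(subject, relation, obj):
    # Linear search: decision time grows with the length of the list.
    for fact in facts:
        if fact == (subject, relation, obj):
            return True
    return False

print(decide("cuckoo", "is a", "bird"))    # True
print(decide("cuckoo", "is a", "animal"))  # False: a list cannot infer this
```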
Problems with representing knowledge as a list:
i. As the list gets longer, it gets harder to retrieve
any given piece of information.
• More to search through; more searching
takes more time.
ii. Our knowledge would never be more than the
sum of the items in the list.
B. Two ways we could store knowledge
2. In a structure
• E.g., put all the vehicle knowledge in one spot.
Do the same with all the food knowledge, and
so on.
• Within the vehicle knowledge, have ‘regions’
for family vehicles, farm vehicles, commercial
vehicles, and so on.
Making decisions
Structure in our knowledge store should make it
easier (faster) to make a decision:
Is a tractor a vehicle?
It’s a farm vehicle. Farm vehicles are vehicles.
Making inferences
Suppose we learn that:
• Tractors have large tires
• Combines have large tires
We can now generalize: farm vehicles have large
tires.
• Do hay-balers have large tires? Yes.
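The generalization step above can be sketched as follows (the "every observed member shares the property" rule and all the names are my assumptions for illustration):

```python
# Hypothetical data: categories and directly observed properties.
category_of = {"tractor": "farm vehicle", "combine": "farm vehicle",
               "hay-baler": "farm vehicle"}
observed = {"tractor": {"large tires"}, "combine": {"large tires"}}
category_props = {}

def generalize(category):
    # Attach to the category any property shared by all observed members.
    members = [m for m, c in category_of.items()
               if c == category and m in observed]
    category_props[category] = set.intersection(
        *(observed[m] for m in members))

def has_property(thing, prop):
    # True if observed directly, or inherited from the thing's category.
    return (prop in observed.get(thing, set()) or
            prop in category_props.get(category_of.get(thing), set()))

generalize("farm vehicle")
print(has_property("hay-baler", "large tires"))  # True, by inference alone
```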
Structure in knowledge
We then know that hay balers have large tires not
because we experienced it, but because we
deduced it.
But what does the structure that can permit this
look like?
• The most widely accepted answer is a
network: a semantic network.
Network models of semantic memory
1. Quillian (1968), Collins & Quillian (1969)
• First network model of semantic memory
2. Collins & Loftus (1975)
• Revised network model of semantic memory
3. Neural network models (later in the term)
Quillian’s model
Quillian was a computer scientist. He wanted to
build a program that would read and ‘understand’
English text.
• To do this, he had to give the program the
knowledge a reader has.
• Constraint: computers were slow, and memory
was very expensive, in those days.
Basic elements of Quillian’s model
1. Nodes
Nodes represent concepts.
They are ‘placeholders’.
They contain nothing.
2. Links
Connections between nodes.
[Figure: Quillian's example hierarchy. Bird and Mammal are linked to
Animal by "isa" links. Animal: breathes air. Bird: has feathers, has
wings, fly. Mammal: bears live young.]
Things to notice about Quillian’s model
All links were equivalent.
• They were all the same length.
Structure was rigidly hierarchical. Time to
retrieve information depended on the number of
links traversed.
Cognitive economy – properties stored only at the
highest possible level (e.g., birds have wings)
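A sketch of Quillian-style lookup with cognitive economy (the node names follow the lecture's example network; the dictionary layout and function are my own assumptions). A property is stored once, at the highest level, and retrieval walks "isa" links upward, so retrieval time grows with the number of links traversed:

```python
# Each node stores an "isa" link upward and its own properties only.
network = {
    "animal": {"isa": None,     "props": {"breathes air"}},
    "bird":   {"isa": "animal", "props": {"has feathers", "has wings"}},
    "mammal": {"isa": "animal", "props": {"bears live young"}},
    "canary": {"isa": "bird",   "props": set()},
}

def has_property(node, prop):
    links = 0
    while node is not None:
        if prop in network[node]["props"]:
            return True, links       # links traversed ~ retrieval time
        node = network[node]["isa"]  # climb one "isa" link
        links += 1
    return False, links

print(has_property("canary", "has wings"))     # (True, 1)
print(has_property("canary", "breathes air"))  # (True, 2): farther, slower
```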
Problems with Quillian’s model
1. How to explain typicality effect?
• Is a robin a bird? Is a chicken?
• Easier to say ‘yes’ to robin. Why?
2. How to explain that it is easier to report that a
bear is an animal than that a bear is a mammal?
3. Cognitive economy – do we learn by erasing
links?
What’s new in Collins & Loftus (1975)
A. Structure
• responded to data accumulated since original
Collins & Quillian (1969) paper
• got rid of hierarchy
• got rid of cognitive economy
• allowed links to vary in length (not all equal)
[Figure: the revised network – no strict hierarchy, and links vary in
length. Nodes include animal, mammal, cow, bat, bird, robin, ostrich,
and property nodes skin, feathers, wings, fly.]
What’s new in Collins & Loftus (1975)?
B. Process – Spreading Activation
• Activation – arousal level of a node
• Spreading – down links
• Mechanism used to extract information from
network
• Allowed neat explanation of a very important
empirical effect: Priming
Priming
• An effect on response to one stimulus (target)
produced by processing another stimulus (prime)
immediately before.
• If prime is related to target (e.g., bread-butter),
reading prime improves response to target.
• RT (unrelated) – RT (related) = priming effect.
• Sometimes measured on accuracy.
Priming
Task           Related   Unrelated
read only      bread     nurse
read, respond  BUTTER    BUTTER
• Difference in RT to two types of trials = priming
effect.
• Responses in related condition are faster.
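Spreading activation can produce exactly this pattern. Below is a toy sketch: the mini-network, the decay factor, and the threshold are my own assumptions, not Collins & Loftus's actual parameters. Reading the prime activates its node; activation spreads down links, pre-activating related nodes, so a related target needs less additional activation to reach threshold:

```python
# Hypothetical associative links in the network.
links = {
    "bread": ["butter", "flour"],
    "nurse": ["doctor", "hospital"],
}
DECAY = 0.5       # activation weakens as it spreads down a link (assumed)
THRESHOLD = 1.0   # activation needed to drive a response (assumed)

def read(word):
    # Reading a word activates its node and spreads activation to neighbours.
    activation = {word: 1.0}
    for neighbour in links.get(word, []):
        activation[neighbour] = DECAY
    return activation

def response_cost(target, activation):
    # Extra activation the target still needs: a stand-in for response time.
    return max(0.0, THRESHOLD - activation.get(target, 0.0))

related = response_cost("BUTTER".lower(), read("bread"))    # related prime
unrelated = response_cost("BUTTER".lower(), read("nurse"))  # unrelated prime
print(related < unrelated)  # True: the related prime speeds the response
```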
Why is the Priming effect important?
The priming effect is an important observation
that models of semantic memory must account
for.
Any model of semantic memory must be the kind
of thing that could produce a priming effect.
A network through which activation spreads is
such a model. (Score one point for networks.)
Review
Knowledge has structure
Our representation of that structure makes new
knowledge available (things not experienced)
The most popular models are network models,
containing links and nodes.
Nodes are empty. They are just placeholders.
Review
Knowledge is stored in the structure – the pattern
of links, and the lengths of the links.
The pattern of links and the lengths of links
reflect experience (learning).
Network models provide a handy explanation of
priming effects.
Note: modern neural network models are different
in some respects.