Past-tense learning model

Com1005: Machines and
Intelligence
Amanda Sharkey
Summary


Brain – thought emerging from interconnected neurons
Neural computing – based on simplified model of neurons


Neurons on/off, modifiable connections between them (learning)
Brief history of NNs




McCulloch and Pitts neurons, and Perceptrons
Minsky and Papert point out limitations
Traditional Symbolic AI
Resurgence of Connectionism –


Examples –





Backpropagation and MLPs
Pattern recognition and Pandemonium
NetTalk
Past-tense learning model
Differences between Symbolic AI and Connectionism
Which is better?
Past-tense learning model


A model of the human ability to learn the past tenses
of verbs.
Presented by Rumelhart and McClelland (1986).
Past-tenses?




Today I look at you, yesterday I ? at you.
Today I speak to you, yesterday I ? to you.
Today I wave at you, yesterday you ? at me.
rick (a nonsense verb) – yesterday he ? the sheep




Many regular examples:
E.g. walk -> walked, look -> looked
Many irregular examples:
E.g. bring -> brought, sing -> sang
Children learning to speak






Baby: DaDa
Toddler: Daddy
Very young child: Daddy home!
Slightly older child: Daddy came home!
Older child: Daddy comed home!
Even older child: Daddy came home!
Stages of acquisition

Stage 1




Past tense of a few specific verbs, some regular, e.g. looked, needed
Most irregular, e.g. came, got, went, took, gave
As if learned by rote (memorised)
Stage 2
Evidence of a general rule for the past tense – add -ed to the stem of the verb
 E.g. camed or comed
 Also for the past tense of a nonsense word, e.g. rick:
 they add -ed – ricked


Stage 3

Correct forms for both regular and irregular verbs
Verb type        Stage 1   Stage 2       Stage 3
Early verbs      correct   regularised   correct
Regular          –         correct       correct
Other irregular  –         regularised   correct or regularised
Novel            –         regularised   regularised


U-shaped curve – correct forms in stage 1,
errors in stage 2, few errors in stage 3.
Suggests a rule is acquired in stage 2, and the
exceptions are learned in stage 3.
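A minimal sketch (my own illustration, not from the lecture) of the stage-2 behaviour: apply the regular "add -ed" rule to every verb, and the children's overregularisation errors fall out.

```python
def stage2_past_tense(verb):
    """Stage-2 behaviour: apply the regular rule to every verb."""
    return verb + ("d" if verb.endswith("e") else "ed")

# Regulars come out right, irregulars are overregularised,
# and the nonsense verb "rick" gets the rule too.
for verb in ["walk", "look", "come", "go", "rick"]:
    print(verb, "->", stage2_past_tense(verb))
# walk -> walked, look -> looked, come -> comed, go -> goed, rick -> ricked
```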

Rumelhart and McClelland – aim to
demonstrate that a connectionist network would
show the same stages and learning patterns.

Trained the net by presenting:



Input – root form of the word, e.g. walk
Output – phonological structure of the correct past-tense
form, e.g. walked
Test the model by presenting a root form as input, and see
what past-tense form it generates as output.


Used the Wickelfeature method to encode words
Wickelphone: target phoneme and its context

E.g. came: #Ka, kAm, aM# (central phoneme shown in capitals)
Coarse-coded onto Wickelfeatures: 16 wickelfeatures
for each wickelphone
Input and output layers of the net: 460 units each
Shows the need for an input representation
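A rough sketch of the wickelphone step, using the slide's display convention (boundary symbol '#', central phoneme in capitals); the coarse coding into wickelfeatures, and the model's real phonemic feature set, are omitted here:

```python
def wickelphones(phonemes):
    """Split a phoneme string into triples of each phoneme
    with its left and right neighbour, '#' marking word boundaries."""
    padded = "#" + phonemes + "#"
    return [padded[i - 1] + padded[i].upper() + padded[i + 1]
            for i in range(1, len(padded) - 1)]

print(wickelphones("kam"))  # ['#Ka', 'kAm', 'aM#'] -- "came", as on the slide
# In the full model each wickelphone is then coarse-coded as a set of 16
# wickelfeatures, over 460 input units and 460 output units.
```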

Training: used the perceptron convergence procedure (problem
linearly separable)

Target used to tell each output unit what value it should have:
If the output is 0 and the target is 1, increase the weights from active
input units.
If the output is 1 and the target is 0, reduce the weights from active
input units.
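A minimal sketch of the update rule just described, for a single output unit with binary inputs (the variable names and learning rate are my choices; the threshold adjustment is the usual companion to the weight updates):

```python
def perceptron_update(weights, threshold, inputs, target, lr=1.0):
    """One step of the perceptron convergence procedure for one output unit."""
    output = 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0
    if output == 0 and target == 1:
        # Output too low: increase weights from active input units.
        weights = [w + lr * x for w, x in zip(weights, inputs)]
        threshold -= lr
    elif output == 1 and target == 0:
        # Output too high: reduce weights from active input units.
        weights = [w - lr * x for w, x in zip(weights, inputs)]
        threshold += lr
    return weights, threshold
```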
506 verbs divided into high, medium, and low
frequency (regular and irregular)

1. Train on 10 high-frequency verbs for 10 epochs:
live-lived, look-looked, come-came, get-got, give-gave,
make-made, take-took, go-went, have-had, feel-felt

2. 410 medium-frequency verbs added, trained for
190 more epochs
Net showed a dip in performance – making errors like
children, e.g. come -> comed

3. Tested on 86 low-frequency verbs not used for
training
Got 92% of regular verbs right, 84% of irregulars right.
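The three-phase regime as a schematic skeleton (the verb pairs shown are just a few of the real sets of 10 / 410 / 86, and `train_epoch` is a placeholder for a full pass of the perceptron update above):

```python
def train_epoch(net, pairs):
    """Placeholder: one pass over all verb pairs, encoding each word as
    wickelfeatures and applying the perceptron update to every output unit."""
    for root, past in pairs:
        pass  # encode, compare output with target, adjust weights

net = None                                     # stands in for the weights
high = [("come", "came"), ("look", "looked")]  # 10 verbs in the real model
medium = [("guard", "guarded")]                # 410 in the real model
low = [("cling", "clung")]                     # 86 held out for testing

for epoch in range(10):              # phase 1: high-frequency verbs only
    train_epoch(net, high)
for epoch in range(190):             # phase 2: medium-frequency verbs added;
    train_epoch(net, high + medium)  # the come -> comed dip appears here
# phase 3: test generalisation on the held-out low-frequency verbs
```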

Model illustrates:


Neural net training – repeated examples of input-output pairs
Generalisation – correct outputs produced for
untrained words


E.g. input guard -> guarded
Input cling -> clung

Past-tense model: Showed
- a neural net could be used to model an aspect of
human learning
- the same U-shaped curve as found in children.
- the neural net discovered the relationship
between inputs and outputs, not programmed.
- that it is possible to capture apparently rule-governed behaviour in a neural net.

Strengths of connectionism


Helps in understanding how a mind, and thought,
emerge from the brain
A better account of how we learn something like the
past tense than explicit programming of a rule?

Is this a good model of how we learn past-tenses?

Fierce criticisms: Steve Pinker and Alan Prince
(1988)




More than 150 journal articles followed in the debate
The net can only produce past-tense forms, it cannot
recognise them.
The model presents pairs of verb + past tense; children
don't get input like this.
The model only looks at the past tense, not the rest of
language

Getting similar performance to children was the
result of decisions made about:





Training algorithm
Number of hidden units
How to represent the task
Input and output representation
Training examples, and manner of presentation
Differences between Connectionism
and traditional Symbolic AI



Knowledge – represented by weighted
connections and activations, not explicit
propositions
Learning – Artificial Neural Nets (ANNs)
trained versus programmed. Also greater
emphasis on learning.
Emergent behaviour – rule-like behaviour,
without explicit rules
More Differences



Examinability: you can look at a symbolic
program to see how it works. An artificial neural
net consists of numbers representing
activations, and weighted links. A black box.
Symbols: connectionism has no explicit
symbols
Relationship to the brain: ‘Brain-style’
computing versus manipulation of symbols.


Connectionism versus Symbolic AI
Which is better?


Which provides a better account of thought?
Which is more useful?

Are artificial neural nets more like the brain than a
computer?
Are Brains like Computers?

Parallel operation – the 100-step argument







A neuron is slower than the flip-flop switches in computers: it takes a
thousandth of a second to respond, instead of a thousand-millionth of a second.
A brain running an AI program would take a thousandth of a second
for each instruction.
Yet the brain can extract the meaning of a sentence, or recognise a
visual pattern, in a tenth of a second.
That means the program could only be about 100 instructions long.
But AI programs contain 1000s of instructions
Suggests parallel operation
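The arithmetic behind the argument, using the slide's own figures:

$$\frac{10^{-1}\ \text{s to recognise a pattern}}{10^{-3}\ \text{s per neuron response}} = 100\ \text{sequential steps, at most}$$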
Connectionism scores
Brains unlike computers



Computer memory – exists at a specific physical
location in hardware.
Human memory – distributed
E.g. Lashley and the search for the engram.

Trained rats to learn a route through a maze. Could destroy
10% of the brain without loss of the memory.
Lashley (1950): “there are no special cells reserved for
special memories…. The same neurons which retain
memory traces of one experience must also participate in
countless other activities”
Connectionism scores
Brains unlike computers

Graceful degradation


When damaged, the brain degrades gradually;
computers crash.
Phineas Gage


Railway worker – an iron rod passed through the anterior and
middle left lobes of his cerebrum, but he lived for 13 years –
conscious, collected and speaking.
Connectionism scores

Brains unlike von Neumann machines with:




Sequential processor
Symbols stored at specific memory locations
Access to memory via address
Single seat of control, CPU
Connectionism and the brain





Units in net like neurons
Learning in NNs like learning in brain
Nets and brains work in parallel
Both store information in distributed fashion
NNs degrade gracefully – if connections, or
some neurons, are removed, the net can still
produce output.
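A small illustrative experiment (my own sketch, with a random stand-in for a trained weight matrix): delete a growing fraction of connections, and the output drifts gradually instead of failing outright.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(460, 460))             # stand-in for trained weights
x = (rng.random(460) > 0.5).astype(float)   # a binary input pattern

baseline = W @ x
for frac in [0.0, 0.1, 0.3, 0.5]:
    damaged = W * (rng.random(W.shape) > frac)  # zero out a fraction of links
    drift = np.linalg.norm(damaged @ x - baseline) / np.linalg.norm(baseline)
    print(f"{int(frac * 100):3d}% of connections removed: output drift {drift:.2f}")
# The output changes smoothly with damage (graceful degradation), where a
# conventional program with a deleted instruction would simply crash.
```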
But ….

Connectionism only “brain-style” computing



Neurons simplified – only one type
Learning with backpropagation biologically
implausible.
Little account of brain geometry and structure
Also ….
Artificial neural nets are simulated on computers
e.g. past-tense network
Rumelhart and McClelland simulated neurons,
and their connections on a computer.
Connectionism and thought


Can connectionism provide an account of
mind?
Symbolicists argue that only a symbol
system can provide an account of cognition

Functionalists: not interested in hardware, only in
software
Connectionists argue that you need to explain
how thought occurs in the brain.

Rule-like behaviour

Past tense learning


Create rules, and exceptions
Or show that rule-like behaviour can result from a
neural net, even though no rule is present.

Implementational connectionism
A different way of implementing symbolic structures,
at a different level of description
Eliminative connectionism
Radical position: cognition and thought can only be
properly described at the connectionist level
Revisionist connectionism
Symbolic and connectionist are both legitimate levels of
description.
Hybrid approach – choose the best level depending on what is
being investigated.

Symbolic AI and Connectionism

Different strengths



Connectionism – good at low level pattern recognition,
in domains where there are many examples, and it’s
hard to formulate the rule.
Symbolic AI – good at conscious planning, reasoning,
early stages of learning a skill (rules).
Hybrid system


Connectionist account of lower level processes
Symbolic account of higher level processes.
Connectionism and Strong AI



Searle – the Chinese room shows a program does
not understand, any more than the operator of
the Chinese room does.
Problem of symbol grounding …..
But does a neural net understand?


It learns…..
Chinese Gym – a room full of English-speaking
people carrying out neural processes, and
outputting Chinese. Do they understand?
Assignment


Planning out your argument….
Researching a topic




Wikipedia can be a good starting point.
Finding journals – see the link to electronic holdings
on the library page.
Find a recent paper and follow up its references.
Visit the library! Find a relevant book, and look at its
references.


Leave time to read through what you have
written, and improve it.
Zeigarnik effect



Russian psychologist (Bluma Zeigarnik) – noticed that waiters only
remembered orders until the bill was paid.
We go on thinking about something that has not
been completed.
Ergo – get started on your essay, and you will go
on thinking about it.



Presentations
- Tutorial groups – contact email by Monday week 12. (split groups?)
Prepare presentation – weeks 13-15 in the new year.
To be assessed: weeks 1 and 2 of next semester.

Each group: 5-10 minute presentation.
Paper hand-in with title and acknowledgements (who did what – research,
presentation, delivery, etc.)

Topics:

AI in the movies - how accurate is its portrayal?

Who has argued for Strong AI, and how convincing are their claims?

What was Lady Lovelace's objection to the Turing Test, and is it still valid?

AI hype?: Find some startling AI predictions, and consider their likelihood of coming true.

Early history of AI

Can robots be creative?

AI in the news

AI and Ethics – should we be concerned?
