Contextual Vocabulary Acquisition:
From Algorithm to Curriculum
William J. Rapaport
Department of Computer Science & Engineering
Department of Philosophy
Center for Cognitive Science
Michael W. Kibby
Department of Learning & Instruction
Center for Literacy & Reading Instruction
State University of New York at Buffalo
NSF ROLE Grant REC-0106338
Definition of “CVA”
“Contextual Vocabulary Acquisition” =def
• the acquisition of word meanings from text
– “incidental”
– “deliberate”
• by reasoning about
– contextual cues
– background knowledge
• including hypotheses from prior encounters (if any) with the word
• without external sources of help
– No dictionaries
– No people
Project Goals
• Develop & implement computational theory of
CVA based on human protocols (case studies)
• Translate algorithms into a curriculum
– To improve CVA and reading comprehension in
science, technology, engineering, math (“STEM”)
• Use new case studies, based on the curriculum, to
improve both algorithms & curriculum
People Do “Incidental” CVA
• Know more words than explicitly taught
– Average high-school senior knows ~40K words
⇒ learned ~3K words/school-year (over 12 yrs.)
– But only taught a few hundred/school-year
⇒ Most word meanings learned from context
– “incidentally” (unconsciously)
• How?
People Also Do “Deliberate” CVA
• You’re reading;
• You understand everything you read, until…
• You come across a new word;
• Not in dictionary;
• No one to ask;
• So, you try to “figure out”/“learn”/“acquire” its meaning from:
– its context
– + your background knowledge
• How?
– guess? derive? infer? deduce? educe? construct? predict? …
What does ‘brachet’ mean?
(From Malory’s Morte D’Arthur [page # in brackets])
1. There came a white hart running into the hall with
a white brachet next to him, and thirty couples
of black hounds came running after them. [66]
2. As the hart went by the sideboard, the white
brachet bit him. [66]
3. The knight arose, took up the brachet and rode
away with the brachet. [66]
4. A lady came in and cried aloud to King Arthur,
“Sire, the brachet is mine”. [66]
10. There was the white brachet which bayed at him
fast. [72]
18. The hart lay dead; a brachet was biting on his
throat, and other hounds came behind. [86]
Computational cognitive theory of
how to learn word meanings
• From context
– I.e., text + grammatical info + reader’s prior knowledge
• With no external sources (human, on-line)
– Unavailable, incomplete, or misleading
• Domain-independent
– But more prior domain-knowledge yields better
definitions
• “definition” = hypothesis about word’s meaning
– Revisable each time word is seen
Cassie learns what “brachet” means:
• Background info about: harts, animals, King Arthur, etc.
• No info about: brachets
• Input: formal-language version of simplified English
A hart runs into King Arthur’s hall.
• In the story, B17 is a hart.
• In the story, B18 is a hall.
• In the story, B18 is King Arthur’s.
• In the story, B17 runs into B18.
A white brachet is next to the hart.
• In the story, B19 is a brachet.
• In the story, B19 has the property “white”.
• Therefore, brachets are physical objects.
(deduced while reading;
Cassie believes that only physical objects have color)
--> (defn_noun 'brachet)
(CLASS INCLUSION = (PHYS OBJ)
 structure = nil
 function = nil
 actions = (nil)
 ownership = nil
 POSSIBLE PROPERTIES = ((WHITE))
 synonyms = nil)
I.e., a brachet is a physical object that may be white.
A hart runs into King Arthur’s hall.
A white brachet is next to the hart.
The brachet bites the hart’s buttock.
--> (defn_noun 'brachet)
(CLASS INCLUSION = (ANIMAL)
 structure = nil
 function = nil
 ACTIONS = ((POSSIBLE ACTIONS = (BITE)))
 ownership = nil
 POSSIBLE PROPERTIES = ((WHITE))
 synonyms = nil)
A hart runs into King Arthur’s hall.
A white brachet is next to the hart.
The brachet bites the hart’s buttock.
The knight picks up the brachet.
The knight carries the brachet.
--> (defn_noun 'brachet)
(CLASS INCLUSION = (ANIMAL)
 structure = nil
 function = nil
 ACTIONS = ((POSSIBLE ACTIONS = (BITE)))
 ownership = nil
 POSSIBLE PROPERTIES = ((SMALL WHITE))
 synonyms = nil)
A hart runs into King Arthur’s hall.
A white brachet is next to the hart.
The brachet bites the hart’s buttock.
The knight picks up the brachet.
The knight carries the brachet.
The lady says that she wants the brachet.
--> (defn_noun 'brachet)
(CLASS INCLUSION = (ANIMAL)
 structure = nil
 function = nil
 ACTIONS = ((POSSIBLE ACTIONS = (BITE)))
 ownership = nil
 POSSIBLE PROPERTIES = ((SMALL VALUABLE WHITE))
 synonyms = nil)
A hart runs into King Arthur’s hall.
A white brachet is next to the hart.
The brachet bites the hart’s buttock.
The knight picks up the brachet.
The knight carries the brachet.
The lady says that she wants the brachet.
The brachet bays in the direction of Sir Tor.
[background knowledge: only hunting dogs bay]
--> (defn_noun 'brachet)
((A BRACHET IS A KIND OF (DOG))
 ACTIONS = (POSSIBLE ACTIONS = (BAY BITE))
 FUNCTION = (HUNT)
 structure = nil
 ownership = nil
 synonyms = nil)
I.e., a brachet is a dog that may bay & bite, and that hunts.
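[Illustrative sketch (Python), not the actual SNePS/Lisp implementation: how a definition frame like the ones above might be revised after each passage. All names here (DefinitionFrame, update, gloss) are hypothetical.]

class DefinitionFrame:
    """Revisable hypothesis about a noun's meaning (illustrative only)."""
    def __init__(self):
        self.class_inclusion = None       # most specific known superclass
        self.possible_properties = set()  # e.g., {"white", "small"}
        self.possible_actions = set()     # e.g., {"bite", "bay"}
        self.function = set()             # e.g., {"hunt"}

    def update(self, clues):
        """Merge clues inferred from the current passage + background knowledge.
        Callers pass ever-more-specific classes (phys obj -> animal -> dog)."""
        if "class" in clues:
            self.class_inclusion = clues["class"]
        self.possible_properties |= clues.get("properties", set())
        self.possible_actions |= clues.get("actions", set())
        self.function |= clues.get("function", set())

    def gloss(self):
        props = ", ".join(sorted(self.possible_properties)) or "?"
        acts = ", ".join(sorted(self.possible_actions)) or "?"
        return (f"A brachet is a kind of {self.class_inclusion} "
                f"that may be {props} and may {acts}.")

frame = DefinitionFrame()
# "A white brachet..." + belief that only physical objects have color:
frame.update({"class": "physical object", "properties": {"white"}})
# "The brachet bites the hart's buttock":
frame.update({"class": "animal", "actions": {"bite"}})
# "The brachet bays..." + background knowledge that only hunting dogs bay:
frame.update({"class": "dog", "actions": {"bay"}, "function": {"hunt"}})
print(frame.gloss())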
General Comments
• System’s behavior ≈ human protocols
• System’s definition ≈ OED’s definition:
– A brachet is “a kind of hound which hunts by scent”
Computational cognitive theory of how to learn word meanings from context (cont.)
• 3 kinds of vocabulary acquisition:
– Construct new definition of unknown word
• What does ‘brachet’ mean?
– Fully revise definition of misunderstood word
• Does “smiting” entail killing?
– Expand definition of word used in new sense
• Can you “dress” (i.e., clothe) a spear?
• Initial hypothesis;
Revision(s) upon further encounter(s);
Converges to stable, dictionary-like definition;
Subject to revision
Motivations & Applications
• Part of cognitive-science projects
– Narrative text understanding
– Syntactic semantics (contra Searle’s Chinese-Room Argument)
• Computational applications:
– Information extraction
– Autonomous intelligent agents:
• There can be no complete lexicon
• Agent/IE-system shouldn’t have to stop to ask questions
• Other applications:
– L1 & L2 acquisition research
– Computational lexicography
** Education: improve reading comprehension **
State of the Art
• Vocabulary Learning:
– Some dubious contributions:
• Useless “algorithms”
• Contexts that include definition
– Useful contribution:
• (good) reader’s word-model
= updateable frame with slots & defaults
• Psychology:
– Cues to look for (= slots for frame):
• Space, time, value, properties, functions, causes, classes,
synonyms, antonyms
– Can understand a word w/o having a definition
• Computational Linguistics:
– Systems need scripts, human informants, ontologies
• Not needed in our system
– CVA ≠ Word-Sense Disambiguation
• Essay question vs. multiple-choice test
State of the Art: Computational Linguistics
• Granger 77: “Foul-Up”
– Based on Schank’s theory of “scripts”
– Our system not restricted to scripts
• Zernik 87: self-extending phrasal lexicon
– Uses human informant
– Our system is really “self-extending”
• Hastings 94: “Camille”
– Maps unknown word to known concept in ontology
– Our system can learn new concepts
• Word-Sense Disambiguation:
– Given ambiguous word & list of all meanings,
determine the “correct” meaning
• Multiple-choice test
– Our system: given new word, compute its meaning
• Essay question
State of the Art: Vocabulary Learning (I)
• Elshout-Mohr/van Daalen-Kapteijns 81,87:
– Application of Winston’s AI “arch” learning theory
– (Good) reader’s model of new word = frame
• Attribute slots, default values
• Revision by updating slots & values
– Poor readers update by replacing entire frames
– But EM & vDK used:
• Made-up words
• Carefully constructed contexts
– Presented in a specific order
Elshout-Mohr & van Daalen-Kapteijns
Experiments with neologisms in 5 artificial contexts
• When you are used to a view it is depressing when you live in
a room with kolpers.
– Superordinate information
• At home he had to work by artificial light because of those
kolpers.
• During a heat wave, people want kolpers, so sun-blind sales
increase.
– Contexts showing 2 differences from the superordinate
• I was afraid the room might have kolpers, but plenty of
sunlight came into it.
• This house has kolpers all summer until the leaves fall out.
– Contexts showing 2 counterexamples due to the 2
differences
State of the Art: Psychology
• Sternberg et al. 83,87:
– Cues to look for (= slots for frame):
• Spatiotemporal cues
• Value cues
• Properties
• Functions
• Cause/enablement information
• Class memberships
• Synonyms/antonyms
• Johnson-Laird 87:
– Word understanding ≠ definition
– Definitions aren’t stored
Sternberg
• The couple there on the blind date was not
enjoying the festivities in the least. An acapnotic,
he disliked her smoking; and when he removed his
hat, she, who preferred “ageless” men, eyed his
increasing phalacrosis and grimaced.
• To acquire new words from context:
– Distinguish relevant/irrelevant information
– Selectively combine relevant information
– Compare this information with previous beliefs
State of the Art: Vocabulary Learning (II)
Some dubious contributions:
• Mueser 84: “Practicing Vocabulary in Context”
– BUT: “context” = definition !!
• Clarke & Nation 80: a “strategy” (algorithm?):
1. Look at word & context; determine POS
2. Look at grammatical context
• E.g., “what does what”?
3. Look at wider context
• [E.g., search for Sternberg-like clues]
4. Guess the word; check your guess
CVA: From Algorithm to Curriculum
• “guess the word” = “then a miracle occurs”
• Surely, we computer scientists can “be more explicit”!
CVA: From algorithm to curriculum …
• Treat “guess” as a procedure call
– Fill in the details with our algorithm
– Convert the algorithm into a curriculum
• To enhance students’ abilities to use deliberate CVA strategies
– To improve reading comprehension of STEM texts
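[A minimal sketch (Python) of the “procedure call” idea: Clarke & Nation’s four steps with step 4 made explicit. The helper functions are toy stand-ins, hypothetical, not the project’s actual algorithm.]

def determine_pos(word, sentence):
    # Step 1 (toy heuristic): a word right after an article is treated as a noun.
    tokens = sentence.lower().split()
    i = tokens.index(word)
    return "noun" if i > 0 and tokens[i - 1] in {"a", "an", "the"} else "unknown"

def grammatical_clues(word, sentence):
    # Step 2: "what does what?" -- note the verb that follows the word, if any.
    tokens = sentence.lower().split()
    i = tokens.index(word)
    return {"acts": {tokens[i + 1]}} if i + 1 < len(tokens) else {}

def wider_clues(sentence, background):
    # Step 3: Sternberg-like cues contributed by background knowledge.
    return {cue: info for cue, info in background.items() if cue in sentence.lower()}

def define(word, sentence, background):
    # Step 4 is no longer "guess": build a revisable hypothesis from the clues.
    return {"word": word,
            "pos": determine_pos(word, sentence),
            "clues": {**grammatical_clues(word, sentence),
                      **wider_clues(sentence, background)}}

background = {"bays": {"class": "hunting dog"}}
print(define("brachet", "the brachet bays at him", background))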
… and back again!
• Use knowledge gained from CVA case studies to
improve the algorithm
• I.e., use Cassie to learn how to teach humans
& use humans to learn how to teach Cassie
Question (objection):
Why not use a dictionary?
Because:
• People are lazy (!)
• Dictionaries are not always available
• Dictionaries are always incomplete
• Dictionary definitions are not always useful
– ‘chaste’ =df pure, clean / “new dishes are chaste”
• Most words learned via incidental CVA,
not via dictionaries
Question (objection):
Teaching computers ≠ teaching humans!
But:
• Our goal:
– Not: teach people to “think like computers”
– But: to explicate computable & teachable methods to hypothesize
word meanings from context
• AI as computational psychology:
– Devise computer programs that are essentially faithful simulations
of human cognitive behavior
– Can tell us something about human mind.
• We are teaching a machine, to see if what we learn in
teaching it can help us teach students better.
Implementation
• SNePS (Stuart C. Shapiro & SNeRG):
– Intensional, propositional semantic-network
knowledge-representation & reasoning system
– Node-based & path-based reasoning
• I.e., logical inference & generalized inheritance
– SNeBR belief revision system
• Used for revision of definitions
– SNaLPS natural-language input/output
– “Cassie”: computational cognitive agent
How It Works
• SNePS represents:
– background knowledge + text information
in a single, consolidated semantic network
• Algorithms deductively search network for
slot-fillers for definition frame
• Search is guided by desired slots
– E.g., prefers general info over particular info,
but takes what it can get
Noun Algorithm
Find or infer:
• Basic-level class memberships (e.g., “dog”, rather than “animal”)
– else most-specific-level class memberships
– else names of individuals
• Properties of Ns (else, of individual Ns)
• Structure of Ns (else …)
• Functions of Ns (else …)
• Acts that Ns perform (else …)
• Agents that perform acts w.r.t. Ns & the acts they perform (else …)
• Ownership
• Synonyms
Else do: “syntactic/algebraic manipulation”
• “Al broke a vase” → a vase is something Al broke
– Or: a vase is a breakable physical object
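[Sketch of the fallback preference for the class-inclusion slot; the kb dict below is a toy stand-in for the SNePS network, and all names are hypothetical.]

BASIC_LEVEL = {"dog", "cat", "horse", "chair"}   # toy list of basic-level classes

def class_inclusion(word, kb):
    """kb maps a word to the classes its instances are known to belong to,
    ordered most-specific first, plus names of known individuals."""
    entry = kb.get(word, {})
    classes = entry.get("classes", [])
    basic = [c for c in classes if c in BASIC_LEVEL]
    if basic:
        return basic[0]                      # prefer "dog" over "animal"
    if classes:
        return classes[0]                    # else most-specific known class
    return entry.get("individuals")          # else names of individuals

kb = {"brachet": {"classes": ["dog", "animal", "physical object"],
                  "individuals": ["B19"]}}
print(class_inclusion("brachet", kb))        # -> dog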
Verb Algorithm
• Find or infer:
– Predicate structure:
• Categorize arguments/cases
– Results of V’ing:
• Effects, state changes
– Enabling conditions for V
• Future work:
– Classification of verb-type
– Synonyms
• [Also: preliminary work on adjective algorithm]
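[Sketch, with hypothetical names, of the kind of frame the verb algorithm fills: predicate structure, results of V’ing, enabling conditions.]

def verb_frame(verb, agent_category=None, object_category=None,
               effects=(), enabling_conditions=()):
    return {"verb": verb,
            "predicate structure": {"agent": agent_category, "object": object_category},
            "results of V'ing": set(effects),                  # effects / state changes
            "enabling conditions": set(enabling_conditions)}   # none found yet -> empty

# After the early 'smite' passages (cf. the belief-revision example below):
print(verb_frame("smite",
                 agent_category="person",
                 object_category="person",
                 effects={"object is hit", "possibly: object is dead"}))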
Belief Revision
• Used to revise definitions of words used with a different sense from the current meaning hypothesis
• SNeBR (ATMS; Martins & Shapiro 88):
– If inference leads to a contradiction, then:
1. SNeBR asks user to remove culprit(s)
2. & automatically removes consequences inferred from culprit
• SNePSwD (SNePS w/ Defaults; Martins & Cravo 91)
– Currently used to automate step 1, above
• AutoBR (Johnson & Shapiro, in progress)
– Will replace SNePSwD
Revision & Expansion
• Removal & revision being automated via SNePSwD by ranking all propositions with kn_cat (from most certain to least certain):
– intrinsic: info re: language; fundamental bkgd info (“before” is transitive)
– story: info in text (“King Lot rode to town”)
– life: bkgd info w/o variables or inference (dogs are animals)
– story-comp: info inferred from text (King Lot is a king, rode on a horse)
– life-rule.1: everyday commonsense bkgd info (BearsLiveYoung(x) ⇒ Mammal(x))
– life-rule.2: specialized bkgd info (x smites y ⇒ x kills y by hitting y)
– questionable: already-revised life-rule.2; not part of input
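[Sketch of kn_cat-based culprit selection during belief revision; the propositions and numeric ranks below are illustrative only, not the system’s actual data structures.]

KN_CAT_RANK = {"intrinsic": 0, "story": 1, "life": 2, "story-comp": 3,
               "life-rule.1": 4, "life-rule.2": 5, "questionable": 6}

def choose_culprit(support_set):
    """support_set: (proposition, kn_cat) pairs underlying a contradiction.
    Retract the least certain one (highest rank number)."""
    return max(support_set, key=lambda pair: KN_CAT_RANK[pair[1]])

support = [("King Arthur drew Excalibur", "story"),
           ("Dead people do not act", "life-rule.1"),
           ("x smites y => x kills y by hitting y", "life-rule.2")]
print(choose_culprit(support))   # -> the specialized 'smite' rule is revised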
Belief Revision: “smite”
• Misunderstood word; 2-stage “subtractive” revision
• Bkgd info includes:
(*) smite(x,y,t) ⇒ hit(x,y,t) & dead(y,t) & cause(hit(x,y,t),dead(y,t))
P1: King Lot smote down King Arthur
D1: If person x smites person y at time t, then x hits y at t, and y is dead at t
Q1: What properties does King Arthur have?
R1: King Arthur is dead.
P2: King Arthur drew Excalibur.
Q2: When did King Arthur do this?
• SNeBR is invoked:
– KA’s drawing E is inconsistent with being dead
– (*) replaced: smite(x,y,t) ⇒ hit(x,y,t) & [dead(y,t) ⇒ cause(hit(x,y,t),dead(y,t))]
D2: If person x smites person y at time t, then x hits y at t & possibly (y is dead at t)
P3: [another passage in which ~(smiting ⇒ death)]
D3: If person x smites person y at time t, then x hits y at t
Belief Revision: “dress”
• “Additive” revision
• Bkgd info includes:
(1) dresses(x,y) ⇒ ∃z[clothing(z) & wears(y,z)]
(2) Spears don’t wear clothing
(both kn_cat = life-rule.1)
P1: King Arthur dressed himself.
D1: A person can dress itself; result: it wears clothing.
P2: King Claudius dressed his spear.
[Cassie infers: King Claudius’s spear wears clothing.]
Q2: What wears clothing?
• SNeBR is invoked:
– KC’s spear wears clothing is inconsistent with (2).
– (1) replaced: dresses(x,y) ⇒ (∃z[clothing(z) & wears(y,z)] ∨ NEWDEF)
– Replace (1), not (2), because of the verb in the antecedent of (1) (Gentner)
P3: [other passages in which dressing spears precedes fighting]
D2: A person can dress a spear or a person;
result: person wears clothing or person is enabled to fight
Using SNePS Networks
• See handout
Research Methodology
• AI team:
– Develop, implement, & test better computational
theories of CVA
– Translate into English for use by reading team
• Reading team:
– Convert algorithms to curriculum
– Think-aloud protocols
• To gather new data for use by AI team
• As curricular technique (case studies)
Problem in Converting
Algorithm into Curriculum
• “A knight picks up a brachet and carries it away …”
• Cassie:
– Has “perfect memory”
– Is “perfect reasoner”
– Automatically infers that brachet is small
• People don’t always realize this:
– May need prompting: How big is the brachet?
– May need relevant background knowledge
– May need help in drawing inferences
• Teaching CVA =? teaching general reading comprehension
– Vocabulary knowledge correlates with reading comprehension
CVA & Science Education
• Original goal: CVA in & for science education
– Use CVA to improve reading of STEM materials
• A side effect: CVA as science education
– There are no ultimate authorities to consult
• No answers in the back of the book of life!
• As true for STEM as it is for reading about STEM
– ⇒ Goal of education =
• To learn how to learn on one’s own
• Help develop confidence & desire to use that skill
– CVA as sci. method in miniature furthers this goal:
• Find clues/evidence (gathering data)
• Integrate them with personal background knowledge
• Use together to develop new theory (e.g., new meaning)
• Test/revise new theory (on future encounters with word)
Conclusion
Developing a computational theory of CVA,
which can become …
• a useful educational technique for improving
[STEM] vocabulary and reading comprehension
• a model of the scientific method
• a useful tool for learning on one’s own.
Participants
Santosh Basapur (IE)
Aishwarya Bhave (CSE)
*Marc Broklawski (CSE)
Chien-chih Chi (PHI)
Justin Del Vecchio (CSE)
Karen Ehrlich (Fredonia)
Jeffrey Galko (PHI)
Christopher Garver (CSE)
Paul Gestwicki (CSE)
Kazuhiro Kawachi (LIN)
Adam Lammert (Vassar/ugrad)
Amanda MacRae (LAI)
Brian Morgan (LAI)
*Scott Napieralski (CSE)
Vikranth Rao (CSE/ugrad)
Laurie Schimpf (LAI)
Ashwin Shah (CSE)
Stuart C. Shapiro (UB/CSE)
Anuroopa Shenoy (CSE)
Rajeev Sood (CSE)
Taha Suglatwala (CSE)
Matthew Sweeney (ENG/CSE/ugrad)
Matthew Watkins (CEN)
*Karen Wieland (LAI)
Yulai Xie (CSE)
Valerie Raybold Yakich (GEO)
& SNeRG members
(+ new students, Spring 2003)
(* = supported on NSF grant)
Web Page
http://www.cse.buffalo.edu/~rapaport/cva.html