A GOMS REPRESENTATION OF TASKS FOR INTELLIGENT AGENTS

From: AAAI Technical Report SS-95-05. Compilation copyright © 1995, AAAI (www.aaai.org). All rights reserved.
Julio K. Rosenblatt¹
Alonso H. Vera²
Hughes Research Labs
3011 Malibu Canyon Road
Malibu, CA 90265
Abstract

We describe a GOMS model of a ship-board Radar Operator's behavior while monitoring air and sea traffic. GOMS is a technique that has been successfully used in Human-Computer Interaction to generate engineering models of human performance. Based on the GOMS model developed, we identified those portions of the task where an intelligent agent would be most able to assist operators in the performance of their duties, and the nature of the knowledge that will be required for the task. We present the results of a simulated execution of the model in a sample scenario, which predicted the operator's responses with a high degree of accuracy.

Keywords: GOMS, intelligent agent, task model, human-computer interaction

Introduction

Our goal is to determine the domain knowledge required to implement an intelligent agent which assists a human user in performing a computer-based task. A central characteristic of intelligent behavior is that it is purposeful, i.e., the agent is executing a task in order to achieve a goal. Consequently, to help a user accomplish a goal, an intelligent agent must have knowledge about that goal. GOMS is a technique that has been used in the study of human-computer interaction to model user knowledge and behavior at various levels of description. We investigate here how GOMS models may be used by intelligent agents as a means of understanding the actions of other agents and users so that they may interact and cooperate with them in a consistent and reasonable manner.

A GOMS model consists of a set of Goals, Operators, Methods, and Selection rules necessary to accomplish a particular task. A hierarchy of goals is created, and within that hierarchy a set of methods provides a functional description of a task. Selection rules distinguish between various operational cases and account for the idiosyncrasies of individual users. Operators provide a low-level description of the actions finally performed. Thus, GOMS provides a uniform structure for representing the intentional, functional, and implementational levels of behavior. A GOMS model of a particular task might be used by an intelligent agent to understand the task at hand and the current state within that task. Acting alone, the agent can use the model to decide the next action; acting as an assistant, the agent can use the model to understand what other agents or the user are doing, i.e., what their beliefs and priorities are, and tailor its actions and recommendations appropriately. Similarly, a GOMS model can be used for plan recognition in a multiagent domain, to reason about what should be communicated, and to determine in what context the dialogue is taking place.

In this paper we describe a GOMS model of Radar Operators monitoring air and sea traffic on board a ship. The Radar Operator's task has a large amount of routine content, even when things get busy. It can be high or low in time pressure, depending on the situation, and the operator must constantly deal with a great deal of information. The arrival of new information often results in the creation of new goals or a reprioritization of existing goals; therefore, the system must be highly reactive. However, rational, consistent behavior also demands that a course of action, once chosen, should be pursued until its goal is met or a goal of higher priority is generated.

The task is very interactive, between the operator and other members of the crew, as well as between the operator and the radar screen itself. The work presented here uses a methodology that was originally developed to address routine expert behavior on non-interactive tasks; recent work has indicated that this methodology yields excellent results when applied to interactive tasks as well (John, Vera and Newell, 1994; Gray, John and Atwood, 1993; Endestad and Meyer, 1993). This report presents a model of the Radar Operator's knowledge and behavior.

A Model of a Radar Operator

We have created a model of the radar operation task by decomposing the Radar Operator's actions into Goals, Operators, Methods, and Selection rules (GOMS) as first proposed by Card, Moran, and Newell (1983). A GOMS model begins with the concept of a top-level goal which the user seeks to achieve, and a series of unit tasks that the user performs repeatedly until there are no tasks left. The classic example is that of a typist using a word-processor to make corrections that have been marked on a printed copy of a manuscript; the top-level goal is to edit the on-line version of the manuscript, and the unit task is simply to make the next correction (Card, Moran, and Newell, 1983).
1. Currently at Carnegie Mellon University (jkr@cmu.edu)
2. Currently at Hong Kong University (vera@hkucc.hku.hk)
We have written the GOMS model using NGOMSL, or Natural GOMS Language (Kieras, 1988; 1994). It takes the basic precepts of GOMS and defines a programming language based on those ideas, allowing the creation of a model where flow of control is clearly specified and which in principle could be run on a computer; indeed, a compiler is currently being written for that purpose. The process of fully specifying the Radar Operator's decisions and actions guided us to create a model that is complete and accurate to the level of detail in which it is specified, and it aided us in the task of knowledge acquisition as well; it also pointed out those places where there were gaps or inconsistencies in our knowledge of the domain.
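Because NGOMSL makes the flow of control explicit, a model written in it can in principle be executed directly. The fragment below is a minimal illustrative sketch, in Python, of one way such methods could be represented as data and stepped through; it is not the compiler mentioned above, and the class, function, and example goal names are ours rather than part of the Radar Operator model.

# Illustrative sketch only: a hypothetical encoding of NGOMSL-style methods
# as Python data, showing how their explicit flow of control can be executed.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Step:
    # One NGOMSL-style step: a callable that inspects or updates the task
    # context and returns "continue" or "return" (return with goal accomplished).
    action: Callable[["Model", dict], str]

@dataclass
class Method:
    goal: str
    steps: List[Step]

@dataclass
class Model:
    methods: Dict[str, Method] = field(default_factory=dict)

    def accomplish(self, goal: str, context: dict) -> None:
        # Execute the method for a goal step by step, as an interpreter would.
        for step in self.methods[goal].steps:
            if step.action(self, context) == "return":
                break

model = Model()
model.methods["Update Contact Information"] = Method(
    goal="Update Contact Information",
    steps=[
        Step(lambda m, ctx: "return" if not ctx.get("is_track") else "continue"),
        Step(lambda m, ctx: ctx.update(updated=True) or "return"),
    ],
)
model.accomplish("Update Contact Information", {"is_track": True})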
Structure of the Model

One of the first and most basic decisions that had to be made was how to define the unit task for the Radar Operator; the organization of the top-level goal and the unit task would affect the entire structure of our GOMS model. In the case of text editing, the structure was easily defined by making each marked correction a unit task. The analogous decomposition in our domain would be to define the unit task as tracking each object on the radar screen. However, one obvious reason that such a scheme could neither model the Radar Operator's behavior accurately nor provide a reasonable framework for defining a system is that the task of tracking an object has no well-defined end; the task might never be completed and all other radar contacts would be ignored as a result. In addition, the Radar Operators must also receive and respond to orders which may arrive at any time, so these must be incorporated into the definition of a unit task as well.

In search of a better definition of the unit task, we turned to the training manual used by the Radar Operators (Operations Specialist Training Manual), where we found this statement: "Information handling comprises five major functions - gathering, processing, displaying, evaluating, and disseminating information and orders." We attempted to use these five functions for our unit task structure, but our efforts were complicated by another emergent task structure; as we examined a sample Radar Operator scenario, it became apparent that the Radar Operator went through various stages of identification for each new contact: establishing tentative track, air or surface, commercial or military, friendly or hostile, etc. The challenge was to create a goal structure that persisted in the incremental acquisition of knowledge about a given tracked object while still providing reactiveness to new information and situations as they developed. The manual continues, "All information handling must be considered a continuous and growing process that ultimately furnishes a composite picture of a situation, enabling the commanding officer to make a final evaluation and give orders for action."

The solution we ultimately decided upon was to select a contact, determine which stage of identification should be performed next, and go through the steps of gathering, processing, displaying, evaluating, and disseminating information on this fine-grained unit task. Once done with a particular stage of the identification process on a particular contact, the model returns to the top level, where new orders may be received and acted upon, or a task that has become more urgent may be selected for execution. As in John and Vera (1992), a relatively shallow goal stack and a carefully designed set of selection rules were used to make the model reactive to external changes. The resulting goal and method hierarchy for the intentional and functional levels of the task is shown in Figure 1.

Contents of the Model

As can be seen in Figure 1, there are three sets of selection rules in our GOMS model. These correspond to points in the execution of a unit task where a decision must be made as to how to proceed because there are multiple methods to accomplish a goal. The first selection rule, Select Next Task, simply chooses the Execute Order method if a new order has been received; otherwise the method for Monitor Radar Contacts is selected. If an order is to be executed, Execute Ordered Task selects the method that is appropriate for carrying out that order. If no order has been received, then the Execute Unit Task selection rule must decide, for a given contact, which is the appropriate subtask; this depends on what information has already been gathered about that contact, which is reflected by the current label assigned to it. The corresponding selection rule, written in NGOMSL, is as follows:

Selection rule set for goal: Execute Unit Task
If <contact-label> is New and contact is under local control, then accomplish goal: Establish Tentative Track.
If <contact-label> is Tentative Track and contact is under local control, then accomplish goal: Establish Air or Surface.
If <contact-label> is Unknown and contact is under local control, then accomplish goal: Establish Friend or Foe.
If <contact-label> is not Tentative Track or Unknown and contact is under local control, then accomplish goal: Update Contact Information.
If contact is not under local control, then accomplish goal: Update Contact Information.
Return with goal accomplished.
This selection rule uses perceptual information that is available to the Radar Operator in order to choose the appropriate method. This is true of the other two sets of selection rules as well. Very little beyond the ability to understand symbols and orders is encapsulated in the selection rules. The Radar Operator's knowledge is contained in the methods of the model.
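For readers who prefer conventional code, the selection rule above can be transcribed almost line for line. The following Python sketch is a hypothetical rendering of the Execute Unit Task rule set, with the function name and label strings chosen by us; it is offered only to show how little knowledge the selection rules themselves contain.

def select_unit_task(contact_label: str, under_local_control: bool) -> str:
    # Hypothetical transcription of the "Execute Unit Task" selection rule set:
    # choose the next subgoal from the contact's current label and control status.
    if not under_local_control:
        return "Update Contact Information"
    if contact_label == "New":
        return "Establish Tentative Track"
    if contact_label == "Tentative Track":
        return "Establish Air or Surface"
    if contact_label == "Unknown":
        return "Establish Friend or Foe"
    # Any other label under local control falls through to routine updating.
    return "Update Contact Information"

assert select_unit_task("New", True) == "Establish Tentative Track"
assert select_unit_task("Friendly", False) == "Update Contact Information"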
Figure 1: Radar Operation Goal Hierarchy. (The figure's legend distinguishes methods, steps, optional steps, selection rules, and selected methods.)
Within the GOMS model of a Radar Operator, the goal hierarchy captures the structure of the decision-making process involved; its topology reflects knowledge of the nature of the task and the control knowledge needed to carry it out.
The detailed knowledge which makes these decisions and the achievement of goals possible is embedded in the methods of the model. For example, the Establish Air or Surface method describes the steps that are taken to achieve that goal, including the specific steps the Radar Operator must take to make the determination:

Method for goal: Establish Air or Surface
Step 1. Accomplish goal: Gather and Process Movement Information.
Step 2. Decide: If contact is not a track, then remove tentative track and return with goal accomplished.
Step 3. Decide: If contact type is determined to be Air, then accomplish goal of: Display Contact Information Unknown Air.
If contact type is determined to be Surface, then accomplish goal of: Display Contact Information Unknown Surface.
If contact type is not determined, then return with goal accomplished.
Step 4. Accomplish goal: Evaluate and Disseminate Information.
Step 5. Return with goal accomplished.

The implementational (or keystroke) level is created by further decomposing the structure of the task into primitive operators, such as reading text, pointing and clicking the mouse, etc. For example, the first step in the Establish Air or Surface method is to invoke the subgoal Gather and Process Movement Information, implemented by the following method whose steps consist of operators that are not analyzed further:

Method for goal: Gather and Process Movement Information
Step 1. Decide: If contact is under local control, then perform position correction.
Step 2. Read contact information.
Step 3. Decide: If CPA needs to be evaluated, then compute contact CPA.
Step 4. Process contact information.
Step 5. Return with goal accomplished.
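To make the relationship between the functional and keystroke levels concrete, the hypothetical sketch below renders the two methods above in Python. The primitive operators (read_contact_info, compute_cpa, and so on) are placeholder stubs standing in for the perceptual, cognitive, and motor actions named in the model; they are not real interfaces, and the dictionary keys are our own.

# Hypothetical rendering of the two methods above, for illustration only.
def perform_position_correction(contact): contact["position_corrected"] = True
def read_contact_info(contact): return dict(contact)      # perceptual operator
def compute_cpa(contact): contact["cpa"] = "computed"      # cognitive operator
def process_contact_info(contact): contact["processed"] = True

def gather_and_process_movement_information(contact):
    if contact.get("under_local_control"):                 # Step 1
        perform_position_correction(contact)
    read_contact_info(contact)                             # Step 2
    if contact.get("cpa_needed"):                          # Step 3
        compute_cpa(contact)
    process_contact_info(contact)                          # Step 4
    # Step 5: return with goal accomplished

def establish_air_or_surface(contact, display, evaluate_and_disseminate):
    gather_and_process_movement_information(contact)       # Step 1
    if not contact.get("is_track"):                        # Step 2
        contact["label"] = None                            # remove tentative track
        return
    kind = contact.get("type")                             # Step 3
    if kind == "Air":
        display(contact, "Unknown Air")
    elif kind == "Surface":
        display(contact, "Unknown Surface")
    else:
        return                                             # type not yet determined
    evaluate_and_disseminate(contact)                      # Step 4
    # Step 5: return with goal accomplished

establish_air_or_surface(
    {"under_local_control": True, "is_track": True, "type": "Air"},
    display=lambda c, label: c.__setitem__("label", label),
    evaluate_and_disseminate=lambda c: None,
)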
Reactivity of the GOMS Model

When developing a model of a dynamic, real-world task, it is of critical importance to capture and retain the qualities that allow humans to perform the task as well as they do. One of the key characteristics of human performance in this domain is reactivity. The operators in the scenario are able to quickly react to new contacts and respond to new orders. They can stop whatever task they are doing, begin a new one, execute it, and return to the original task. The GOMS model presented here must be able to reproduce this reactivity in a natural way that results in sequences of behaviors like those of the operators in the scenario.

The GOMS model of the operator derives its reactivity from the organization of its methods and selection rules. At any given moment in the model's behavior, the goal stack is relatively shallow because there are very few chained methods that get called as a sequence. Reactivity is not achieved by forcing the model to check for changes in the world (e.g., new orders or contacts) within each method, but instead by returning to the top-level goal after completing a portion of prioritized sub-tasks. The top-level method can then check for changes in the world.

Avoiding long linked sequences of methods allows the model to check for important changes in its environment in a way that does not overload working memory nor unnecessarily interrupt routine behaviors. The model was designed in such a way as to simulate the operator's sequence of contact-identifying behaviors while remaining sensitive to changes that affect its goal prioritization. It is thus able to combine the routine collection of information about contacts, the detection of new targets, and the execution of orders in a cognitively plausible way.

Using Predictive Models for Agent Assisted Decision-Making

Certain procedures require the operator to perform complex heuristic judgments. Rather than attempting to fully automate such decision-making processes, an intelligent agent could instead provide a great deal of assistance by presenting the results of partially processed information to the operator. In addition to providing information relevant to a decision at the appropriate time, the agent may also be responsible for determining when the operator is unaware of important information. The agent may have an explicit model of the task such as the one described here that can be used to determine what the operator's current goal is. If the operator diverges from the behavior predicted by the model, the agent can map the new behavior onto the model. The agent can then assess whether the operator's divergent behavior is warranted by the current situation. If it is not, the agent can recommend alternative courses of action that the operator should follow. If the operator's behavior is warranted by the situation, then the agent can update its model or its parameters so that the agent may continue assisting the operator on the appropriate task. This scheme can be used in general when deciding when it is appropriate to communicate information between agents.

By creating a knowledge-level description of a task (Newell, 1982), GOMS models provide the means to predict the user's goals, beliefs, and priorities, and therefore her actions. At any given point in the problem-solving process, there exists the current goal stack; associated with each goal is a method for achieving that goal, consisting of a series of primitive perceptual, cognitive, and motor operators that are to be executed. For example, the operator begins execution of the Operate Radar Station method, where she perceives information such as new radar contacts and orders; within the selection rule Select Next Task, she must then choose which task will be performed next; and in the Move to Target Contact method, she uses the mouse to move to and click on the object of interest.

Those operators that are directly observable provide cues for recognizing the mental state of the user; unobservable cognitive operators and the changes in belief that they generate must be hypothesized. We make the assumption that the user behaves in a rational manner, so that her choice of action reflects a consistent set of beliefs, priorities, and decision-making criteria; this corresponds to the agency hypothesis of Brafman and Tennenholtz (in press).

Goals, Priorities, and Beliefs

To work cooperatively with another agent or user, an automated agent must be aware of what goals the other agent is trying to achieve, and where it is in the process of achieving them. Furthermore, the efficacy of interagent cooperation and communication is greatly enhanced by an understanding of the other's current priorities and beliefs.

The operation of a radar station contains a large degree of routine behavior, which simplifies the task of understanding what the operator is doing and why she is doing it. However, there are potentially conflicting goals that must be satisfied when choosing the next task to perform. This process of goal selection and prioritization can be understood within the context of a small class of basic motivations (Simon, 1967). Morignot & Hayes-Roth (in press) have interpreted these motivations within the domain of mobile robots; we will attempt to create analogous instantiations for the motivations of a Radar Operator (see Table 1).

The choice of which goal is to be satisfied next is specified by the Selection Rules in the GOMS model. The ordering of these goals for the Radar Operator is:

Affiliation > Safety > Achievement > Learning > Physiological
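To illustrate how such an ordering might drive the reactive top level described above, the sketch below shows a hypothetical control loop that re-enters goal selection after each unit task (see also Table 1 below). The world-state keys, and any goal names not taken from the model, are placeholders of our own rather than part of the published model.

# Hypothetical sketch of a reactive top-level loop: the goal stack stays shallow
# because control returns here after every unit task, and the next goal is chosen
# by the ordering Affiliation > Safety > Achievement > Learning > Physiological.
def select_next_task(world):
    if world.get("new_order"):                 # Affiliation: follow orders
        return "Execute Ordered Task", world["new_order"]
    if world.get("threats"):                   # Safety: detect threats (identify them first)
        return "Execute Unit Task", world["threats"][0]
    if world.get("contacts_to_update"):        # Achievement: track contacts, report info
        return "Execute Unit Task", world["contacts_to_update"][0]
    if world.get("unknown_areas"):             # Learning: observe unknown areas
        return "Update Radar Range", None
    return "Track Reference Point", None       # Physiological: remain alert

def operate_radar_station(world, accomplish, on_watch):
    while on_watch():
        goal, argument = select_next_task(world)   # re-examine the world every pass
        accomplish(goal, argument)                 # one unit task, then return to the top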
Generic Motivation    Operator's Motivation
Physiological         remain alert
Safety                detect threats
Affiliation           follow orders
Achievement           track contacts, report info
Learning              observe unknown areas

Table 1: Motivations for the Radar Operator's task.

If the operator has received an order, she follows it; otherwise she must determine if there are any immediate or potential threats to the safety of the ship. If neither of those conditions exists, then the routine goals of monitoring the position and course of radar contacts and of reporting any significant new information take precedence. Updating the radar range and possibly obtaining new contacts must occasionally be performed as well. Finally, in lieu of any other activity, the Radar Operator will resort to behavior such as "tracking" the position of an island in order to maintain alertness (this sort of task is not uncommonly assigned to operators for just this purpose).

The updating of radar range has a relatively low priority; however, it must occasionally be performed even when there are other higher priority objectives to be met. Consequently, a strict prioritization may not be appropriate. A more dynamic scheme can be used to account for those instances where the operator chooses to take an action that would otherwise appear to be irrational. Objective functions that take into account factors such as the estimated utility of a new piece of information could be used for such a purpose (Brafman and Tennenholtz, in press); these can be incorporated into a GOMS model via complex cognitive operators.

When selecting the next task to be performed, the agent can estimate the priority of processed information in order to focus the user's attention on the most significant and urgent developments, and to adapt to the user's priorities when they diverge from those predicted by the model. As the user evaluates new information, the agent must update its model of the user's beliefs; these can be inferred, via the rational agent hypothesis, from the subsequent actions taken by the user. As described by Brafman and Tennenholtz, each action is chosen based on maximizing a decision criterion such as maximin or average utility, which in turn depends on the current set of plausible beliefs being held by the user. Thus, by observing the user's actions, inferences can be drawn as to what the user must believe, as well as what she may possibly believe.

These cases represent those aspects of the task where an intelligent agent, armed with a model of the operator's knowledge and behavior, can make significant contributions by knowing the current goal of the operator. The agent can anticipate the information the operator will require to accomplish her goals, present that information in a useful form, and recommend actions based on the information.

Working Memory Load

Due to their procedural nature, GOMS models also provide the means to predict a user's working memory load. At any given point in the problem-solving process, there exists the current goal stack; associated with each goal is a method for achieving that goal, and each method explicitly states what information it accesses and must therefore store in working memory for the duration of that method's execution. For example, referring to the model shown in Figure 1, the operator begins execution of the Operate Radar Station method, where she must retain whether or not a new order has been received, and if so what that order is. Within the selection rule Select Next Task, the operator must then choose which task she will perform next, and remember that information, and so on. Table 2 shows a trace of which variables are retained within each method, and the total working memory load at that point in the goal stack.

Method/Rule                                Variables Retained                      WM
Operate Radar Station                      (order = nil)                           0
Select Next Task                           none                                    0
Monitor Radar Contacts                     target_contact = Casper                 1
Move to Target Contact                     contact_label = Unknown                 2
Execute Unit Task                          none                                    2
Establish Friend or Foe                    none                                    2
Gather and Process Movement Information    bearing = 24.5, range = 32,             7
                                           altitude = 30,000, speed = 500,
                                           heading = 69.3
Display Contact Information <new_label>    <new_label = Hostile>
                                           (contact_label = Hostile)
Evaluate and Disseminate Information       none                                    7

Table 2: Working Memory Load during task execution.
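The bookkeeping behind Table 2 can be reproduced mechanically. The sketch below is a hypothetical illustration, using the values from the table, of tracking the variables each active method retains and summing them to obtain the working-memory load at each point in the goal stack; the class and method names are ours.

# Hypothetical sketch of the working-memory bookkeeping behind Table 2: each
# active method contributes the variables it retains, and the load is their running total.
class WorkingMemoryTrace:
    def __init__(self):
        self.frames = []     # one frame of retained variables per active method
        self.rows = []       # (method, load) pairs, in the spirit of Table 2

    def enter(self, method, retained):
        self.frames.append(dict(retained))
        self.rows.append((method, sum(len(f) for f in self.frames)))

    def leave(self):
        self.frames.pop()

wm = WorkingMemoryTrace()
wm.enter("Monitor Radar Contacts", {"target_contact": "Casper"})
wm.enter("Move to Target Contact", {"contact_label": "Unknown"})
wm.enter("Execute Unit Task", {})
wm.enter("Establish Friend or Foe", {})
wm.enter("Gather and Process Movement Information",
         {"bearing": 24.5, "range": 32, "altitude": 30000, "speed": 500, "heading": 69.3})
print(wm.rows[-1])   # ('Gather and Process Movement Information', 7)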
In the present scenario, the operator may often be in a situation where her working memory is overloaded. For example, if there are many new contacts as well as orders to be executed, the operator may not be able to retain the necessary sequence of behaviors. By using the model to evaluate situations where the operator's working memory capacity might be overwhelmed, it is possible to predict when behavior may deviate from what is expected, and to determine when an agent can most effectively assist.

Comparison of Predicted versus Actual Radar Operator Behavior

A simulated execution of the GOMS model was conducted on a test scenario; this resulted in a sequence of behaviors that the model would perform if it were operating the radar station. We then compared this sequence of behaviors with that of the operators in the scenario. Table 3 summarizes the results of this analysis.

                          MODEL
                    Match    Miss    Total
SCENARIO   Match      54       4       58
           Miss      107       X
           Total     161

Table 3: Predicted vs. Described Operator Behavior

There were 58 distinct operator behaviors described in the original scenario. The model generated a total of 161 behaviors and matched 54 of the 58 operator behaviors. In order to be considered matching, the model must generate not only the same behaviors as the operator, but it must also do so in the same order. Model behaviors that occurred out of order were counted as mismatches. Of the 4 operator behaviors that were not matched by the model, 2 were behaviors that the model performed implicitly. That is, the model, as we built it, did not explicitly perform these actions as independent methods, but instead had them built in to other methods. This was not an important design decision on our part but simply the consequence of the granularity level chosen for these methods. Of the other two actions that were not accounted for, one involved changing the type of radar used and simply was not included in our model, and the other involved a non-routine reporting of information. Discounting the behaviors performed implicitly by the model, only 2 (3.5%) of the operator's behaviors were not matched by the model.

The model generated 107 behaviors in addition to the 54 that matched those of the operator. Although this is a large number of extra behaviors, a case by case analysis shows that 104 of them are behaviors that are implicit in the scenario. That is, they are behaviors that were necessarily performed by the operator, but that were not explicitly described in the scenario. For example, when the model selects a new contact to establish a Tentative Track, it must necessarily execute an intermediate method of moving to the contact and then hooking it. The operator must also perform this sequence of behaviors, but the full set of steps is not explicitly described in the scenario.

The remaining 3 behaviors produced by the model that were neither matches nor implicit in the scenario were behaviors that probably should have been performed by the operator but were left out because of time or memory constraints. As discussed, the GOMS model provides a measure of working memory usage that can be used to better predict the mental states of an agent by taking these constraints into account. Steps within methods that are not strictly necessary can be noted as optional, so that nondeterministic behavior may be accounted for. Overall, only 3 out of 161 (less than 2%) behaviors generated by the model were neither matches nor implicit when compared to the operator's behavior in the scenario.

The model predicted 96.5% of the operator's behaviors. Furthermore, 98% of the behaviors generated by the model were either explicitly or implicitly present in the scenario. These results indicate that the model is successfully simulating the operator's behavior and therefore successfully capturing the knowledge required to perform the task.
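For clarity, the summary percentages above follow directly from the counts in Table 3; the short calculation below (ours, purely illustrative) shows how they are obtained.

# Illustrative arithmetic only: how the summary percentages follow from Table 3.
scenario_behaviors = 58     # distinct operator behaviors described in the scenario
matched            = 54     # model behaviors matching the operator, in order
implicit_in_model  = 2      # operator behaviors the model performs inside other methods
model_behaviors    = 161    # total behaviors generated by the model
implicit_in_scene  = 104    # model behaviors implicit in the scenario

predicted = (matched + implicit_in_model) / scenario_behaviors
accounted = (matched + implicit_in_scene) / model_behaviors
print(f"{predicted:.1%}")   # 96.6% (reported as 96.5%)
print(f"{accounted:.1%}")   # 98.1% (reported as 98%)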
Conclusion

Using the GOMS methodology for the analysis of human-computer interaction, we have developed a model of a Radar Operator's goals and the methods that she uses to accomplish them. A simulated execution of the model in a test scenario predicted the operator's responses with a high degree of accuracy, and furthermore provided details of those actions that were not explicitly stated in the scenario description. Based on this model, we were able to identify those portions of the task where an intelligent agent would be most able to assist the operator in the performance of her duties by sharing information, and to describe the nature of the knowledge that will be required for the task.

By creating a knowledge-level description of a task, GOMS models provide the means to predict the user's goals, beliefs, priorities, and actions. When the user's actual behavior departs from what the model has predicted, the agent may inform the user of the unexpected actions and recommend an alternate course of action. If the operator's behavior is warranted by the situation, then the agent can update its model or its parameters so that it may continue assisting the operator on the appropriate task.
References

Brafman, R. I., & Tennenholtz, M. (in press). Modeling at the mental-level: Some ideas and some challenges. In M. T. Cox & M. Freed (Eds.), Proceedings of the 1995 AAAI Spring Symposium on Representing Mental States and Mechanisms. Menlo Park, CA: AAAI Press.

Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates, Hillsdale, NJ.

Endestad, T., & Meyer, P. (1993). GOMS analysis as evaluation tool in process control: An evaluation of the ISACS-1 prototype and the COPMA system. Technical Report HWR-349, OECD Halden Reactor Project, Institutt for Energiteknikk, Halden, Norway.

Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: A validation of GOMS for prediction and explanation of real-world task performance. Human-Computer Interaction, 8(3), pp. 209-237.

John, B. E., & Vera, A. H. (1992). A GOMS analysis of a graphic, machine-paced, highly interactive task. In Proceedings of CHI '92 (Monterey, May 3-7, 1992), ACM, New York, pp. 251-258.

John, B. E., Vera, A. H., & Newell, A. (1994). Toward real-time GOMS: A model of expert behavior in a highly interactive task. Behaviour and Information Technology, 13(4), pp. 255-267.

Kieras, D. E. (1988). Towards a practical GOMS model methodology for user interface design. In M. Helander (Ed.), Handbook of Human-Computer Interaction (pp. 135-158). Amsterdam: North-Holland Elsevier.

Kieras, D. (1994). GOMS Modeling of User Interfaces Using NGOMSL. Tutorial Notes, CHI Conference on Human Factors in Computing Systems, Boston, MA, April 24-28.

Morignot, P., & Hayes-Roth, B. (in press). Why does an agent act? In M. T. Cox & M. Freed (Eds.), Proceedings of the 1995 AAAI Spring Symposium on Representing Mental States and Mechanisms. Menlo Park, CA: AAAI Press.

Newell, A. (1982). The knowledge level. Artificial Intelligence, 18, 87-127.

Operations Specialist 3 & 2, Vol. 1, Navy Training Manual.

Simon, H. (1967). Motivational and emotional controls of cognition. Reprinted in Models of Thought, Yale University Press, 1979, pp. 29-38.