Activity Context Representation — Techniques and Languages: Papers from the 2011 AAAI Workshop (WS-11-04)
Representing Context Using the Context for
Human and Automation Teams Model
Gabriel Ganberg, Jeanine Ayers, Nathan Schurr, Ph.D., Michael Therrien, Jeff Rousseau
Aptima Inc., 12 Gill Street, Suite 1400, Woburn, MA 01801
{ganberg,jayers,nschurr,mtherrien,jrousseau}@aptima.com
Abstract
The goal of representing context in a mixed-initiative system is to model the information at a level of abstraction that is actionable for both the human and the automated system. A potential solution to this problem is the Context for Human and Automation Teams (CHAT) model. This paper introduces the CHAT model and provides example implementations from several different applications, such as task scheduling, multi-agent systems, and human-robot interaction.
Introduction
A shared representation of context is needed by both humans and automation (whether software agents or robots)
when they work together to perform tasks in a common
environment. This context must be represented at a level of
abstraction that is understandable to a human and usable by
an automated system. If the context is modeled at a level
most natural for a human, meaning high-level concepts and
natural language, the machine will require advanced natural language processing and artificial intelligence techniques to understand and act on it. If the context is defined
at the level that is most natural for a machine, meaning
vectors, matrices, and complex data structures, the burden
is placed on the human to make sense of a mass of low-level data. What is needed is a contextual representation
for the performance of tasks that bridges the gap between
humans and machines. The solution should make conceptual sense to humans while remaining usable by a wide
range of computing technologies.
There are many types of applications that could potentially use a human/machine shared context model. Example applications include task allocation or planning, resource optimization, task execution, distributed coordination, and personal assistance. In addition, algorithms that
operate in these human/automation team systems often
have use across multiple types of applications and domains. Having a common, application/domain-agnostic representation of context will allow for the sharing of algorithm implementations across multiple applications and domains.
Copyright © 2011, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
Problem

This paper will explore the creation of a shared context representation that models the information required for three applications currently in development, each solving a different problem and embedded in a different domain. The first application is the Airportal Function Allocation Reasoning (AFAR) testbed. AFAR enables exploration of issues involved in dynamic function allocation within NASA's Next Generation Air Transportation System (NextGen) concept of operations. Specifically, AFAR targets "investigating the critical challenges in supporting the early design of systems that allow for optimal, context-sensitive function allocation between human air traffic controller and automated systems" (Good et al. 2010). The system models the tasks required for the safe and timely navigation of aircraft through the air traffic control system. The tasks are performed by different types of performers, including pilots, air traffic controllers in the local controller and ground controller positions, and current and future forms of automation. The task representation must capture the precedence and execution constraints that govern how and by whom these tasks can be accomplished.

The second application of the shared context representation is a personal assistant agent targeted at the challenges of team formation and collaboration within a multidisciplinary business environment that spans multiple organizations. A proxy agent is created for each potential collaborator within the business environment, whether an individual user or an organization. This proxy agent stores a set of beliefs regarding the capabilities, knowledge, and schedule of its corresponding user, and acts in conjunction with a network of proxy agents to propose collaboration opportunities for the tasks that the user is trying to accomplish. These proxy agents require a task representation similar to AFAR's, but they also need a rich representation of the capabilities, expertise, resources, and schedules of the users. In addition to modeling the information about the
specific user, the proxy agent models beliefs of the user
about this same information with respect to the other users
within the environment.
The third application requiring the shared context representation is a software system that coordinates mixed
teams of humans and robots performing a room-clearing
operation. The challenge is to determine how best to use
the robots and their various sensor packages in cooperation
with the human team members to (1) maximize the likelihood of finding what they are looking for, and (2) minimize
the risk to the human team members. In this application it
is necessary to reuse the task representation and sequence
information from AFAR, as well as the representation of
the capabilities and resources of the various performers
(humans and robots with different sensor packages) from
the personal assistant agent application. Because this application involves the exploration of a physical space, the collection and use of data describing the environment becomes paramount. Environmental context is an important input into decisions about how to
explore the space and how to manage the human-robot interaction over time and circumstance.
Approach
The Context for Human and Automation Teams (CHAT)
model has evolved over the past several years during the
development of several different mixed-initiative team applications. The model has been iteratively developed, starting with the need to model context for dynamic function
allocation in AFAR. Each successive application of CHAT
has incrementally refined and expanded it. Lessons learned
from seven different research projects (three of which are
discussed in this paper) have been incorporated into the definition of the CHAT model. The goal at each step of development has been to achieve the proper level of abstraction that will allow the context model to be applied to a
wider variety of domains, but to remain concrete enough to
be intuitively understood by a human. The model must balance the need to represent multiple systems and domains
with a well-defined semantic structure that allows common
algorithms to operate on it across those systems and domains.
CHAT is independent of programming language and software development environment, but to be useful in the development of real systems it must have a software implementation that can be used within multiple programming environments. AFAR has been developed using a combination of Java Agent DEvelopment Framework (JADE) agents and C# simulations and user interfaces. The personal assistant agents are developed in Java and accessed through a web-based user interface, and the room-clearing robots are developed in Python and C++ using the Robot Operating System (ROS). CHAT was initially defined and refined within an
Extensible Markup Language (XML) schema. XML
serves as a lowest common denominator when it comes to
data exchange between diverse programming languages,
and XML schemas allow for the automatic generation of
business object class definitions in most programming environments.
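As a concrete, purely illustrative example of the kind of exchange the schema enables, the fragment below sketches what a CHAT-style entity might look like in XML and how it could be loaded into a plain object in Python; the element and attribute names are our own and are not taken from the actual CHAT schema.

```python
# Hypothetical CHAT-style entity fragment; element names are illustrative,
# not the published CHAT schema.
import xml.etree.ElementTree as ET

CHAT_XML = """
<entity id="e-17" type="aircraft">
  <attribute name="location" value="taxiway-B4"/>
  <attribute name="status" value="taxiing"/>
  <capability name="taxi" quality="0.9"/>
</entity>
"""

root = ET.fromstring(CHAT_XML)
entity = {
    "id": root.get("id"),
    "type": root.get("type"),
    "attributes": {a.get("name"): a.get("value") for a in root.findall("attribute")},
    "capabilities": {c.get("name"): float(c.get("quality")) for c in root.findall("capability")},
}
print(entity["type"], entity["attributes"]["location"])
```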
During the initial development of CHAT, having it defined as an XML schema was ideal. Now that the model
has matured and is no longer changing regularly, it has been re-engineered as a Representational State Transfer (RESTful) web-service implementation. This implementation stores the CHAT model data in a database and provides object-oriented access through a lightweight web-service layer that is usable in almost all programming environments. The implementation is complete and in the testing phase, but has yet to be applied to a real system.
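A minimal sketch of what client access to such a web service could look like is shown below; the base URL, resource paths, and JSON field names are assumptions for illustration, not the interface of the actual CHAT service.

```python
# Hypothetical RESTful access to a CHAT store; endpoint and fields are illustrative.
import requests

BASE = "http://localhost:8080/chat"  # assumed service location

# Fetch a performer resource as JSON.
performer = requests.get(f"{BASE}/performers/ground-controller-1").json()

# Update one of its attributes (e.g., a workload estimate) and write it back.
performer["attributes"]["workload"] = 0.7
requests.put(f"{BASE}/performers/ground-controller-1", json=performer)
```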
Related Work
Context has been studied in many areas of research and yet
it remains an underutilized and poorly managed source of
information in our everyday computing technologies. As
far back as 1994, important aspects of context such as
where you are, who you are with, and what resources are
nearby were identified (Schilit, Adams, and Want 1994). That definition
was expanded to include any piece of information that can
be used to characterize the situation of a participant in an
interaction (Dey and Abowd 2000). Mobile middleware platforms and
physical, software, and logical sensors have expanded the
ways in which we process, use, and collect context from
our environment (Amundsen and Eliassen 2008; Gutheim 2011). Yet,
we are still striving to build software that can intelligently
apply contextual information for a better user experience.
Context-aware applications should be able to use contextual information to adapt to changes in the user's environment as well as to predict and anticipate user intentions and
goals. One area of related work that uses the context of the
current environment to transfer control of a decision between humans and automation is called adjustable autonomy (Reitsema et al. 2005; Schurr, Patil, and Pighin 2006; Scerri, Pynadath, and Tambe 2002). The choice of
the best fit between the human and automation often depends on the state of the world, the locality of the information among team members, the priority of the decision, and
the availability of the decision maker, all of which are elements of context.

One final area of related work is the development of ontologies. An ontology is a set of primitives, such as classes, properties, and relationships, with which to model a domain of knowledge or discourse (Liu and Ozsu 2009). A benefit of an ontology is that it is independent of low-level data structures and is therefore abstracted to a higher level. This, in turn, creates a burden for both the human and the automation to determine how to use the information. Our goal is not to create a generalizable knowledge representation but a model of context that includes the semantics necessary for a human and a machine to exchange and utilize the same available information.
Environment
The environment section represents the state of the world
as viewed by the application using CHAT. It is the responsibility of the application to populate and maintain the environmental information as time passes and events dynamically unfold. This can be straightforward, as in the case of a simulation or game-based environment, or extremely difficult (and a research effort in and of itself), as in the case of a robot exploring an unknown environment.
Within the CHAT model, the environment is composed
of a set of entities which represent discrete objects in the
world. An entity has the following characteristics outlined
in Table 1.
Context for Human and Automation Teams
(CHAT)
CHAT provides a framework for representing the types of
information needed for synthetic entities (whether software
agents or physical robots) to interact with each other, and
more importantly, to interact and cooperate with human
users in the process of pursuing common goals. The CHAT
model is divided into four main categories of information:
Environment, Performers, Mission, and Interactions.
Table 1: Entity Characteristics
Type – a designator defining the kind of entity in the world
(e.g. table vs. tank). Different types of entities might have
different sets of attributes.
Attributes – a set of discrete or continuous data values that
describe the state of the entity.
Capabilities – similar to Attributes in structure, but
representing the types of actions the entity can take in the
environment at a certain level of quality. A capability
might represent that a car can drive on unpaved roads,
while an attribute might say that the car is currently at location X,Y.
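A minimal in-memory rendering of the Table 1 structure might look like the following sketch; the field names are ours, not those of the CHAT schema.

```python
# A minimal, illustrative rendering of the Table 1 entity structure.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Entity:
    uid: str                                                       # model-wide unique identifier
    type: str                                                      # e.g. "table" vs. "tank"
    attributes: Dict[str, object] = field(default_factory=dict)    # current state
    capabilities: Dict[str, float] = field(default_factory=dict)   # action -> quality level

car = Entity(uid="e-1", type="car",
             attributes={"location": (12.0, 48.5)},
             capabilities={"drive_unpaved": 0.8})
```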
• Environment – A description of the state of the world
relevant to the team. The environment may be populated
by a variety of data collectors, including federates within a simulation-based training system, computer or keystroke monitors, or real-world sensors in the case of a robot.
The environment will often be used to describe geographic
features or a common operating picture, but it is not limited to geospatially placed objects. The CHAT environment is the interface to the world, whether that world is
virtual or real, and filled with abstract concepts or concrete
objects.
• Performers – A description of the actors or users operating within the environment. A performer represents a
decision maker that is able to take actions within the system. A performer might be a software agent residing on a
server, a human interacting with a user interface, or a robot
traversing real-world obstacles.
Performers
The performers section represents the decision makers acting in the system. These decision makers are individuals
(human or automated), organizations, or both, depending
on the application. Often, it is not obvious what should be
modeled as a part of the environment, and what should be
modeled as a performer. In those cases, it is important to
determine the goals of the system. In the case of a command and control system, the commander might be modeled as the performer making decisions about the environmental entities, such as air and ground forces, under their
control. But if the focus of the system is the collaboration
between the commander and the pilots or platoon leaders
inside those environmental entities, then it becomes necessary to represent each pilot or platoon leader as a performer
in their own right. The decision to represent one thing as an
entity and another as a performer will ultimately depend on
the focus of the system. A performer has the following
characteristics outlined in Table 2.
• Mission – A description of the types of activities that
performers might undertake within the environment. This
is where goals are defined, as well as task constraints and
plans of action.
• Interactions – A description of the various communications and actions that can take place between performers
and/or environmental entities. Interactions can be messages between performers for the purpose of coordination,
performers acting on environmental entities, or feedback to
a performer from the environment after an action.
Each CHAT category will now be described. The significant characteristics of the model will be highlighted, but there are additional elements that will not be discussed due to space considerations. For example, all elements are given a model-wide unique identifier.
Table 2: Performer Characteristics
Type – a designator defining the kind of performer, e.g., a room-clearing robot vs. a human soldier.
Attributes – a set of discrete or continuous data values that
describe the characteristics of the performer.
Capabilities – the skills that the performer possesses with
an associated quality level.
Resources – a set of entities in the environment that the
performer has control over. A person might have a car, or
a commander could have a set of military assets.
Performer Beliefs – a performer may have a set of beliefs
about other performers. This is a recursive structure, as a
performer can have beliefs about any other performer with
regard to any performer characteristics.
Members – a performer might be an atomic individual
within the system, or it might be an organization with other
performers as constituents.
Goals – a performer might have a set of goals that he/she/it
is trying to accomplish. A performer’s goals are links to
goals defined within the mission section.
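A comparable sketch of the Table 2 structure, again with hypothetical field names, shows how performer beliefs can nest recursively: a performer simply holds a (possibly partial or stale) Performer record for each teammate it has beliefs about.

```python
# Illustrative Performer record; beliefs recursively reuse the same structure.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Performer:
    uid: str
    type: str                                                      # e.g. robot vs. human soldier
    attributes: Dict[str, object] = field(default_factory=dict)
    capabilities: Dict[str, float] = field(default_factory=dict)   # skill -> quality level
    resources: List[str] = field(default_factory=list)             # entity ids under its control
    members: List["Performer"] = field(default_factory=list)       # constituents, for organizations
    goals: List[str] = field(default_factory=list)                 # links to goals in the mission section
    beliefs: Dict[str, "Performer"] = field(default_factory=dict)  # beliefs about other performers

robot = Performer(uid="r-2", type="robot", capabilities={"search": 0.9})
commander = Performer(uid="p-1", type="human",
                      beliefs={"r-2": Performer(uid="r-2", type="robot",
                                                capabilities={"search": 0.7})})
```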
Tasks are defined in the CHAT model as objectives that
may be accomplished within the environment. They also
define constraints on how these objectives may be accomplished. Tasks have the characteristics
outlined in Table 4.
Table 4: Task Characteristics
Type – a designator defining the kind of task.
Attributes – a set of discrete or continuous data values that
describe the characteristics of the task.
Performer Attribute Constraints – a set of constraints
that govern which performers are allowed to work on this
task.
Performer Capability Constraints – a set of constraints
that govern which performer capabilities are required in
order to accomplish this task.
Resource Attribute Constraints – a set of constraints that
govern which entities under the performer’s control are
able to be used for this task.
Resource Capability Constraints – a set of constraints
that govern which entity capabilities are required in order
to accomplish this task.
Task Dependencies – currently a set of hard precedence
constraints. This will likely be extended to represent soft
precedence constraints in the future.
Subtasks – a task can be decomposed into constituent
smaller tasks.
Goals – a task is usually associated with a goal.
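The constraint lists in Table 4 lend themselves to a simple eligibility test. The sketch below is our own simplification, treating each capability constraint as a required minimum quality level; it is not the constraint language used by CHAT itself.

```python
# Simplified eligibility check: every required capability must be met at the
# stated minimum quality. Real CHAT constraints are richer than this.
def eligible(performer_capabilities, capability_constraints):
    return all(performer_capabilities.get(name, 0.0) >= minimum
               for name, minimum in capability_constraints.items())

task_constraints = {"issue_taxi_clearance": 0.8}
controllers = {
    "human-ground-controller": {"issue_taxi_clearance": 1.0},
    "automation-v1": {"issue_taxi_clearance": 0.6},
}
candidates = [name for name, caps in controllers.items()
              if eligible(caps, task_constraints)]
print(candidates)  # ['human-ground-controller']
```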
Mission
The mission section describes all the types of activities that
might need or want to be undertaken by the performers
within the environment. Without a defined mission, a robot will just sit in one place and do nothing. The mission
defines the types of things a robot might need to accomplish, the ways that the robot might accomplish these
things, and the criteria for selecting the best thing
for the robot to be doing right now. The mission is composed of three different interrelated types of information:
goals, tasks, and plans.
Goals are defined in the CHAT model as an expression
of the desire for a change in the state of the environment
(or a performer). Goals have the following characteristics
as outlined in Table 3.
A plan is defined in CHAT as a set of activities that can be
undertaken to accomplish a task. Any given task may have
any number of possible plans that might accomplish it. It is
up to the performer to generate possible plans and select
the best to advance the performer’s goals. A plan has the
following characteristics as outlined in Table 5.
Table 3: Goal Characteristics
Status – a representation of the state of the goal. For example, a possible set of goal states might be: inactive, active, satisfied.
Subgoals – a goal may be defined at a high level, or it may
be decomposed into sub-goals.
Reward – the reward is a weight that is applied to the goal,
allowing for prioritization of goals.
Entity Selection – entity selection is defined as a set of
attribute constraints that allow for the identification of
which entities are relevant to this goal.
Goal Logic – currently two goal conceptions are supported, namely state change goals and optimization goals. A state change goal is expressed as a desire for a set of entities, as selected by the entity selection clause above, to be in a certain state (i.e., to satisfy a set of attribute constraints). An optimization goal is expressed as a desire to maximize or minimize a certain entity attribute (or to maximize one attribute while minimizing another).
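The two goal conceptions can be read as two small predicates over the attributes of the selected entities. The toy evaluation below is our own rendering of the Table 3 entries, not CHAT's actual goal semantics.

```python
# Toy evaluation of the two goal types from Table 3 over a list of entity
# attribute dictionaries selected by the goal's entity-selection clause.
def state_change_satisfied(entities, desired):
    # Every selected entity must satisfy every desired attribute constraint.
    return all(all(e.get(k) == v for k, v in desired.items()) for e in entities)

def optimization_score(entities, maximize, minimize=None):
    # Score to be maximized; subtract the attribute being minimized, if any.
    return sum(e.get(maximize, 0.0) - (e.get(minimize, 0.0) if minimize else 0.0)
               for e in entities)

rooms = [{"cleared": True, "risk": 0.2}, {"cleared": False, "risk": 0.6}]
print(state_change_satisfied(rooms, {"cleared": True}))            # False
print(optimization_score(rooms, maximize="cleared", minimize="risk"))
```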
Table 5: Plan Characteristics
Tasks – a reference to the tasks that this plan is trying to
accomplish.
Activities – a sequence of activities that define how a performer will use resources to work towards accomplishing
the tasks. An activity has the following characteristics:
Start Time – an optional time that the activity should
be started.
End Time – an optional planned activity finish time.
Action – this will take one of the following two forms:
1. a role defining the allocation of a performer using a capability and a resource to work towards a task.
2. a synchronized set of recursively defined activities, allowing for precise sequencing and coordination of activities for complex tasks.
Status – a representation of the state of the plan. An example set of possible plan statuses might be: inactive, active, complete, unachievable, and irrelevant.
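The two action forms in Table 5 map naturally onto a recursive structure in which an activity is either a single role allocation or a synchronized set of child activities. A hedged sketch with invented names:

```python
# Illustrative plan/activity structure: an activity is either a role
# (performer + capability + resource applied to a task) or a synchronized
# group of sub-activities.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Role:
    performer: str
    capability: str
    resource: Optional[str]
    task: str

@dataclass
class Activity:
    start: Optional[float] = None        # optional planned start time
    end: Optional[float] = None          # optional planned finish time
    role: Optional[Role] = None          # form 1: a single allocation
    children: List["Activity"] = field(default_factory=list)  # form 2: synchronized set

clear_room = Activity(children=[
    Activity(role=Role("robot-1", "search", "thermal-camera", "scan-room-12")),
    Activity(role=Role("soldier-3", "overwatch", None, "secure-doorway-12")),
])
```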
Interactions
Interactions represent any actions or communication between performers or entities. They capture real-time events as the system executes, but also historical information that can be used by the system's algorithms as part of the decision-making process. Interactions
have the following characteristics as outlined in Table 6.
Table 6: Interaction Characteristics
Type – a designator for the kind of interaction, whether it
be a performer dispatching instructions to an entity or a
performer sending an instant message to another performer.
Start Time – the time that the interaction started.
End Time – the time that the interaction finished.
Attributes – a set of discrete or continuous data values that
describe the characteristics of the interaction. In the case of
an email, this might be the “Is Read” flag or a spam indicator.
Senders – performers or entities initiating the interaction.
Receivers – performers or entities being interacted with.
Content – this might be the text content of an email or a
link to the audio file of a voice communication.
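Because interactions double as a history, algorithms can query them much like a log. A small illustrative filter (field names are our own) that finds recent, unread messages addressed to a performer:

```python
# Illustrative query over an interaction log: recent, unread messages to a performer.
def pending_messages(interactions, receiver, now, window=300.0):
    return [i for i in interactions
            if receiver in i["receivers"]
            and now - i["start"] <= window
            and not i["attributes"].get("is_read", False)]

log = [{"type": "instant_message", "start": 100.0, "receivers": ["p-1"],
        "attributes": {"is_read": False}, "content": "Need backup in sector 4"}]
print(pending_messages(log, "p-1", now=180.0))
```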
Figure 1: AFAR Departure Workflow
Personal Assistant Agents Using the CHAT Model
A personal assistant agent designed to facilitate intra- and
inter-organizational collaboration requires a task structure similar to the one outlined above to define the things that must be
collaborated on, but it also needs a rich description of the
knowledge, skills, goals, and availability of all of the potential collaborators. Each personal assistant agent will
have a fully fleshed out description of the performer information for the user, and part of that information is the
beliefs that the user has about other performers within the
collaboration network. Figure 2 shows an example of a
personal assistant agent’s performer model.
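As a rough illustration of how such a proxy agent might use those beliefs, the toy function below ranks peers by how well their believed capabilities cover a task's requirements; the scoring rule and names are hypothetical, not the agent's actual matching algorithm.

```python
# Toy collaborator proposal: rank peers by believed coverage of the task's
# required capabilities. Beliefs are the proxy agent's (possibly stale) view.
def propose_collaborators(believed_capabilities, required):
    scores = {}
    for peer, caps in believed_capabilities.items():
        covered = [c for c in required if caps.get(c, 0.0) >= required[c]]
        scores[peer] = len(covered) / len(required)
    return sorted(scores, key=scores.get, reverse=True)

beliefs = {"alice": {"statistics": 0.9, "writing": 0.4},
           "acme_corp": {"statistics": 0.6, "writing": 0.8}}
print(propose_collaborators(beliefs, {"statistics": 0.7, "writing": 0.7}))
```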
Airportal Function Allocation Reasoning Using
the CHAT Model
The AFAR testbed provides an environment for conducting research into the intelligent dynamic allocation/reallocation of functions within the Air Traffic Control
(ATC) domain. This research requires that the system allow ATC functions to be performed by human operators
and by software agents. In addition, the system must allow
for the hand-off of these functions between humans and
automation in the case of function reallocation. In order to
build a system with this functionality, it is necessary to
have a model of the functions being performed that captures all the information that a human or automated system
would need in order to execute them.
The AFAR testbed is initially focused on surface operations and the ground controller position. The responsibility
of the ground controller position is to safely control each
aircraft’s navigation through the airport taxiways for both
departures and arrivals. Figure 1 illustrates the workflow
for a single departing aircraft. Within the AFAR testbed, depending upon the experimental condition, the individual tasks are assigned to human ground controllers or to automation, and these assignments might change depending on the state of the overall environment.
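A deliberately simplified illustration of context-sensitive allocation in this spirit is sketched below: route a task to the human controller unless the controller's workload, read from the performer/environment context, exceeds a limit and the automation satisfies the task's capability constraint. The thresholds and field names are invented and do not reflect AFAR's actual allocation logic.

```python
# Toy context-sensitive allocation rule; thresholds and field names are illustrative.
def allocate(task, human, automation, workload_limit=0.8):
    needed = task["required_capability"]
    overloaded = human["attributes"].get("workload", 0.0) > workload_limit
    automation_ok = automation["capabilities"].get(needed, 0.0) >= task["min_quality"]
    return "automation" if overloaded and automation_ok else "human"

task = {"required_capability": "issue_taxi_clearance", "min_quality": 0.7}
human = {"attributes": {"workload": 0.9}, "capabilities": {"issue_taxi_clearance": 1.0}}
automation = {"attributes": {}, "capabilities": {"issue_taxi_clearance": 0.75}}
print(allocate(task, human, automation))  # automation, because the human is overloaded
```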
Figure 2: Personal Assistant Agents using the CHAT model
Multi-Robot Pursuit Using the CHAT Model
The final application to be discussed coordinates a team of robots and humans exploring an unknown environment in
search of an adversary. This application uses both the
workflow and performer elements of CHAT but also introduces the need for enhanced monitoring and modeling of
the environment in which they are operating. The robot and
human team uses a distributed coordination algorithm to
traverse and explore the unknown environment while obtaining local state information from each of the performers
to increase the situational awareness of the entire team.
The performers determine their next task based on information from the shared environment and their own set of capabilities and resources. Ultimately, the environment provides the context a performer needs to decide whether it can choose its next task on its own or whether it should seek guidance from another team member.
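That decide-or-defer behavior can be sketched as a small rule over the locally known portion of the environment; this is a simplification with invented thresholds, not the team's actual distributed coordination algorithm.

```python
# Toy decide-or-defer rule: act locally when enough of the relevant area is
# known; otherwise emit a guidance-request interaction to a teammate.
def next_step(known_cells, frontier_cells, capabilities, coverage_threshold=0.6):
    coverage = len(known_cells) / (len(known_cells) + len(frontier_cells) or 1)
    if coverage >= coverage_threshold and "search" in capabilities:
        # Enough local context: pick an unexplored cell (here, simply the first).
        return {"action": "explore", "target": sorted(frontier_cells)[0]}
    return {"action": "request_guidance", "receiver": "team-lead"}

print(next_step({"a1", "a2", "a3"}, {"b1"}, {"search": 0.9}))
print(next_step({"a1"}, {"b1", "b2", "b3"}, {"search": 0.9}))
```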
Figure 3 illustrates information flow within CHAT in this
example.

Figure 3: Information exchange between performer and environment CHAT elements

Conclusions and Future Work

In this paper, three research systems that require a representation of context that allows information to be shared between humans and automated systems were described. CHAT, a context model that represents the information required for these systems, was introduced. Finally, an illustration of how each of the three systems models aspects of its information requirements using CHAT was provided.

The major next step is to address the problem of algorithm portability. A shared context representation is a necessary but not a sufficient condition for the implementation of algorithms that cut across multiple domains and application types. Within the CHAT framework it is possible to formulate the model for a particular domain in many different ways, depending on the emphasis of the system, so it cannot be assumed that an algorithm developed for one CHAT model will function with another. One possible way to mitigate this problem would be to extend the CHAT model to include algorithm documentation and data requirements, making it clearer to the CHAT model user or designer whether a given model formulation will work with a given algorithm.

CHAT as presented here is a work in progress, and will continue to evolve and be refined as it is applied to more domains and types of problems. Future papers will delve into specific domain applications of CHAT in more depth.
Acknowledgements
The authors would like to thank the NASA Airspace Systems Program for its support of this work under Contract
#NNA08BC68C.
References
Amundsen, S., & Eliassen, F. (2008). A resource and context
model for mobile middleware. Personal and Ubiquitous Computing. Volume 12 Issue 2, 143-153.
Dey, A.K.D., Abowd, G.D.: Towards a better understanding of
context and context awareness”. Workshop on The What, Who,
Where, When, and How of Context Awareness, affiliated
with the 2000 ACM Conference on Human Factors in
Computer Systems (CHI 2000), The Hague, Netherlands
(April 2000).
Good, R., Shurr, N., Alexander, A. L., Picciano, P., Ganberg, G.,
Therrien, M., Beard, B., & Holbrook, J. (2010). A testbed to investigate allocation strategies between air traffic controllers and
automated agents. Presented at the 22nd Annual Conference of
Innovative Applications of Artificial Intelligence. IAAI-10.
Gutheim, P. (2011). “An Ontology-Based Context Inference Service for Mobile Applications in Next-Generation Networks.”
IEEE Communications Magazine, vol. 50, no. 1.
Liu, L., Ozsu, M. T. (2009). Ontology. Encyclopedia of Database
Systems. Springer-Verlag.
Reitsema, J., Chun, W., Fong, T., & Stiles, R. (2005). Teamcentered virtual interactive presence for adjustable autonomy.
American Institute of Aeronautics and Astronautics (AIAA) Space
Scerri, P., Pynadath, D., & Tambe, M. (2002). Towards adjustable autonomy for the real world. Journal of Artificial Intelligence
Research, 17, 171-228.
Schilit, B., Adams, N. Want, R. (1994). Context-Aware Computing Applications. 1st International Workshop on
Mobile Computing Systems and Applications. pp 85-90.
Schurr, N., Patil, P., & Pighin, F. (2006). Resolving Inconsistencies in Adjustable Autonomy in Continuous Time (RIAACT): A
robust approach to adjustable autonomy for Human-Multiagent
teams. Fifth International Joint Conference on Autonomous
Agents and Multiagent Systems.