Human-Aware AI
(aka Darned Humans:
Can’t Live with them. Can’t Live without them)
Subbarao Kambhampati
Arizona State University
Given at U. Washington on 11/2/2007
51-year-old field of unknown gender; Birth date unclear;
Mother unknown; Many purported fathers;
Constantly confuses holy grail with daily progress; Morose disposition
So let’s see if the future
is going in quite the
right direction…
What is missing in this picture?
[Figure: the classic “What action next?” view of planning. An agent faces an Environment (static vs. dynamic; deterministic vs. stochastic; instantaneous vs. durative actions; perfect vs. imperfect perception; fully vs. partially observable) and Goals (full vs. partial satisfaction), and must answer the $$$ question: “What action next?” From “A: A Unified Brand-name-Free Introduction to Planning”, Subbarao Kambhampati.]
AI’s Curious Ambivalence to Humans..
• Our systems seem happiest
– either far away from humans
– or in an adversarial stance with humans
You want to help humanity; it is the people that you just can’t stand…
What happened to Co-existence?
• Whither McCarthy’s advice taker?
• ..or Janet Kolodner’s housewife?
• …or even Dave’s HAL?
• (with hopefully a less sinister voice)
Why aren’t we doing HAAI?
• ..to some extent we are
– Assistive technology; Intelligent user interfaces;
Augmented cognition, Human-Robot Interaction
– But it is mostly smuggled under the radar..
– And certainly doesn’t get no respect..
• Rodney Dangerfield of AI?
• Is it time to bring it to the center stage?
– Treating them as the applied side of AI makes them seem peripheral, and little formal attention gets paid to them by the “mainstream”
(Some) Challenges of HAAI
• Communication
– Human-level communication/interfacing
• Need to understand what makes interfaces natural..
– Explanations
• Humans want explanations (even if fake..)
• Teachability
– Advice Taking (without lobotomy)
• Elaboration tolerance
– Dealing with evolving models
• You rarely tell everything at once to your secretary..
– Need to operate in an “any-knowledge” mode
• Recognizing the Human’s State
– Recognizing intent; activity
– Detecting/handling emotions/affect
Human-aware AI may necessitate acting human
(which is not necessary for non-HAAI)
Caveats & Worries about HAAI
• Are any of these challenges really new?
• HAAI vs. HDAI (human-dependent AI)
– Human dependent AI can be enormously
lucrative if you find the right sweet spot..
• But will it hamper eventual progress to (HA)AI?
• Advice taking can degenerate into advice-needing..
• Designing HAAI agents may need
competence beyond computer science..
Are the challenges really new?
Are they too hard?
• Isn’t any kind of feedback “advice giving”? Isn’t reinforcement learning already foregrounding “evolving domain models”?
– A question of granularity. There is no need to keep the interactions monosyllabic..
• Won’t communication require NLP and thus become AI-complete?
– There could well be a spectrum of communication modalities that could be tried
• Isn’t recognition of human activity/emotional state itself really AI?
– ..it is if we want HAAI (you want to work with humans, you need to have some idea of their state..)
HDAI: Finding “Sweet Spots”
in computer-mediated cooperative work
• It is possible to get by with techniques blithely ignorant of
semantics, when you have humans in the loop
– All you need is to find the right sweet spot, where the computer
plays a pre-processing role and presents “potential solutions”
– …and the human very gratefully does the in-depth analysis on those
few potential solutions
• Examples:
– The incredible success of the “Bag of Words” model! (a minimal sketch follows this slide)
• Bag of letters would be a disaster ;-)
• Bag of sentences and/or NLP would be good
– ..but only to your discriminating and irascible searchers ;-)
• Concern:
– Will pursuit of HDAI inhibit progress towards eventual AI?
• By inducing perpetual dependence on (rather than awareness of) the
human in the loop?
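To make the “sweet spot” concrete, here is a minimal, hypothetical bag-of-words retrieval sketch (in Python, with an invented mini-corpus): the computer ignores word order and semantics entirely, yet still surfaces a few candidate documents for the human in the loop to analyze in depth.

```python
from collections import Counter
import math

def bag_of_words(text):
    """Reduce a document to unordered word counts: no syntax, no semantics."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented mini-corpus; the human reads only the top-ranked hits.
docs = {
    "d1": "planning with incomplete domain models",
    "d2": "semi supervised clustering with constraints",
    "d3": "learning domain models from plan traces",
}
query = bag_of_words("incomplete domain models")
ranked = sorted(docs, key=lambda d: -cosine_similarity(query, bag_of_words(docs[d])))
print(ranked)  # the computer pre-processes; the human does the in-depth analysis
```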
Delusions of Advice Taking:
Give me Advice that I can easily use
• Planners that expect
“advice” that is expressed
in terms of their internal
choice points
– HSTS, a NASA planner,
depended on this type of
knowledge..
• Learners that expect
“advice” that can be easily
included into their current
algorithm
– “Must-link”/“Must-not-link” constraints used in “semi-supervised” clustering algorithms (sketched after this slide)
Moral: It is wishful thinking to expect advice tailored to your program’s internals.
Operationalizing high-level advice is your (the AI program’s) responsibility
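For concreteness, here is a minimal sketch, in the style of COP-KMeans (one such semi-supervised clustering algorithm), of how must-link/must-not-link advice gets consumed; the data and helper names are invented. Note how the advice must arrive pre-digested as pairwise constraints over data points: exactly the program-internals tailoring the moral above warns against expecting.

```python
def dist(p, q):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def constrained_assign(points, centers, must_link, must_not_link):
    """One COP-KMeans-style assignment pass (hypothetical helper).

    The 'advice' arrives pre-digested as pairwise constraints over point
    indices: the learner's internal format, not what a human would say.
    """
    assign = {}
    for i, p in enumerate(points):
        # Try clusters nearest-first; skip any that violate the advice.
        for c in sorted(range(len(centers)), key=lambda c: dist(p, centers[c])):
            if all(assign[j] == c for j in must_link.get(i, ()) if j in assign) and \
               all(assign[j] != c for j in must_not_link.get(i, ()) if j in assign):
                assign[i] = c
                break
        else:
            raise ValueError(f"advice unsatisfiable at point {i}")
    return assign

# Invented data: points 0 and 1 must end up together; 0 and 2 must not.
pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0)]
print(constrained_assign(pts, centers=[(0.0, 0.0), (5.0, 5.0)],
                         must_link={1: [0]}, must_not_link={2: [0]}))
# -> {0: 0, 1: 0, 2: 1}
```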
HAAI pushes us beyond CS…
• By dubbing “acting rational” as the definition of
AI, we carefully separated the AI enterprise from
“psychology”, “cognitive science” etc.
• But pursuit of HAAI pushes us right back into
these disciplines (and more)
– Making an interface that improves interaction with
humans requires understanding of human
psychology..
• E.g. studies showing how programs that have even a
rudimentary understanding of human emotions fare much
better in interactions with humans
• Are we ready to do HAAI despite this push beyond our comfort zone?
How are sub-areas doing on HAAI?
I’ll focus on the “teachability” aspect in two areas that I know something about
• Automated Planning
– Full autonomy through
complete domain models
– Can take prior
knowledge in the form of
• Domain physics
• Control knowledge
– ..but seems to need it
• Machine Learning..
– Full autonomy through
tabula rasa learning over
gazillion samples
– Seems incapable of
taking much prior
knowledge
• Unless sneaked in through
features and kernels..
What’s Rao doing in HAAI?
(Recap of the “(Some) Challenges of HAAI” slide: Communication; Teachability; Recognizing the Human’s State)
• Model-lite planning
• Planning in HRI scenarios
• Human-aware information integration
Motivations for Model-lite
Is the only way to get more applications to tackle more and more expressive domains?
• There are many scenarios where domain
modeling is the biggest obstacle
– Web Service Composition
• Most services have very little formal models attached
– Workflow management
• Most workflows are provided with little information about
underlying causal models
– Learning to plan from demonstrations
• We will have to contend with incomplete and evolving domain
models..
• ..but our approaches assume complete and
correct models..
From “Any Time” to “Any Model” Planning
Model-Lite Planning is
Planning with Incomplete Models
• “incomplete” = “not enough domain knowledge to verify correctness/optimality”
• How incomplete is incomplete?
• Knowing no more than I/O types?
• Missing a couple of preconditions/effects or user preferences?
Challenges in Realizing Model-Lite
Planning
1. Planning support for shallow domain
models [ICAC 2005]
2. Plan creation with approximate domain
models [IJCAI 2007, ICAPS Wkshp 2007]
3. Learning to improve completeness of
domain models [ICAPS Wkshp 2007]
Challenge: Planning Support for
Shallow Domain Models
• Provide planning support that exploits the shallow model
available
• Idea: Explore wider variety of domain knowledge that
can either be easily specified interactively or
learned/mined. E.g.
• I/O type specifications (e.g. Woogle)
• Task Dependencies (e.g. workflow specifications)
– Qn: Can these be compiled down to a common substrate?
• Types of planning support that can be provided with such
knowledge
– Critiquing plans in mixed-initiative scenarios
– Detecting incorrectness (as against verifying correctness)
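As a toy illustration of such support, the following hypothetical sketch (service names and signatures invented) uses nothing but I/O type specifications to critique a web-service composition. It can detect incorrectness, a needed input type that is never produced, but it cannot verify correctness, since matching types say nothing about the services’ semantics.

```python
# Hypothetical I/O type signatures for web services: the only "model" we have.
SIGNATURES = {
    "geocode":     (["Address"], ["LatLong"]),
    "find_hotels": (["LatLong"], ["HotelList"]),
    "book_hotel":  (["HotelList", "CreditCard"], ["Confirmation"]),
}

def critique(plan, initial_types):
    """Flag steps whose input types are never produced earlier in the plan.

    Outputs are optimistically assumed available afterwards, so only
    root-cause mismatches are reported, not their downstream cascade.
    """
    have = set(initial_types)
    problems = []
    for step in plan:
        needed, produced = SIGNATURES[step]
        problems += [f"{step}: no {t} available yet" for t in needed if t not in have]
        have.update(produced)
    return problems

# Composition that forgot the geocoding step:
print(critique(["find_hotels", "book_hotel"], ["Address", "CreditCard"]))
# -> ['find_hotels: no LatLong available yet']
```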
Challenge: Plan Creation with
Approximate Domain Models
• Support plan creation despite missing details in the model. The missing details may be in (1) action models or (2) cost/utility models
• Example: Generate robust “line” plans in the face of incompleteness of the action description
– View model incompleteness as a form of uncertainty (e.g. work by Amir et al.)
• Example: Generate diverse/multi-option plans in the face of incompleteness of the cost model (see the sketch below)
– Our IJCAI-2007 work can be viewed as being motivated this way..
Note: Model-lite planning aims to reduce the modeling burden; the planning itself may actually be harder
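As a rough sketch of the multi-option idea (an illustration, not the actual IJCAI-2007 algorithm; plans and distance measure are invented), one can hedge against an incompletely known cost model by returning a small set of mutually diverse plans and letting the human pick:

```python
def diversity(p, q):
    """Naive plan distance: 1 minus Jaccard similarity of the action sets."""
    a, b = set(p), set(q)
    return 1.0 - len(a & b) / len(a | b)

def diverse_subset(plans, k):
    """Greedily pick k plans that are far apart, so an imprecisely known
    cost/preference model is likely to rate at least one of them well."""
    chosen = [plans[0]]
    while len(chosen) < k:
        best = max((p for p in plans if p not in chosen),
                   key=lambda p: min(diversity(p, c) for c in chosen))
        chosen.append(best)
    return chosen

# Invented plans for getting to the airport.
plans = [
    ("drive", "park", "walk"),
    ("drive", "park", "shuttle"),
    ("taxi",),
    ("bus", "walk"),
]
print(diverse_subset(plans, 2))  # a driving plan plus the (very different) taxi plan
```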
Imprecise Intent & Diversity
Challenge: Learning to Improve
Completeness of Domain Models
• In traditional “model-intensive” planning, learning is mostly motivated by speedup
– ..and it has gradually become less and less important with the
advent of fast heuristic planners
• In model-lite planning, learning (also) helps in model
acquisition and model refinement.
– Learning from a variety of sources
• Textual descriptions; plan traces; expert demonstrations
– Learning in the presence of background knowledge
• The current model serves as background knowledge for further refinement through learning
• Example efforts
– Much of the DARPA IL program (including our LSP system); PLOW etc.
– Stochastic Explanation-Based Learning (ICAPS 2007 workshop)
Make planning Model-lite ⇒ Make learning knowledge (model) rich
Learning & Planning with Incomplete Models: A Proposal..
• Address the learning and planning problem
– Learning involves
• Updating the prior weights on the axioms
• Finding new axioms
– Planning involves
• Probabilistic planning in the presence of precondition uncertainty
• Consider using MaxSAT to solve problems in the proposed formulation
Domain Model - Blocksworld
(DARPA Integrated Learning Project)
• Represent the incomplete domain with (relational) probabilistic logic:
– Weighted precondition axioms (relate actions with current-state facts):
• 0.9, Pickup(x) -> armempty()
• 1, Pickup(x) -> clear(x)
• 1, Pickup(x) -> ontable(x)
– Weighted effect axioms (relate actions with next-state facts):
• 0.8, Pickup(x) -> holding(x)
• 0.8, Pickup(x) -> not armempty()
• 0.8, Pickup(x) -> not ontable(x)
– Weighted static property axioms (relate facts within a state):
• 1, Holding(x) -> not armempty()
• 1, Holding(x) -> not ontable(x)
Towards Model-lite Planning - Sungwook Yoon
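To make the proposal concrete, here is a minimal, assumption-laden sketch in Python (not the actual proposed encoding): the weighted axioms above become weighted clauses, and a candidate state transition is scored by the total weight of the axioms it satisfies, which is the quantity a weighted MaxSAT solver would maximize over whole plans.

```python
# Ground weighted axioms for one pickup(a) step; weights follow the slide.
# kind "pre" checks the current state, "eff" checks the next state.
AXIOMS = [
    (0.9, "pre", "armempty"),        # precondition: arm should be empty
    (1.0, "pre", "clear_a"),         # precondition: a is clear
    (1.0, "pre", "ontable_a"),       # precondition: a is on the table
    (0.8, "eff", "holding_a"),       # effect: now holding a
    (0.8, "eff", "not armempty"),    # effect: arm no longer empty
    (0.8, "eff", "not ontable_a"),   # effect: a no longer on the table
]

def holds(literal, state):
    """True iff a (possibly negated) ground literal holds in a state."""
    if literal.startswith("not "):
        return literal[4:] not in state
    return literal in state

def transition_weight(state, next_state):
    """Total weight of satisfied axioms: what weighted MaxSAT maximizes."""
    return sum(w for w, kind, lit in AXIOMS
               if holds(lit, state if kind == "pre" else next_state))

s0 = {"armempty", "clear_a", "ontable_a", "clear_b", "ontable_b"}
s1 = {"holding_a", "clear_b", "ontable_b"}
print(transition_weight(s0, s1))  # 5.3: every weighted axiom is satisfied
```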
Can we view the probabilistic plangraph as a Bayes net?
[Figure: a two-level blocksworld planning graph viewed as a Bayes net. Level-0 fact nodes (clear_a, clear_b, armempty, ontable_a, ontable_b) feed action nodes (pickup_a, pickup_b, and noops), which in turn feed level-1 fact nodes (including holding_a and holding_b); the axiom weights (0.5, 0.8, 0.9) become link probabilities, domain static properties can be asserted too (0.9), and goal facts serve as evidence variables.]
How do we find a solution?
MPE (most probable explanation)
There are solvers out there for this
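Here is a tiny, hypothetical illustration of the MPE idea (not the actual encoding; the numbers are invented): action variables are the latent explanation, the achieved goal fact is the evidence, and we brute-force the most probable joint assignment instead of calling a real solver.

```python
from itertools import product

# Tiny invented net: choosing pickup_a (vs. pickup_b) makes holding_a
# true with probability 0.8; evidence says the goal holding_a was achieved.
def joint_prob(pickup_a, holding_a):
    p_action = 0.5                                    # uniform prior over the two actions
    p_fact = (0.8 if pickup_a else 0.05) if holding_a else \
             (0.2 if pickup_a else 0.95)              # noisy effect axiom (invented leak)
    return p_action * p_fact

evidence = {"holding_a": True}
candidates = [dict(pickup_a=a, holding_a=h)
              for a, h in product([True, False], repeat=2)
              if h == evidence["holding_a"]]
mpe = max(candidates, key=lambda v: joint_prob(**v))
print(mpe)  # {'pickup_a': True, 'holding_a': True}: the most probable explanation
```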
MURI 2007: Effective Human-Robot Interaction under Time Pressure
Indiana Univ; ASU; Stanford; Notre Dame
Challenges in Querying Autonomous Databases [CIDR 07; VLDB 07]
• Imprecise Queries: users’ needs are not clearly defined, hence
– Queries may be too general
– Queries may be too specific
• Incomplete Data: databases are often populated by
– Lay users entering data
– Automated extraction
• General Solution: “Expected Relevance Ranking”, combining a Relevance Function with a Density Function
– Challenge: Automated & non-intrusive assessment of relevance and density functions
• However, how can we retrieve similar/incomplete tuples in the first place?
– Challenge: Rewriting a user’s query to retrieve highly relevant similar/incomplete tuples
• Once the similar/incomplete tuples have been retrieved, why should users believe them?
– Challenge: Provide explanations for the uncertain answers in order to gain the user’s trust
QUIC: Handling Query Imprecision & Data Incompleteness in Autonomous Databases
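A minimal sketch of what expected relevance ranking could look like (invented relevance and density functions, not the actual QUIC formulation): each candidate tuple, possibly with missing values, is scored by its relevance to the query weighted by the estimated density (probability) of the missing value matching.

```python
# Hypothetical used-car tuples; None marks a missing (incomplete) value.
cars = [
    {"model": "Civic",  "year": 2004, "body": "sedan"},
    {"model": "Accord", "year": 2005, "body": None},
    {"model": "Civic",  "year": 1998, "body": "coupe"},
]

# Invented density estimate, e.g. P(body = "sedan") learned from complete tuples.
DENSITY = {"sedan": 0.6, "coupe": 0.4}

def relevance(tuple_, query):
    """Invented relevance: similarity of the tuple to the (imprecise) query."""
    score = 1.0 if tuple_["model"] == query["model"] else 0.3
    score *= max(0.0, 1.0 - abs(tuple_["year"] - query["year"]) / 10.0)
    return score

def expected_relevance(tuple_, query):
    """Relevance weighted by how likely the missing value matches the query."""
    if tuple_["body"] is None:
        return relevance(tuple_, query) * DENSITY.get(query["body"], 0.0)
    return relevance(tuple_, query) if tuple_["body"] == query["body"] else 0.0

query = {"model": "Civic", "year": 2004, "body": "sedan"}
for car in sorted(cars, key=lambda c: -expected_relevance(c, query)):
    print(expected_relevance(car, query), car)
```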
Summary: Say Hi to HAAI
• We may want to take HAAI as seriously as we take autonomous agency
– My argument is not that everybody should do it, but rather that it should be seen as “mainstream” rather than as some applied side-issue
• HAAI does emphasize specific technical
challenges: Communication; Teachability;
Human state recognition
• Pursuit of HAAI involves pitfalls (e.g. need to
differentiate HDAI and HAAI) as well as a
broadening of focus (e.g. need to take interface
issues seriously)
• Some steps towards HAAI in planning
Points to Ponder..
• Is HAAI moot without full NLP?
• How do we make progress towards HAAI?
– Is IUI considered progress towards HAAI?
– Is model-lite planning?
– Is learning by X (X = “demonstrations”; “being told”…)?
– Is elicitation of utility models / recognition of intent?
(Recap of the “(Some) Challenges of HAAI” slide: Communication; Teachability; Recognizing the Human’s State)
• Do we (you) agree that we might need human-aware AI?
• Do you think anything needs to change in your current area of interest as a consequence?
• Are there foundational problems in human-aware AI, and if so, what are they?
Epilogue: HAAI is Hard but Needed..
• The challenges posed by HAAI may take
us out of the carefully circumscribed goals
of AI
• Given a choice, we computer scientists would rather not think about messy human interactions..
• But, do we really have a choice?