From: AAAI-86 Proceedings. Copyright ©1986, AAAI (www.aaai.org). All rights reserved.
Constructing and Refining Causal Explanations from an Inconsistent Domain Theory¹

Richard J. Doyle
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract

Recent work in the field of machine learning has demonstrated the utility of explanation formation as a guide to generalization. Most of these investigations have concentrated on the formation of explanations from consistent domain theories. I present an approach to forming explanations from domain theories which are inconsistent due to the presence of abstractions which suppress potentially relevant detail. In this approach, explanations are constructed to support reasoning tasks and are refined in a failure-driven manner. The elaboration of explanations is guided by the structuring of domain theories into layers of abstractions. This work is part of a larger effort to develop a causal modelling system which forms explanations of the underlying causal relations in physical systems. This system utilizes an inconsistent, common-sense theory of the mechanisms which operate in physical systems.
1 The Problem

The field of machine learning has shown a recent shift towards knowledge-intensive learning methods which utilize the construction of explanations [DeJong & Mooney 86, Mahadevan 85, Mitchell et al 86, Winston et al 83]. In these explanation-based learning methods, an explanation derived from a domain theory shows why a particular example is an instance of some concept. After the critical constraints in the explanation are determined, the example and its components are generalized while maintaining these constraints; the result is a generalized recognition rule for examples of the given concept.

This approach is now well understood for domain theories which are consistent, or are at least assumed to be consistent. Explanations derived and generalized from consistent domain theories constitute proofs which can be taken to be correct in the context of all reasoning tasks they may subsequently support. However, most domain theories are not consistent - they incorporate defaults, they omit details, or they otherwise abstract away from a complete account of the constraints which may be relevant to the reasoning tasks to which they are applied. Explanations derived and generalized from inconsistent theories cannot be assumed to be always correct; their inherent abstractions may manifest as inferences which are not justified. The problem addressed in this paper is how to construct plausible explanations despite inconsistent domain theories, and how to refine those explanations when they fail to support reasoning tasks or when the inferences derived from them are not corroborated.

¹This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.
1.1 An Example

Consider a domain theory which describes at a common-sense level the kinds of causal mechanisms that operate in physical systems: flows, mechanical couplings, etc. My system derives from this domain theory a simple causal model of a bathtub which describes two flow mechanism instances: water flows in at the tap and flows out at the drain. This simple model proves inadequate for the planning problem of how to fill the bathtub with water. This reasoning task becomes solvable after my system elaborates the model to describe a mechanical coupling between the plunger and the plug and how the plug blocks the flow of water at the drain.
This elaborated causal explanation includes an interesting intersection between a flow mechanism and a mechanical coupling mechanism. A single physical object - the plug - plays dual roles: it serves both as one half of a mechanical coupling and as barrier to a flow. My system extracts this composed causal mechanism - which might be called "valve" - out of the causal model of the bathtub and generalizes it in the explanation-based learning manner, maintaining the constraint that one physical object plays these two roles.

My system next uses the valve mechanism to explain the causal relations in another physical system - a camera. In a camera, there is a mechanical coupling between the shutter release and the shutter; furthermore, the shutter plays the additional role of barrier to the light flow between the photographed subject and the film. This causal model generates an incorrect prediction when a lens cap is inadvertently left on the lens. My system refines the model by instantiating the lens cap as another barrier to light flow. The model also cannot be used to explain why the shutter does not move when the safety latch on the shutter release is engaged. My system handles this situation by instantiating a latch as a type of barrier to a mechanical coupling - a detail which never appeared in the original valve explanation in the context of the bathtub.
In the example above, a planning problem motivates the elaboration of the bathtub model and prediction failures motivate the refinement of the camera model.

1.2 The Proposed Solution

I take the following view of explanation formation from inconsistent domain theories, as a tool for learning or otherwise: Explanations are constructed in the context of a reasoning task; they are refined, as needed, in an incremental, failure-driven manner. The usefulness of an explanation is relative to the goal of its motivating reasoning task. Similarly, the consistency of an explanation is relative to the set of inferences it supports.

In this paper, I focus on domain theories which are inconsistent because they incorporate a particular kind of abstraction - the suppression of possibly relevant detail through approximation. Approximations may be layered into several levels. Explanations derived from less approximate levels are less likely to support incorrect inferences. I argue that the minimum level to which an explanation must be instantiated depends on the goal of the motivating task. I present two means of refining failed explanations: reinstantiation of the explanation into a situation which has changed, and elaboration of the explanation to a less approximate level in the domain theory with more explanatory power. This approach to refinement uses the layered structure of a domain theory to guide the familiar processes of dependency-directed backtracking and truth maintenance [Doyle 79].
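Stated procedurally, the view above amounts to a simple control loop. The following is a minimal sketch of that loop, not the system's actual code; the Theory class, support_task, and the attempt callback are hypothetical names introduced here for illustration.

```python
# Minimal sketch of the failure-driven policy described above. All names
# here (Theory, support_task, attempt) are hypothetical, not the paper's.

class Theory:
    def __init__(self, levels):
        self.levels = levels            # ordered, most approximate first

    def instantiate(self, level, situation):
        # An explanation records the layer it was built at and the facts
        # that were visible when it was built.
        return {"level": self.levels[level], "facts": frozenset(situation)}

def support_task(attempt, theory, situation):
    """Construct an explanation at the most abstract level; refine it only
    when the motivating task fails."""
    level = 0
    explanation = theory.instantiate(level, situation)
    while True:
        succeeded, observed = attempt(explanation)
        if succeeded:
            return explanation
        if observed - explanation["facts"]:
            # Reinstantiation: same level, the situation has changed.
            situation = set(explanation["facts"]) | observed
            explanation = theory.instantiate(level, situation)
        elif level + 1 < len(theory.levels):
            # Elaboration: a less approximate layer of the domain theory.
            level += 1
            explanation = theory.instantiate(level, situation)
        else:
            return None                 # the theory has bottomed out
```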
2 A Context for the Problem - Causal Modelling

The issue of how to construct and refine explanations from an inconsistent domain theory comes up in my work on causal modelling [Doyle 86]. My causal modelling system learns how physical systems work in the context of reasoning tasks such as planning or prediction. Given a description of how quantity values, structural relations, and geometrical relations in a physical system change over time, my system utilizes a common-sense theory for causality - a representation which supports the description of causal mechanisms, the processes by which effects emerge from causes in this domain - to hypothesize underlying causal mechanisms which explain the observed behavior of the physical system.

In my representation, causal mechanisms require the presence of some kind of medium, or structural link, between the site of the cause and the site of the effect. For example, flows require a channel through which to transfer material and mechanical couplings require a physical connection through which to propagate momentum. Causal mechanisms can be disabled by barriers which decouple cause from effect. For example, flows can be inhibited by a blocked channel and mechanical couplings can be disrupted by a broken physical connection.

A representation for causality and a vocabulary of causal mechanisms² describable within it are currently under development and are being tested in the modelling of a number of physical systems. A generalization hierarchy for these mechanisms is shown in Figure 1. This aspect of my work addresses issues first considered in [Rieger & Grinberg 77].

[Figure 1: Generalization Hierarchy for Causal Mechanisms - a tree rooted at CAUSAL MECHANISM, with PROPAGATION branching into FLOW (e.g. MATERIAL) and TRANSMISSION (e.g. LIGHT, MECHANICAL, ELECTRICAL), FIELD INTERACTION branching into GRAVITATION, ELECTRICITY, and MAGNETISM, and transformations such as THERMOCHEMICAL, ELECTROTHERMAL, PHOTOCHEMICAL, and ELECTROPHOTIC.]

²Flows and mechanical couplings are instances of a class of causal mechanisms I call propagations; they involve similar co-occurring events at different sites. There are also transformations (e.g. photochemical on film, electrophotic in a light bulb, electrothermal in a toaster) involving different co-occurring events at a single site.

The relevant aspect of this domain theory of causal mechanisms for the purposes of this paper is its inconsistency. The theory does not describe all the relevant aspects of the various mechanisms which operate in physical systems. Furthermore, the representation of causality underlying the domain theory suggests a decomposition of the mechanism descriptions into several layers of approximation. I describe these levels of explanation in the next section.
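For concreteness, the vocabulary just described can be pictured as a handful of typed terms: quantities, mechanisms relating an independent and a dependent quantity, a medium, and barriers along that medium. The encoding below is an assumed illustration (the class and field names are mine, not the system's), instantiated for the bathtub model discussed earlier.

```python
# Assumed encoding of the causal-mechanism vocabulary, for illustration
# only; the field and class names are mine. The instances spell out the
# bathtub model at the medium level: two material-flow mechanisms.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Quantity:
    name: str                  # e.g. "amount-of-water-in-tub"
    qtype: str                 # "Amount" for flows, "Position" for couplings
    physical_object: str       # the object this is a quantity of

@dataclass
class Mechanism:
    kind: str                  # e.g. "material-flow", "mechanical-coupling"
    iq: Quantity               # independent (cause-side) quantity
    dq: Quantity               # dependent (effect-side) quantity
    medium: Optional[str] = None                  # channel or connection
    barriers: list = field(default_factory=list)  # objects along the medium

tap_flow = Mechanism(
    "material-flow",
    iq=Quantity("amount-at-tap", "Amount", "tap"),
    dq=Quantity("amount-in-tub", "Amount", "tub"),
    medium="tap-channel")

drain_flow = Mechanism(
    "material-flow",
    iq=Quantity("amount-in-tub", "Amount", "tub"),
    dq=Quantity("amount-at-drain", "Amount", "drain"),
    medium="drain-channel")
```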
2.1 Layers of Explanation in a Domain Theory

There are several levels of causal explanation available in the representation for causality described above, each drawing on the notion of mechanism to a different degree. Each more detailed level introduces additional constraints which are meaningful only in the context provided by the more abstract levels. The higher levels of explanation do not employ a coarser grain size; rather, they are abstract in that they ignore certain potentially relevant conditions.

The most abstract level of explanation in the representation does not incorporate the notion of causal mechanism at all. This explanation merely notes the co-occurrence of two events and verifies that the effect does not precede the cause.³

³This level of explanation removes a different type of abstraction than the other levels. This difference is discussed in the section on types of abstraction.
CO-OCCURRENCE EXPLANATION

  (Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≥(t2, t1))
    ⟹ FunctionalDependence(iq, dq)

The next level of explanation verifies that the quantities whose values are correlated are of the appropriate type for the mechanism. For example, flows are causal links between amount quantities and mechanical couplings are causal links between position quantities.
QUANTITY TYPES EXPLANATION

  (Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≥(t2, t1)
   ∧ IndependentQuantityType(iq) ∧ DependentQuantityType(dq))
    ⟹ FunctionalDependence(iq, dq)

This explanation is an approximation of one which identifies the enabling medium between the physical objects of the quantities whose values are correlated. Note that the medium must be maintained throughout the causal interaction.

MEDIUM EXPLANATION

  (Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≥(t2, t1)
   ∧ IndependentQuantityType(iq) ∧ DependentQuantityType(dq)
   ∧ ∃(m) (Between(m, PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2)
           ∧ MediumType(m)))
    ⟹ FunctionalDependence(iq, dq)
       ∧ Enables(m, FunctionalDependence(iq, dq))

This explanation in turn approximates one which states that there must be no barriers which disrupt the structural link and disable the causal mechanism.

BARRIER EXPLANATION

  (Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≥(t2, t1)
   ∧ IndependentQuantityType(iq) ∧ DependentQuantityType(dq)
   ∧ ∃(m) (Between(m, PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2)
           ∧ MediumType(m)
           ∧ ¬∃(b) (Along(b, m, t1:t2) ∧ BarrierType(b))))
    ⟹ FunctionalDependence(iq, dq)
       ∧ Enables(m, FunctionalDependence(iq, dq))

When a barrier is present along the medium, the functional dependence is instead disabled: Disables(b, FunctionalDependence(iq, dq)).

Finally, this description of barriers can be elaborated to one which states that in general the effectiveness of a barrier depends on how much of the medium it blocks.

VARIABLE BARRIER EXPLANATION

  (Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≥(t2, t1)
   ∧ IndependentQuantityType(iq) ∧ DependentQuantityType(dq)
   ∧ ∃(m) (Between(m, PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2)
           ∧ MediumType(m)
           ∧ ∃(b) (Along(b, m, t1:t2) ∧ BarrierType(b)
                   ∧ ∃(bq) (QuantityOf(bq, b) ∧ IsA(bq, Position)))))
    ⟹ FunctionalDependence(iq, dq)
       ∧ Enables(m, FunctionalDependence(iq, dq))
       ∧ FunctionalDependence(bq, dq)
       ∧ Enables(b, FunctionalDependence(bq, dq))

This level of explanation describes a dependence (between a quantity associated with a barrier and the quantity associated with the effect) which does not appear at any of the other levels.

These levels of explanation are defined for causal mechanisms in general; the particular most detailed levels of explanation of flows and mechanical couplings needed for the bathtub and camera examples are shown below.

MATERIAL FLOW (VARIABLE BARRIER) EXPLANATION

  (Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≥(t2, t1)
   ∧ IsA(iq, Amount) ∧ IsA(dq, Amount)
   ∧ ∃(m) (Between(m, PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2)
           ∧ MediumType(m)
           ∧ ∃(b) (Along(b, m, t1:t2) ∧ Blocks(b, m)
                   ∧ ∃(bq) (QuantityOf(bq, b) ∧ IsA(bq, Position)))))
    ⟹ FunctionalDependence(iq, dq)
       ∧ Enables(m, FunctionalDependence(iq, dq))
       ∧ FunctionalDependence(bq, dq)
       ∧ Enables(b, FunctionalDependence(bq, dq))

LIGHT TRANSMISSION (VARIABLE BARRIER) EXPLANATION

  (Changes(iq, t1) ∧ Changes(dq, t2) ∧ ≥(t2, t1)
   ∧ IsA(iq, Amount) ∧ IsA(dq, Amount)
   ∧ ∃(m) (StraightLinePath(m, PhysicalObjectOf(iq), PhysicalObjectOf(dq), t1:t2)
           ∧ ∃(b) (Along(b, m, t1:t2) ∧ Opaque(b)
                   ∧ ∃(bq) (QuantityOf(bq, b) ∧ IsA(bq, Position)))))
    ⟹ FunctionalDependence(iq, dq)
       ∧ Enables(m, FunctionalDependence(iq, dq))
       ∧ FunctionalDependence(bq, dq)
       ∧ Enables(b, FunctionalDependence(bq, dq))

MECHANICAL COUPLING (BARRIER) EXPLANATION

  (Changes(iq, t1) ∧ ¬Changes(dq, t2) ∧ ≥(t2, t1)
   ∧ IsA(iq, Position) ∧ IsA(dq, Position)
   ∧ ∃(m) (AttachedTo(PhysicalObjectOf(iq), m)
           ∧ AttachedTo(PhysicalObjectOf(dq), m)
           ∧ ∃(b) (AttachedTo(b, m) ∧ Anchored(b))))
    ⟹ Disables(b, FunctionalDependence(iq, dq))
This last explanation describes a latch barrier to a mechanical coupling.

Although this presentation suggests fixed approximation hierarchies, in general there may be several ways to elaborate any level of explanation. For example, a non-rigid physical connection as well as a latch may disable a mechanical coupling.
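One way to see the layering is that each level simply adds conjuncts to the antecedent of the level above it. The sketch below encodes that structure declaratively; the strings are shorthand for the predicates in the schemas above, and the encoding itself is an assumption made for illustration.

```python
# The levels encoded as cumulative condition sets, mirroring the schemas
# above. The condition strings are illustrative stand-ins for predicates.

LEVELS = ["co-occurrence", "quantity-types", "medium",
          "barrier", "variable-barrier"]

ADDED_CONDITIONS = {
    "co-occurrence":    ["Changes(iq,t1)", "Changes(dq,t2)", "t2 >= t1"],
    "quantity-types":   ["IndependentQuantityType(iq)",
                         "DependentQuantityType(dq)"],
    "medium":           ["Exists m: Between(m, obj(iq), obj(dq), t1:t2)",
                         "MediumType(m)"],
    "barrier":          ["Not Exists b: Along(b, m, t1:t2) and BarrierType(b)"],
    "variable-barrier": ["Exists bq: QuantityOf(bq, b) and IsA(bq, Position)"],
}

def conditions(level):
    """All conjuncts an explanation must satisfy at a given level: each
    more detailed level only adds constraints to the coarser ones."""
    idx = LEVELS.index(level)
    return [c for lvl in LEVELS[:idx + 1] for c in ADDED_CONDITIONS[lvl]]

print(len(conditions("medium")))   # 7 conjuncts at the medium level
```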
3 Constructing and Refining Explanations

In this section, I show how the causal explanations of the bathtub and camera are constructed and incrementally refined from the layered, approximate, inconsistent domain theory described in the previous section.
3.1 Construction of an Explanation

Consider the problem of constructing a causal model of a bathtub in the context of a planning problem - to fill a tub with water. A first attempt instantiates a model at the medium level of explanation of the material flow mechanism. This causal explanation is shown in Figure 2.⁴

[Figure 2: Medium Level Explanation of Material Flows in a Bathtub - the figure separates quantities and functional dependencies from physical objects and relations.]

⁴Temporal, enabling, and disabling relations are mostly suppressed in the figures to avoid clutter; such information is used as indicated in the explanation descriptions.

Water flows from the tap through the tub and out of the drain. It is not yet possible to generate a plan for filling the tub with water. Starting a flow at the tap into the tub does not work because of the presence of the drain, an additional medium for flow from the tub. This unsolved planning problem motivates the further expansion of the bathtub model to a more detailed level of explanation, as shown in Figure 3.

[Figure 3: Variable Barrier Level Explanation of a Material Flow and Medium Level Explanation of a Mechanical Coupling in a Bathtub]

The model now reveals how the plug can block the flow at the drain and how the position of the plug is affected by the plunger. A plan can now be generated for filling the bathtub with water: Start a flow of water at the tap and move the plunger to place the plug in the drain. This example illustrates how the level of explanation needed depends on the motivating reasoning problem; in this case, a barrier was needed to solve the planning problem, hence the causal explanation of the bathtub had to go to the barrier level.
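The trigger for the expansion is worth emphasizing: it is the unsolved planning goal, not an incorrect prediction. A small sketch of this failure-driven step follows, with hypothetical names and deliberately toy models.

```python
# Sketch: the unsolved planning goal forces the bathtub model from the
# medium level to the barrier level. Names and structures are hypothetical.

def plan_fill_tub(model):
    """Succeeds only if the model offers a way to stop the drain flow."""
    if "start-flow-at-tap" not in model["actions"]:
        return None
    if "place-plug-in-drain" not in model["actions"]:
        return None                     # the drain still empties the tub
    return ["start-flow-at-tap", "place-plug-in-drain"]

medium_model = {"level": "medium",
                "actions": {"start-flow-at-tap"}}
barrier_model = {"level": "variable-barrier",   # plug and plunger now appear
                 "actions": {"start-flow-at-tap", "place-plug-in-drain"}}

model = medium_model
if plan_fill_tub(model) is None:        # planning failure at this level...
    model = barrier_model               # ...motivates the elaboration
assert plan_fill_tub(model) == ["start-flow-at-tap", "place-plug-in-drain"]
```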
3.2 Generalization of an Explanation

A material flow mechanism and a mechanical coupling mechanism intersect at the plug in the expanded bathtub causal model. My modelling system notes such intersections because they may provide opportunities for extracting and generalizing useful compositions of causal mechanisms. This particular complex mechanism might be called "valve".

Using the hierarchy in Figure 1, my system generalizes the valve concept to other kinds of flows. The definitions of media, barriers, etc. for other types of flow are substituted while maintaining the constraint that a single physical object must serve as both one half of the mechanical coupling mechanism and as barrier to the flow. The generalized valve mechanism for light transmission is shown in Figure 4.
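Below is a sketch of the substitution step, keeping the dual-role constraint explicit; the per-flow-type definitions of media and barriers are illustrative placeholders rather than the system's actual vocabulary.

```python
# Sketch of generalizing "valve" across flow types while keeping the
# dual-role constraint. The flow definitions are illustrative placeholders.

FLOW_TYPES = {
    "material-flow":      {"medium": "channel",
                           "barrier": "blocking-object"},
    "light-transmission": {"medium": "straight-line-path",
                           "barrier": "opaque-object"},
}

def make_valve(flow_type, obj):
    """A valve: one physical object serves both as half of a mechanical
    coupling and as the barrier to a flow of the given type."""
    defs = FLOW_TYPES[flow_type]
    return {
        "flow":     {"type": flow_type, "medium": defs["medium"],
                     "barrier": obj},
        "coupling": {"type": "mechanical-coupling", "half": obj},
        "shared-object": obj,   # the constraint maintained in generalization
    }

bathtub_valve = make_valve("material-flow", "plug")
camera_valve  = make_valve("light-transmission", "shutter")
assert camera_valve["flow"]["barrier"] == camera_valve["coupling"]["half"]
```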
[Figure 4: The Learned Valve Mechanism for Light Transmission]
This learned complex mechanism is used in the construction of a causal model for another physical system - a camera. This causal model is shown in Figure 5.

[Figure 5: Valve Explanation of a Camera]

All of the valve explanations combine the variable barrier explanation level of a flow mechanism and the medium explanation level of the mechanical coupling mechanism. The origins of composed mechanism explanations are recorded, as in Figure 6, so that more detailed levels of explanation in the constituent mechanisms can be accessed if needed.

[Figure 6: Origins of the Valve Explanation for Light Transmission]
3.3 Refinement of an Explanation through Reinstantiation

When a lens cap is placed on a camera, this model supports an incorrect prediction - that light will continue to reach the film. In this case, the level of explanation needed to handle the new situation already appears in the model; the lens cap, like the shutter, is a barrier to light flow. My system instantiates this additional barrier, as in Figure 7. The refined explanation now supports the correct prediction that light does not reach the film in the altered camera.

[Figure 7: Reinstantiated Explanation of a Camera]
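Reinstantiation requires no new schema; it only adds another instance of a term the current level already quantifies over. A hedged sketch of the lens cap case (all structures assumed):

```python
# Sketch of refinement by reinstantiation: the lens cap is a second
# instance of the barrier term the schema already provides. All assumed.

camera_light_flow = {
    "level": "variable-barrier",
    # The shutter is open (moved out of the light path by the release).
    "barriers": [{"object": "shutter", "opaque": True, "in_path": False}],
}

def light_reaches_film(flow):
    return not any(b["opaque"] and b["in_path"] for b in flow["barriers"])

# With the lens cap left on, the model's prediction is incorrect. The
# repair happens at the *same* level: add another barrier instance.
if light_reaches_film(camera_light_flow):
    camera_light_flow["barriers"].append(
        {"object": "lens-cap", "opaque": True, "in_path": True})

assert not light_reaches_film(camera_light_flow)
```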
3.4 Refinement of an Explanation through Elaboration

In some cases, refinement of a failed explanation requires elaborating to a level of explanation which calls on details not yet considered. This kind of refinement is needed in the camera model to handle the situation where a safety latch on the shutter release is engaged. As is, the model supports the incorrect inference that the shutter moves whenever the release moves. The model is repaired when my system recognizes the latch as a barrier, anchored to the mechanical coupling between the release and the shutter, as in Figure 8. The shutter does not move when the release latch is attached. My system formed this explanation by elaborating to the barrier explanation level of the mechanical coupling constituent of the valve mechanism for light transmission (see Figure 6). Although this level of explanation was never reached in the bathtub model, it is accessible in the learned valve mechanism for light transmission used in the camera model.

[Figure 8: Elaborated Explanation of a Camera]
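Because the origins of the composed valve are recorded (Figure 6), elaboration can locate the deeper mechanical coupling schema even though the bathtub use of the valve never needed it. A sketch under the same assumptions as the earlier fragments:

```python
# Sketch of refinement by elaboration through recorded origins: the valve
# remembers which level each constituent mechanism was instantiated at.

VALVE_ORIGINS = {
    "flow":     {"schema": "light-transmission", "level": "variable-barrier"},
    "coupling": {"schema": "mechanical-coupling", "level": "medium"},
}

NEXT_LEVEL = {"medium": "barrier", "barrier": "variable-barrier"}

def elaborate(origins, constituent):
    """Move one constituent of a composed mechanism down one layer."""
    entry = dict(origins[constituent])
    entry["level"] = NEXT_LEVEL[entry["level"]]
    return entry

# The shutter fails to move when the safety latch is engaged; the failure
# lies in the coupling constituent, so only that constituent is elaborated.
refined = elaborate(VALVE_ORIGINS, "coupling")
assert refined["level"] == "barrier"     # the level where latches appear
```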
4 Issues

In this section, I discuss a set of issues relevant to the problem of constructing and refining explanations from a domain theory which is inconsistent.

4.1 Limits on Perception and Use of Empirical Evidence

The justification for employing an explanation to support reasoning in a given situation comes partially from the explanation schema used, and partially from the perceptions which instantiate the existentially quantified terms in that schema. The justification due to the explanation may be compromised by approximations. The justification due to perception may be compromised when the instantiations of terms in an explanation are unobservable due to limits in the available perception equipment. For example, air, which may be unobservable, serves as a thermal conducting medium for heat flow in a toaster. A causal explanation for a toaster based on heat flow might be only partially instantiated.
The loss of justification due to an uninstantiable term can be countered by gathering empirical evidence that an explanation is consistent, e.g., confirming that bread placed in a toaster does indeed become hotter. This is one way in which analytical, i.e., explanation-based, methods can be combined with empirical methods.

Uninstantiable terms in an explanation also can be countered by elaborating an explanation. More detailed levels of explanation can suggest how to obtain indirect empirical evidence for the uninstantiable term. For example, the heat flow explanation in a toaster should be disabled when a thermal insulator is placed between the coils and the bread. Confirming observations that heat flow exists at this level can strongly suggest the presence of the unobservable thermal conducting medium.

4.2 Types of Abstraction

Approximation is the most prevalent type of abstraction appearing between the levels of explanation in the causal mechanism domain theory. Approximations are assumptions that some condition holds or that some constraint is satisfied. For example, the approximation between the quantity types and medium levels of explanation is that an appropriate medium to support a causal mechanism is in place. A more detailed explanation may be correct in situations where an approximate explanation is not.

A different kind of abstraction appears between the barrier and variable barrier levels. Here a continuous description is collapsed into a discrete one. At the barrier level, a barrier either completely disables a mechanism or has no effect at all. At the variable barrier level a barrier may also partially affect a mechanism. This kind of abstraction might be called qualitization. Some situations may not even be describable, much less correctly described, by explanations which incorporate qualitizations. For example, the variable flow out of a bathtub drain or the way the aperture in a camera lens affects light flow cannot be described by the on/off barrier explanation. Barriers may also be selective, e.g., a UV filter on a camera.

Another type of abstraction, not employed in the causal mechanism domain theory, is aggregation. Under aggregation, complex structures at one level of explanation are subsumed under simpler structures, perhaps even single terms, at higher levels. Aggregations involve changes in grain size. The oft-used example is the alternate explanations of gas behavior in terms of the motions of molecules and in terms of the macroscopic properties of volume, temperature, and pressure. Aggregations currently do not appear in the causal mechanism domain theory; the theory stops short of a full physical accounting of the laws which govern the behavior of physical systems.

This enumeration of abstraction types is admittedly preliminary. A recent investigation [Smith et al 85] also has described different abstraction types, and has investigated how explanations fail because of them. I have described in this paper an approach to explanation construction and refinement from domain theories which incorporate approximations and qualitizations.

4.3 Incomplete and Intractable Domain Theories

Even the lowest level of explanation in a domain theory may incorporate abstractions. This is true of the causal mechanism domain theory; terms at the lowest level of explanation may imply missing knowledge or may be arguably inapplicable. The method of explanation refinement described in this paper has no recourse when an incomplete domain theory "bottoms out". A possible course of action in this circumstance is to resort to an inductive method. Another is to invoke some other means of accessing knowledge, perhaps analogy.

Even complete domain theories might make use of layered approximations. A complete theory may involve so much detail as to be intractable. A structuring of such a theory into several approximating levels of explanation allows plausible explanations to be constructed, and maintains a capability for refining those explanations [Tadepalli 85].
4.4 Learning from Experiments

Given an inconsistent domain theory, it is possible to derive more than one plausible, partially justified explanation in many situations. For example, a glowing taillight on an automobile might be explained either by the electrical system of the car or by reflected light from the sun.

I am developing an experiment design capability for distinguishing competing multiple explanations. This capability utilizes the explanation refinement method described in this paper. It appears similar in spirit to that proposed in [Rajamoney et al 85]. In my method, refinements are proposed to one or more of a set of explanations until the explanations support divergent predictions. The refinements specify further instantiations at the same or at an elaborated level. For example, an experiment to distinguish the glowing taillight explanations might elaborate the light flow explanation from the medium level and specify the instantiation of an opaque barrier to disable the hypothesized light transmission. This barrier, importantly, would have no predicted effect on the electrical system of the car.

This approach to experiment design applies equally well to single explanations. Even an explanation with no rivals may be only partially justified because of perception limits. Empirical evidence for the correctness of such an explanation may be gathered via experiments which specify refinements involving additional observable instantiations of terms in the explanation. Such an experiment, involving a toaster, is described in the section on perception above. Experimenting can be viewed as the active gathering of greater justification for fewer and fewer plausible explanations.
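The discrimination loop in the taillight example can be sketched as follows; the candidate explanations are reduced to the predictions they make about a proposed refinement, and everything here is an illustrative assumption rather than the system's implementation.

```python
# Sketch of experiment design for the taillight example: propose a
# refinement on which the candidate explanations' predictions diverge.

candidates = {
    # An opaque cover over the taillight disables reflected sunlight but
    # has no predicted effect on the car's electrical system.
    "electrical-system": {"glows-under-opaque-cover": True},
    "reflected-sunlight": {"glows-under-opaque-cover": False},
}

def design_experiment(candidates, proposed_refinements):
    """Return the first refinement whose predicted outcomes differ across
    the candidate explanations; performing it discriminates among them."""
    for refinement in proposed_refinements:
        predictions = {name: expl[refinement]
                       for name, expl in candidates.items()}
        if len(set(predictions.values())) > 1:
            return refinement, predictions
    return None, {}

experiment, predictions = design_experiment(
    candidates, ["glows-under-opaque-cover"])
assert predictions == {"electrical-system": True,
                       "reflected-sunlight": False}
```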
5 Relation to Other Work
Patil has investigated multi-level causal explanation in a medical domain [Patil et al 81]. He identifies five levels of explanation and describes methods for moving between levels in both directions. The kind of abstraction employed by Patil's system ABEL is aggregation; causal nodes and links at one level are condensed into fewer nodes and links at the next higher level. Elaboration in ABEL supports confirmation of diagnoses and allows the reasoning of the system to be revealed in greater detail and/or to greater resolution to a user. Elaboration in ABEL is not intended to support failure-driven refinement of explanations through the removal of approximations, as described in this paper.

Davis' hardware troubleshooting system employs both aggregations and approximations [Davis 84]. The structure and behavior of digital circuits are described at several levels of aggregation; this provides the troubleshooting system with different grain sizes at which to examine a circuit. Fault models indicate how to lift approximations concerning the possible "paths of interaction" in circuits. Davis' fault models appear to be well-described in my representation for causality. His notions of spurious and inhibited causal pathways correspond to my concepts of medium and barrier.

In general, there may be many ways to repair failed approximate explanations. Smith et al [Smith et al 85] have explored how the task of isolating the source of an explanation failure can be constrained. They show how different types of abstraction in an explanation schema propagate along dependency links to instantiated explanations and lead to different types of failure.
6 Conclusions

I have presented an approach to explanation construction and refinement from inconsistent domain theories which incorporate two types of abstraction - the suppression of potentially relevant constraints and the discretization of continuous representations. In this approach, explanations are elaborated to support new reasoning tasks and to recover from failures. The elaboration process is guided by the structuring of domain theories into layers of abstractions.

This work is taking place in the context of an investigation into the formation of causal models of physical systems. Causal modelling involves the construction and refinement of causal explanations of the behavior of physical systems from a domain theory describing the mechanisms which operate in such systems. The levels of explanation in this domain theory are derived from a representation for causality in physical systems.

Some of the issues related to explanation formation from inconsistent domain theories include: using empirical evidence to complement an explanation, understanding the types of abstraction which render a domain theory inconsistent, dealing with the incompleteness of a domain theory, and designing experiments to distinguish and gather justification for plausible explanations. In addition, a better understanding is needed of the kinds of domain theories which admit to decomposition via layered abstractions, and of the principles which govern the placement of orderings on abstractions.
7 Acknowledgements

Patrick Winston encouraged me to pursue this line of investigation. Randall Davis, Bob Hall, David Kirsh, Rick Lathrop, Jintae Lee and Tomas Lozano-Perez have all engaged in discussions of the ideas in this paper.

References

[Davis 84] Davis, Randall, "Diagnostic Reasoning Based on Structure and Behavior," Artificial Intelligence 24, 1984.

[DeJong & Mooney 86] DeJong, Gerald and Raymond Mooney, "Explanation-Based Learning: An Alternative View," Machine Learning 1, no. 2, 1986.

[Doyle 79] Doyle, Jon, "A Truth Maintenance System," Artificial Intelligence 12, 1979.

[Doyle 86] Doyle, Richard J., "Construction and Refinement of Justified Causal Models Through Multiple Levels of Explanation and Experimenting," Ph.D. thesis, Massachusetts Institute of Technology, forthcoming.

[Mahadevan 85] Mahadevan, Sridhar, "Verification-Based Learning: A Generalization Strategy for Problem-Reduction Methods," 9th IJCAI, 616-623, 1985.

[Mitchell et al 86] Mitchell, Tom M., Richard M. Keller and Smadar T. Kedar-Cabelli, "Explanation-Based Generalization: A Unifying View," Machine Learning 1, no. 1, 1986.

[Patil et al 81] Patil, Ramesh S., Peter Szolovits and William B. Schwartz, "Causal Understanding of Patient Illness in Medical Diagnosis," 7th IJCAI, 1981.

[Rajamoney et al 85] Rajamoney, Shankar, Gerald DeJong and Boi Faltings, "Towards a Model of Conceptual Knowledge Acquisition Through Directed Experimentation," 9th IJCAI, 688-690, 1985.

[Rieger & Grinberg 77] Rieger, Chuck and Milt Grinberg, "The Declarative Representation and Procedural Simulation of Causality in Physical Mechanisms," 5th IJCAI, 1977.

[Smith et al 85] Smith, Reid G., Howard A. Winston, Tom M. Mitchell and Bruce G. Buchanan, "Representation and Use of Explicit Justifications for Knowledge Base Refinement," 9th IJCAI, 673-680, 1985.

[Tadepalli 85] Tadepalli, Prasad V., "Learning in Intractable Domains," 3rd International Machine Learning Workshop, 202-205, 1985.

[Winston et al 83] Winston, Patrick H., Thomas O. Binford, Boris Katz and Michael Lowry, "Learning Physical Descriptions from Functional Definitions, Examples, and Precedents," AAAI-83, 433-439, 1983.