Introduction to Truth Maintenance Systems

A Truth Maintenance System (TMS) is a Problem Solver (PS) module responsible for:
1. Enforcing logical relations among beliefs.
2. Generating explanations for conclusions.
3. Finding solutions to search problems.
4. Supporting default reasoning.
5. Identifying causes for failures and recovering from inconsistencies.
The TMS / IE relationship is the following: inside the Problem Solver, the Inference Engine (IE) passes justifications and assumptions to the TMS, and the TMS returns beliefs and contradictions to the IE.
1. Enforcement of logical relations (constraints) among beliefs.
Every AI problem that is not completely specified requires search. Search utilizes assumptions, which may eventually change. Changing assumptions requires updating the consequences of beliefs. Re-deriving those consequences is most often not desirable, therefore we need a mechanism to maintain and update relations among beliefs.
Example: If (cs-501) and (math-218) then (cs-570).
If (cs-570) and (CIT-core-completed) then (TMS-related-capstone).
If (TMS-related-capstone) then (AI-experience).
The following relations among beliefs follow from these statements:
(AI-experience) if (TMS-related-capstone).
(TMS-related-capstone) if (cs-570), (CIT-core-completed).
etc.
Beliefs can be viewed as propositional variables, and a TMS can be viewed as
a mechanism for processing large collections of logical relations on
propositional variables.
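To make this concrete, here is a minimal Python sketch of the idea (the dictionary layout and the relabel function are illustrative, not part of any actual TMS): justifications are recorded once, so when an assumption changes we can recompute which beliefs hold without re-running the inference rules that produced them.

# Each node maps to a list of justifications; each justification is the
# list of antecedent nodes that must all be believed for the node to hold.
justifications = {
    "cs-570": [["cs-501", "math-218"]],
    "TMS-related-capstone": [["cs-570", "CIT-core-completed"]],
    "AI-experience": [["TMS-related-capstone"]],
}

def relabel(assumptions):
    """Compute the set of believed nodes under the given assumptions."""
    believed = set(assumptions)
    changed = True
    while changed:  # propagate through the justifications to a fixed point
        changed = False
        for node, justs in justifications.items():
            if node not in believed and \
               any(all(a in believed for a in j) for j in justs):
                believed.add(node)
                changed = True
    return believed

print(relabel({"cs-501", "math-218", "CIT-core-completed"}))
# includes cs-570, TMS-related-capstone, AI-experience
print(relabel({"cs-501", "CIT-core-completed"}))
# with math-218 retracted, none of the three consequences is believed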
2. Generation of explanations.
Solving problems is what PSs do. However, a solution alone is often not enough: the PS is expected to provide an explanation for the proposed solution, so that the user can identify the cause of a problem if something goes wrong. To provide explanations, a TMS uses cached inferences. The fundamental assumption behind this idea is that caching inferences once is more beneficial than re-running, possibly many times, the inference rules that generated them.
Example:
Q: Shall I have an AI experience after completing the CIT program?
A: Yes, because of the TMS-related capstone.
Q: What do I need to take a TMS-related capstone?
A: CS-570 and a completed core.
Note: There are different types of TMSs that provide different ways of explaining conclusions (JTMS vs. ATMS). In this example, explaining conclusions in terms of their immediate predecessors (the JTMS way) works much better.
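As a rough illustration, the following Python sketch answers such questions from cached justifications; the support table, rule names, and explain function are ours, and a real JTMS stores considerably more.

support = {  # node -> (informant, antecedents) of its supporting justification
    "cs-570": ("rule-1", ["cs-501", "math-218"]),
    "TMS-related-capstone": ("rule-2", ["cs-570", "CIT-core-completed"]),
    "AI-experience": ("rule-3", ["TMS-related-capstone"]),
}

def explain(node):
    """Explain a belief in terms of its immediate predecessors."""
    if node not in support:
        return node + " is a premise or an assumption."
    informant, antecedents = support[node]
    return node + " because of " + ", ".join(antecedents) + " (via " + informant + ")."

print(explain("AI-experience"))
print(explain("TMS-related-capstone"))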
3. Finding solutions to search problems.
Consider the following graph with nodes A, B, C, D, and E, and edges A-B, A-C, B-D, D-E, and C-E.
Assume you want to color the nodes so that every node is red, green, or yellow, and adjacent nodes are of different colors. Let "1" mean "red", "2" mean "green", and "3" mean "yellow". Then, the following set of constraints describes this problem:
A1 or A2 or A3
B1 or B2 or B3
C1 or C2 or C3
D1 or D2 or D3
E1 or E2 or E3
not (A1 and B1)
not (A2 and B2)
not (A3 and B3)
not (A1 and C1)
not (A2 and C2)
not (A3 and C3)
not (B1 and D1)
not (B2 and D2)
not (B3 and D3)
not (D1 and E1)
not (D2 and E2)
not (D3 and E3)
not (C1 and E1)
not (C2 and E2)
not (C3 and E3)
To find a solution that satisfies all of the constraints, we can use search:
The search tree branches first on A's color (A is red / A is green / A is yellow), then on B's, C's, D's, and E's colors, pruning every branch that violates a constraint.
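The sketch below implements this search in Python as plain chronological backtracking; the edge list is read off the constraints above, and the function names are ours.

# Edges of the graph, read off the "not (Xi and Yi)" constraints.
EDGES = [("A", "B"), ("A", "C"), ("B", "D"), ("D", "E"), ("C", "E")]
NODES = ["A", "B", "C", "D", "E"]
COLORS = ["red", "green", "yellow"]

def consistent(assignment):
    """Check that no edge joins two nodes of the same color."""
    return all(assignment.get(u) != assignment.get(v)
               for u, v in EDGES if u in assignment and v in assignment)

def color(assignment, remaining):
    if not remaining:
        return assignment  # every node colored, all constraints satisfied
    node, rest = remaining[0], remaining[1:]
    for c in COLORS:  # try "node is red", "node is green", "node is yellow"
        assignment[node] = c
        if consistent(assignment):
            result = color(assignment, rest)
            if result is not None:
                return result
        del assignment[node]  # backtrack: retract this choice
    return None

print(color({}, NODES))
# {'A': 'red', 'B': 'green', 'C': 'green', 'D': 'red', 'E': 'yellow'}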
4. Default reasoning and TMS
Many real-world problems cannot be completely specified. That is, the PS must make conclusions based on incomplete information. Typically, the assumption under which such conclusions are drawn is that X is true unless there is evidence to the contrary. This is known as the "Closed-World Assumption" (CWA). Notice that the CWA helps us limit the underlying search space by committing to a certain choice and ignoring the others. The reasoning scheme that utilizes this assumption is called "default (or non-monotonic) reasoning".
Example: Consider the following knowledge base:
Bird(tom) and not Abnormal(tom) → Can_fly(tom)
Penguin(tom) → Abnormal(tom)
Ostrich(tom) → Abnormal(tom)
Bird(tom)
Under the CWA, we assume not Abnormal(tom) (because there is no evidence that Tom is abnormal). Therefore, we can derive Can_fly(tom). A non-monotonic TMS supports this type of reasoning.
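The sketch below is a hand-rolled Python illustration of such a non-monotonic justification (it is not the book's implementation): Can_fly(tom) is justified by Bird(tom) being in and Abnormal(tom) being out, so adding a new fact can retract an earlier conclusion.

def label(facts):
    """Return the believed set, applying the rules with the CWA default."""
    believed = set(facts)
    if "Penguin(tom)" in believed or "Ostrich(tom)" in believed:
        believed.add("Abnormal(tom)")
    # non-monotonic justification: in-list {Bird(tom)}, out-list {Abnormal(tom)}
    if "Bird(tom)" in believed and "Abnormal(tom)" not in believed:
        believed.add("Can_fly(tom)")
    return believed

print(label({"Bird(tom)"}))                  # Can_fly(tom) derived by default
print(label({"Bird(tom)", "Penguin(tom)"}))  # new evidence retracts Can_fly(tom)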
5. Identifying causes for failures and recovering from inconsistencies.
Inconsistencies among beliefs in the KB are always possible, especially if the PS
makes its conclusions based on insufficient information. The most common
reasons for inconsistencies or other failures are the following:
-- Wrong data. Example: “Outside temperature is 320 degrees.”
-- Impossible constraints. Example: (Big-house and Cheap-house and
Nice-house).
-- Non-monotonic inference. The PS is forced to "jump" to a conclusion because of a lack of information, or a lack of time to derive the conclusion.
-- Contradictions due to inconsistent data, conclusions contradicting the
existing data, or inconsistent assumptions.
-- Dynamic data. When the domain evolves, the new domain state may be
considerably different from the previous domain state, and inferences
made in the previous state may no longer be valid.
Cached dependencies among beliefs that the TMS maintains help identify the reason for an inconsistency, and a mechanism called "dependency-directed backtracking" allows the TMS to recover from it. Example: see book, figures 6.1 – 6.4.
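As a simplified illustration of what the cached dependencies buy us, the Python sketch below traces a contradiction back to the assumptions beneath it; those assumptions are the only candidates for retraction. (Dependency-directed backtracking itself does more, e.g., recording the bad combination so it is never tried again; all names here are ours.)

antecedents = {  # node -> antecedents of its supporting justification
    "contradiction": ["big-house", "cheap-house"],
    "big-house": [],   # no antecedents: an assumption
    "cheap-house": [],
}

def culprits(node):
    """Collect the assumptions that a node ultimately rests on."""
    ants = antecedents.get(node, [])
    if not ants:
        return {node}
    return set().union(*(culprits(a) for a in ants))

print(culprits("contradiction"))  # {'big-house', 'cheap-house'}: retract one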
How do the TMS and the IE communicate?
The PS works with:
– assertions (facts, beliefs, conclusions, hypotheses);
– inference rules;
– procedures.
Each one of these is assigned a TMS node.
Example:
N1: (rule (student ?x)
(assert (and (underpaid ?x) (overworked ?x))))
N2: (student Bob)
Note that the IE and the TMS treat nodes differently. Given N1 and N2, the IE
can infer
N3: (and (underpaid Bob) (overworked Bob))
This is possible because the IE treats nodes as logical formulas, while the TMS treats nodes as propositional variables.
TMS nodes
Different types of TMSs support different types of nodes. Here are the basic ones:
- Premise nodes. These are always true.
- Contradiction nodes. These are always false.
- Assumption nodes. These are nodes that the IE wants to believe regardless of whether they are supported by the existing evidence.
- (Regular) nodes. These are nodes that are believed only if there is a valid reason for that.
Each node has a label associated with it. The contents and the structure of the label depend on the type of TMS. In the simplest case, the label may only indicate whether a node is believed (:IN) or not believed (:OUT).
Nodes are complex data structures, where different node properties are stored.
Labels are just one of those properties. Other properties are node type (premise,
assumption, etc.), node support (justifications, antecedents), node consequences,
etc.
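A plausible rendering of such a node structure in Python is sketched below; the field names are illustrative, and the actual slots vary from one TMS flavor to another.

from dataclasses import dataclass, field

@dataclass
class Node:
    datum: object                  # the IE-level assertion, e.g. "(student Bob)"
    kind: str = "regular"          # premise | contradiction | assumption | regular
    label: str = ":OUT"            # :IN (believed) or :OUT (not believed)
    justifications: list = field(default_factory=list)  # reasons to believe it
    support: object = None         # the justification selected as support
    consequences: list = field(default_factory=list)    # justifications it feeds

n2 = Node(datum="(student Bob)", kind="premise", label=":IN")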
TMS justifications
Once a new node, N3, is created by the IE, it can be reported to the TMS
together with the fact that it follows from N1, N2 and MP. This is recorded in the
following form, called the justification:
(N3 Modus-Ponens N2 N1)
Here N3 is called the consequent, Modus-Ponens is the informant, and N1 and N2 are the antecedents of the justification. That is, justifications record relations among beliefs (N1, N2, and N3 in this case), and therefore can be used for explaining consequents and identifying causes for inconsistencies.
The general format of justifications is the following:
(<consequent> <informant> . <antecedents>)
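In Python, the same record could be kept as a simple tuple mirroring this format (our own illustration, not a prescribed representation):

j = ("N3", "Modus-Ponens", ("N2", "N1"))  # (consequent, informant, antecedents)
consequent, informant, ants = j
print(consequent, "holds because of", ants, "via", informant)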
TMS dependency networks
Nodes and justifications form a dependency network. Here is an example
network:
[Diagram: a Node with its justifications, one of which is selected to be the Node's "support", i.e., the reason for the Node to be believed (":IN"); and the Node's consequences, i.e., the justifications for other nodes of the network for which the Node is an antecedent.]
See figures 6.8 and 6.9 for examples.
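The Python fragment below sketches the support-selection step on a toy network (the layout and names are ours): a node is labeled :IN if any of its justifications has all antecedents :IN, and the first such justification becomes its support.

network = {  # node -> list of justifications, each a list of antecedents
    "q": [["a"], ["b", "c"]],  # q has two independent justifications
}

def update(node, in_nodes):
    """Label a node and pick its support from the currently :IN nodes."""
    for just in network.get(node, []):
        if all(a in in_nodes for a in just):
            return ":IN", just  # first valid justification becomes the support
    return ":OUT", None

print(update("q", {"b", "c"}))  # (':IN', ['b', 'c'])
print(update("q", {"b"}))       # (':OUT', None): no valid justification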
TMS / IE interaction
Responsibilities of the IE:
1. Adds assertions and justifications.
2. Makes premises and assumptions.
3. Retracts assumptions.
4. Provides advice on handling contradictions.

Responsibilities of the TMS:
1. Caches beliefs and consequences, and maintains labels.
2. Detects contradictions.
3. Performs belief revision.
4. Generates explanations.
Propositional specification of a TMS
As we have already seen, TMS nodes are propositional variables. Therefore,
we can view TMS justifications as propositional formulas (implications) of the
form:
N1 & N2 & … & Ni → Nj
Here N1, N2, …, Ni, Nj are positive literals, therefore this implication is a Horn
formula.
A TMS can be viewed as a collection of Horn formulas.
There exist polynomial-time inference procedures for knowledge bases of Horn formulas. For example, forward chaining: by just applying MP, we can derive all formulas that logically follow from the KB. This makes it possible for a TMS to answer a variety of queries about the current set of nodes and justifications.
The most fundamental query is whether a node logically follows from a given
TMS state.
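Such a query can be answered by the forward chaining just mentioned; here is a small Python sketch over an illustrative Horn KB (rule and node names are ours):

def follows(query, facts, rules):
    """rules: list of (antecedents, consequent) Horn implications."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for ants, consequent in rules:
            if consequent not in derived and all(a in derived for a in ants):
                derived.add(consequent)  # one application of Modus Ponens
                changed = True
    return query in derived

rules = [(["N1", "N2"], "N3"), (["N3"], "N4")]
print(follows("N4", {"N1", "N2"}, rules))  # True
print(follows("N4", {"N1"}, rules))        # False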
Families of TMSs
There are several families of TMSs, which differ in the representation scheme
they use and the functionality they support:
1. Justification-based TMSs (JTMS). The language used is limited to Horn formulas.
2. Logic-based TMSs (LTMS). These use a full propositional logic language.
3. Assumption-based TMSs (ATMS). Language limited to Horn formulas, but several alternatives (contexts) can be explored at the same time.
4. Non-monotonic JTMSs. Language limited to Horn formulas, but non-monotonic justifications are allowed, making it possible to implement default reasoning.
5. Clause Management Systems (CMS). Their representational power is equivalent to that of LTMSs, but like ATMSs they can support several contexts at the same time.
6. Contradiction-tolerant TMSs. Language limited to Horn formulas, but they support non-monotonic and plausible reasoning and deal explicitly with contradictions in a single context.