Knowledge and reasoning 2
TDDC65 Artificial intelligence and Lisp
Peter Dalenius (petda@ida.liu.se)
Department of Computer and Information Science, Linköping University

Summary of previous lecture – Knowledge-based agents
[Diagram: the agent perceives the real world, stores percepts in its knowledge base, deduces what to do, and executes actions.]
1. How can we represent knowledge?
2. How can we draw conclusions?

Summary of previous lecture – Logic
Truth table: what is the value of a formula? It depends on the interpretation!

  P | Q | P ∧ Q ⇒ P
  --+---+-----------
  F | F |     T
  F | T |     T
  T | F |     T
  T | T |     T

Questions we can ask about the formula P ∧ Q ⇒ P:
– Is it satisfiable? Can it ever be true?
– Is it a tautology? Is it always true?
– Does it entail P ∧ Q? Is it correct to draw the conclusion P ∧ Q?
– Is it equivalent to P ⇒ Q? Do these two formulas always have the same value?

Summary of previous lecture – What is a good argument?
In order to prove that it is correct to draw the conclusion S when we know KB, we can do either of the following:
– Prove KB ╞ S (entailment), using either a truth table or reasoning with interpretations.
– Prove KB ├ S using a sound and complete proof system, e.g. resolution.

Summary of previous lecture – Representing the Wumpus world
There is at least one Wumpus in the cave:
  W1,1 ∨ W1,2 ∨ W1,3 ∨ … ∨ W4,4
If there is a breeze in square (x, y), then there is a pit in one of the four adjacent squares:
  Bx,y ⇔ Px,y+1 ∨ Px,y-1 ∨ Px+1,y ∨ Px-1,y
This is a template for a number of formulas, one per square.
There is only one Wumpus: that one is tricky to represent… We need a more expressive logic!

What's new in first order logic?
Propositional logic (used so far):
– Each symbol represents a proposition, a single statement that says something about the world.
– Several things are hard to formulate.
First order predicate logic:
– An extension of propositional logic that helps us formulate more complex statements.
– We can talk about objects (e.g. the different rooms in the cave), denoted by object constants. Example: A, Room1, John.
– The objects can have properties or relations to each other. These are called predicates. Example: Empty(Room1), Father(John, Mary).
– We have functions that can return objects. Example: nextRoom(Room1), fatherOf(Mary).
– We have quantifiers to say general things: ∀ means "for all" and ∃ means "there exists".
p.247

Sentences in first order logic
A term is an expression that refers to an object: a constant, a variable, or a function application. Example: Room1, x, motherOf(John). Note that a term is not a sentence in itself.
An atomic sentence states a fact. It is formed from a predicate symbol followed by a list of terms. Example: Mother(Anna, John), Long(tailOf(Cat56)).
A complex sentence is formed with connectives, just like in propositional logic. Example: Shining(Sun) ∧ Angry(brother(John)).
A quantified sentence says something about an entire collection of objects:
– ∀x P(x) means that P(x) is true for all objects.
– ∃x P(x) means that P(x) is true for some object.
p.248

More examples of formulas
The sky is blue and the grass is green:
  Blue(Sky) ∧ Green(Grass)   or   Color(Sky, Blue) ∧ Color(Grass, Green)
John is Mary's father (it is ok to use equality):
  fatherOf(Mary) = John   or   Father(John, Mary)
There are nice people:
  ∃x(People(x) ∧ Nice(x))
Everyone has a father:
  ∀x∃y Father(y, x)
p.249
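To make the difference between terms and atomic sentences concrete, here is a minimal sketch of how the example sentences above could be encoded as data. The nested-tuple encoding and the helper function show are illustrative assumptions, not part of the lecture material; the same encoding is reused in the unification sketch later on.

    # A minimal, hypothetical encoding of first order terms and atomic
    # sentences as nested Python tuples: the first element is the
    # predicate/function symbol, the remaining elements are terms.

    # Terms: constants are strings, function applications are tuples.
    john = "John"
    mary = "Mary"
    father_of_mary = ("fatherOf", mary)          # the term fatherOf(Mary)

    # Atomic sentences from the slide "More examples of formulas":
    sentence1 = ("Father", john, mary)           # Father(John, Mary)
    sentence2 = ("Long", ("tailOf", "Cat56"))    # Long(tailOf(Cat56))

    def show(expr):
        """Render a term or atomic sentence back into the usual notation."""
        if isinstance(expr, str):
            return expr
        symbol, *args = expr
        return f"{symbol}({', '.join(show(a) for a in args)})"

    print(show(sentence1))   # Father(John, Mary)
    print(show(sentence2))   # Long(tailOf(Cat56))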
Connections between ∀ and ∃
Nobody likes carrots:
  ¬∃x Likes(x, Carrots)
  ∀x ¬Likes(x, Carrots)
Everybody likes ice cream:
  ∀x Likes(x, IceCream)
  ¬¬∀x Likes(x, IceCream)
  ¬∃x ¬Likes(x, IceCream)
Move ¬ inwards, and the quantifier changes. Compare to De Morgan's laws.
p.252

Interpretations in first order logic
What do first order sentences/formulas actually mean? An interpretation in first order logic consists of four parts:
– The domain, i.e. the set of objects we are talking about.
– A definition of all the constant symbols.
– A definition of all the predicate symbols (properties and relations).
– A definition of all the function symbols.

Sample interpretation N5
– The domain is {0, 1, 2, 3, 4}.
– There is one constant symbol: Smallest has the value 0.
– There are two function symbols: plus(x, y) is defined as addition mod 5, and minus(x, y) is defined as subtraction mod 5.
– There is one predicate symbol: Greater(x, y) is defined as true whenever x is greater than y.

Working with N5
Is the following formula true in N5?
  ∃x∀y(Greater(x, y) ∨ x = y)
The formula says that there exists a greatest number. If we let x = 4, is the following formula true?
  ∀y(Greater(x, y) ∨ x = y)
We can try all possible values of y and see if the inner formula is true (which it is).

Working with N5
Which of the following formulas are true in the interpretation N5?
  ∀x∀y Greater(plus(x, y), x)
  ∀x∀y∀z ((plus(x, y) = z) ⇒ (minus(z, y) = x))
  ∃x∃y (minus(x, y) = Smallest)

Experiences from first order logic
Much easier to express knowledge compared to propositional logic. ☺
Much more complex interpretations, which makes it harder to draw conclusions (no truth tables).
We need a sound and complete proof system! We already have resolution for propositional logic. Can we use that?

1. Convert to propositional logic
Yes, we can use resolution if we transform our first order formulas into propositional formulas. This is called propositionalization. Example: instead of ∀x(Car(x) ⇒ Blue(x)) we write Blue(C1) ∧ Blue(C2) ∧ Blue(C3) if the domain consists of the cars C1, C2 and C3. We will soon see a method for doing this…
p.274

2. Matching atomic sentences
In propositional logic it is easy to match positive and negative literals: P resolves against ¬P. In first order logic it is not immediately clear how to match atomic sentences with different inner structure, for example the clauses
  P(f(a), b)   and   ¬P(x, b) ∨ R(x, c)

Unification
Atomic sentences with different terms can be matched by unification, a form of pattern matching that binds variables to values. The two clauses in the example match if the variable x is bound to the term f(a); the binding must also be applied in the resulting clause:
  P(f(a), b)       ¬P(x, b) ∨ R(x, c)
  {x = f(a)}
  R(f(a), c)
p.275

Resolution in first order logic
If we know KB, is S a valid conclusion? KB ├ S is the same question as: is the set of formulas KB ∪ {¬S} unsatisfiable? We transform the formulas into a new set of quantifier-free formulas in CNF and apply resolution (using unification). If we can prove a contradiction, then the answer to our original question is yes!

Transformation
Our goal is to transform a first order formula into a quantifier-free formula in CNF so that we can use resolution:
1. Rewrite ⇔ and ⇒ according to the rules.
2. Move ¬ inwards.
3. Standardize variables.
4. Skolemize (drop existential quantifiers).
5. Drop universal quantifiers.
6. Distribute ∨ over ∧.
Note that the resulting formula is not equivalent to the original one, but it is satisfiable if and only if the original one is.
p.296
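The unification step described above (matching P(f(a), b) against ¬P(x, b)) can be sketched in a few lines, reusing the tuple encoding from the earlier sketch. Variables are marked with a leading "?"; the occurs check and full composition of substitutions are deliberately glossed over, so this is an illustrative simplification rather than the textbook algorithm. Note that we unify the atoms; the negation sign is handled by the resolution rule itself.

    # A minimal unification sketch. Terms reuse the tuple encoding from
    # the earlier sketch; variables are strings starting with "?".

    def unify(a, b, subst=None):
        """Return a substitution making a and b equal, or None if impossible."""
        if subst is None:
            subst = {}
        if a == b:
            return subst
        if isinstance(a, str) and a.startswith("?"):
            return unify_var(a, b, subst)
        if isinstance(b, str) and b.startswith("?"):
            return unify_var(b, a, subst)
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None

    def unify_var(var, value, subst):
        """Bind var to value, respecting an existing binding if there is one."""
        if var in subst:
            return unify(subst[var], value, subst)
        return {**subst, var: value}

    # The example from the slide: P(f(a), b) against P(x, b).
    print(unify(("P", ("f", "a"), "b"), ("P", "?x", "b")))   # {'?x': ('f', 'a')}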
Skolemize with ∃ at the beginning
Assume that the following formula is true:
  ∃x∀y(Greater(x, y) ∨ x = y)
If there exists such an x, we can choose that x and call it a, so the formula can be written
  ∀y(Greater(a, y) ∨ a = y)
Here a is called a Skolem constant.

Skolemize with ∃ inside
Assume that the following formula is true:
  ∀x(∃y Greater(x, y) ∨ x = Smallest)
When ∃ is inside a ∀, we cannot introduce a Skolem constant: the y that x is greater than might not be the same y for all x. Instead we introduce a Skolem function f, so the formula becomes
  ∀x(Greater(x, f(x)) ∨ x = Smallest)

Transformation example
  ∀x∃y(P(x,y) ⇒ ¬∀x(Q(x,y) ∨ P(x,y)))
rewrite ⇒:
  ∀x∃y(¬P(x,y) ∨ ¬∀x(Q(x,y) ∨ P(x,y)))
move ¬ inwards:
  ∀x∃y(¬P(x,y) ∨ ∃x ¬(Q(x,y) ∨ P(x,y)))
  ∀x∃y(¬P(x,y) ∨ ∃x(¬Q(x,y) ∧ ¬P(x,y)))
standardize variables:
  ∀x∃y(¬P(x,y) ∨ ∃z(¬Q(z,y) ∧ ¬P(z,y)))
Skolemize:
  ∀x(¬P(x,f(x)) ∨ (¬Q(g(x),f(x)) ∧ ¬P(g(x),f(x))))
drop universal quantifiers:
  ¬P(x,f(x)) ∨ (¬Q(g(x),f(x)) ∧ ¬P(g(x),f(x)))
distribute ∨ over ∧:
  (¬P(x,f(x)) ∨ ¬Q(g(x),f(x))) ∧ (¬P(x,f(x)) ∨ ¬P(g(x),f(x)))
set of clauses:
  {¬P(x,f(x)) ∨ ¬Q(g(x),f(x)), ¬P(x,f(x)) ∨ ¬P(g(x),f(x))}

A complete example
From "Horses are animals", it follows that "The head of a horse is the head of an animal". Demonstrate that this inference is valid by carrying out the following steps:
– Translate the premise and the conclusion to the language of first order logic, using the predicates HeadOf(h, x), Horse(x) and Animal(x).
– Negate the conclusion. Convert the premise and the negated conclusion into CNF.
– Use resolution to show that the conclusion follows from the premise (i.e. deduce a contradiction from the negated conclusion).

1. Formulas in first order logic
The premise:
  ∀x(Horse(x) ⇒ Animal(x))
The conclusion:
  ∀x∃y(HeadOf(y,x) ∧ Horse(x) ⇒ Animal(x))

2. Transform to CNF
The premise:
  ∀x(Horse(x) ⇒ Animal(x))
convert ⇒:
  ∀x(¬Horse(x) ∨ Animal(x))
drop ∀:
  ¬Horse(x) ∨ Animal(x)
set of clauses:
  {¬Horse(x) ∨ Animal(x)}
The negated conclusion:
  ¬∀x∃y(HeadOf(y,x) ∧ Horse(x) ⇒ Animal(x))
convert ⇒:
  ¬∀x∃y(¬(HeadOf(y,x) ∧ Horse(x)) ∨ Animal(x))
move ¬ inwards:
  ∃x∀y ¬(¬(HeadOf(y,x) ∧ Horse(x)) ∨ Animal(x))
  ∃x∀y(HeadOf(y,x) ∧ Horse(x) ∧ ¬Animal(x))
Skolemize:
  ∀y(HeadOf(y,a) ∧ Horse(a) ∧ ¬Animal(a))
drop ∀:
  HeadOf(y,a) ∧ Horse(a) ∧ ¬Animal(a)
set of clauses:
  {HeadOf(y,a), Horse(a), ¬Animal(a)}

3. Resolution (using unification)
The clauses are ¬Horse(x) ∨ Animal(x), HeadOf(y,a), Horse(a) and ¬Animal(a).
Resolving ¬Horse(x) ∨ Animal(x) with Horse(a) using the unifier {x = a} gives Animal(a).
Resolving Animal(a) with ¬Animal(a) gives the empty clause, a contradiction, so the conclusion follows from the premise.
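The refutation above can be carried out mechanically. The sketch below assumes the unify function and the tuple term encoding from the earlier sketches are in scope; clauses are sets of signed literals, and resolve, substitute and the clause names are illustrative assumptions forming a simplified binary resolution step, not a complete prover.

    # A sketch of the refutation above, continuing from the unification
    # sketch (unify is assumed to be in scope). A clause is a frozenset
    # of literals; a literal is a pair (sign, atom), sign True = positive.

    def substitute(expr, subst):
        """Apply a substitution to a term or atom."""
        if isinstance(expr, str):
            return subst.get(expr, expr)
        return tuple(substitute(part, subst) for part in expr)

    def resolve(clause1, clause2):
        """Yield all binary resolvents of two clauses."""
        for sign1, atom1 in clause1:
            for sign2, atom2 in clause2:
                if sign1 != sign2:
                    subst = unify(atom1, atom2)
                    if subst is not None:
                        rest = (clause1 - {(sign1, atom1)}) | (clause2 - {(sign2, atom2)})
                        yield frozenset((s, substitute(a, subst)) for s, a in rest)

    premise = frozenset({(False, ("Horse", "?x")), (True, ("Animal", "?x"))})
    negated = [frozenset({(True, ("HeadOf", "?y", "a"))}),
               frozenset({(True, ("Horse", "a"))}),
               frozenset({(False, ("Animal", "a"))})]

    step1 = next(resolve(premise, negated[1]))   # {(True, ('Animal', 'a'))}
    step2 = next(resolve(step1, negated[2]))     # frozenset() -- the empty clause
    print(step1, step2)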
Formal languages

  Language             | What exists in the world?         | What an agent believes about facts
                       | (ontological commitment)          | (epistemological commitment)
  ---------------------+-----------------------------------+------------------------------------
  Propositional logic  | facts                             | true/false/unknown
  First order logic    | facts, objects, relations         | true/false/unknown
  Temporal logic       | facts, objects, relations, times  | true/false/unknown
  Probability theory   | facts                             | degree of belief 0..1
  Fuzzy logic          | facts with degree of truth 0..1   | known interval value

p.244

Knowledge engineering
1. Identify the task.
2. Assemble the relevant knowledge.
3. Decide on a vocabulary of predicates, functions, and constants.
4. Encode general knowledge about the domain.
5. Encode a description of the specific problem instance.
6. Pose queries to the inference procedure and get answers.
7. Debug the knowledge base.
p.261

Situation calculus
Both propositional and first order logic by default assume a static world that does not change, but an intelligent agent has to be able to reason about the results of actions and how they change the world. We will briefly discuss situation calculus, implemented in first order logic, which gives us that possibility. We will use the Wumpus world as an example.
A situation encodes everything there is to know about the world at a given time. The function Result(a, s) names the new situation after doing action a in situation s.
Fluents are functions and predicates that vary from one situation to the next. Example: ¬Holding(G1, S0).
Eternal functions and predicates indicate what is true in all situations. Example: Gold(G1).
p.328

Describing actions
The simplest way to describe actions is by possibility axioms and effect axioms.
Examples of possibility axioms:
  At(A,x1,s) ∧ Adjacent(x1,x2) ⇒ Poss(Go(x1,x2),s)
  Gold(g) ∧ At(A,x,s) ∧ At(g,x,s) ⇒ Poss(Grab(g),s)
Examples of effect axioms:
  Poss(Go(x1,x2),s) ⇒ At(A,x2,Result(Go(x1,x2),s))
  Poss(Grab(g),s) ⇒ Holding(g,Result(Grab(g),s))

Example of initial knowledge base
  At(A,[1,1],S0)            the agent is at [1,1]
  At(G1,[1,2],S0)           object G1 is at [1,2]
  Gold(G1)                  object G1 is the gold
  Adjacent([1,1],[1,2])     [1,1] is next to [1,2]
  Adjacent([1,2],[1,1])     [1,2] is next to [1,1]
Plus the possibility and effect axioms…

Searching for gold
Our first step is Go([1,1],[1,2]), which is possible according to the knowledge base, including the possibility and effect axioms. Our next step is Grab(G1), since the gold is in [1,2], but this is not possible! From the knowledge base and the axioms we cannot deduce that the gold is in [1,2] in the situation S1 = Result(Go([1,1],[1,2]), S0). Intuitively this should be obvious, but it is not stated in our axioms!

The frame problem
The effect axioms say what changes, but they do not say what stays the same (e.g. that the gold is still in [1,2]). This is an example of the frame problem. In the real world, most things stay the same from one situation to another. How can we encode this knowledge? If we have A actions and F fluent predicates in our language, we need A·F frame axioms (most of which say that a fluent is not affected at all). This seems difficult!

Solving the frame problem
Instead of writing out the effects of each action, we consider how each fluent predicate evolves over time. We use successor-state axioms instead:
  action is possible ⇒
    (fluent is true in the result state ⇔
      the action's effect made it true ∨ it was true before and the action left it alone)
Example:
  Poss(a,s) ⇒ (At(A,y,Result(a,s)) ⇔ a = Go(x,y) ∨ (At(A,y,s) ∧ a ≠ Go(y,z)))
p.331

Problems in situation calculus
– Representational frame problem: how can we represent what stays the same?
– Inferential frame problem: how can we find the result of a sequence of n actions in a reasonable time?
– Ramification problem: how can we represent implicit effects, e.g. that the gold moves with the agent when grabbed?
– Qualification problem: what happens if the agent crashes inside the cave?

You should be able to…
– Explain basic concepts (e.g. tautology, satisfiability)
– Formulate sentences in both propositional and first order logic
– Decide if a formula is true or false in a given interpretation
– Use resolution in both propositional and first order logic
– Briefly discuss advanced concepts (e.g. different kinds of logic)
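As a closing illustration of the searching-for-gold example and the successor-state treatment of the frame problem above, here is a minimal simulation sketch. The Python encoding of situations as action sequences, and the functions result, at and possible, are illustrative assumptions rather than course material.

    # A sketch of the searching-for-gold example with a successor-state-style
    # fluent. A situation is the sequence of actions performed so far;
    # S0 is the empty sequence.

    S0 = ()
    ADJACENT = {("1,1", "1,2"), ("1,2", "1,1")}
    GOLD = {"G1"}

    def result(action, s):
        """Result(a, s): the situation after doing action a in situation s."""
        return s + (action,)

    def at(obj, square, s):
        """Successor-state-style definition of the fluent At(obj, square, s)."""
        if s == S0:                                   # the initial knowledge base
            return (obj, square) in {("A", "1,1"), ("G1", "1,2")}
        *rest, action = s
        previous = tuple(rest)
        if action[0] == "Go" and obj == "A":          # Go changes where the agent is
            return square == action[2]
        return at(obj, square, previous)              # everything else stays the same

    def possible(action, s):
        """Possibility axioms for Go and Grab."""
        if action[0] == "Go":
            return at("A", action[1], s) and (action[1], action[2]) in ADJACENT
        if action[0] == "Grab":
            g = action[1]
            return g in GOLD and any(at("A", sq, s) and at(g, sq, s)
                                     for sq in ("1,1", "1,2"))
        return False

    S1 = result(("Go", "1,1", "1,2"), S0)
    print(possible(("Go", "1,1", "1,2"), S0))   # True
    print(at("G1", "1,2", S1))                  # True: the gold is still in [1,2]
    print(possible(("Grab", "G1"), S1))         # True, thanks to the frame treatment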