
Chapter 6 Logical Agents

In previous chapters

 agents solve problems by searching
 no knowledge (past experience) is used

Humans solve problems

 by reusing knowledge
 new facts are concluded by reasoning over knowledge
 even in a partially observable environment

Reasoning is very important

 especially in a world where the results of actions are unknown

Knowledge-based agent

The central component of a KB agent

 its memory – the knowledge base, or KB
 a KB is a set of representations of facts about the world
 each of these representations is called a sentence

The sentences are expressed in a language called

 the knowledge representation language (its syntax)

Design of knowledge-based agent

The state diagram, transition operators, and search tree

 no longer applicable to building a KB agent, because we don't know the results of actions

A new design (the design of a KB agent) includes:

 a formal language to express its knowledge
 a means of carrying out reasoning in this language

These two elements together constitute

 a logic

Design of knowledge-based agent

TELL and ASK

 the way to add new sentences
 and the way to query what sentences the KB contains

Another main component of a KB agent

 the inference engine
 this answers the queries posed by ASK
 based on the sentences TELLed into the KB

[Figure: the agent TELLs sentences to, and ASKs queries of, the inference engine, which operates over the KB]

Design of knowledge-based agent

The query

 may not be answered directly from the sentences
 but usually indirectly

At the initial stage of the KB

 before anything is TELLed
 it contains some background knowledge about the domain
 example: location(desk, kitchen), …

How a KB performs

Each time the agent program is called, it does two things:

1. it TELLs the KB what it perceives
2. it ASKs the KB what action to perform

 reasoning
 the process of concluding the action from the sentences in the KB

This reasoning is done logically

 and the action found is provably better than all others
 (given that the agent knows its goals)
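A minimal sketch of this two-step loop in Python; the KB object and the sentence-building helpers below are hypothetical placeholders, not the chapter's API:

def make_percept_sentence(percept, t):
    return ("percept", percept, t)     # placeholder encoding

def make_action_query(t):
    return ("action?", t)              # placeholder query

def make_action_sentence(action, t):
    return ("did", action, t)          # placeholder encoding

class KBAgent:
    def __init__(self, kb):
        self.kb = kb                   # starts with background knowledge
        self.t = 0                     # time step counter

    def agent_program(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))  # 1. TELL the percept
        action = self.kb.ask(make_action_query(self.t))       # 2. ASK for an action
        self.kb.tell(make_action_sentence(action, self.t))    # remember what was done
        self.t += 1
        return action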

The WUMPUS world environment

PEAS description

Performance measure

 +1000 for picking up the gold
 -1000 for falling into a pit, or being eaten by the wumpus
 -1 for each action taken
 -10 for using up the arrow

Environment

 a 4x4 grid of rooms
 the agent always starts in square [1,1], facing right
 locations of the gold and the wumpus
 chosen uniformly at random from the squares other than [1,1]
 pits
 every square, except [1,1], may be a pit with probability 0.2

Actuators (for the agent)

 move forward, turn left, or turn right by 90°
 if there is a wall, moving forward has no effect
 the agent dies if it
 enters a square with a pit, or with a live wumpus
 it is safe to enter if the wumpus is dead
 Grab
 pick up an object that is in the same square
 Shoot
 fire an arrow in a straight line in the facing direction
 it continues until it hits the wumpus, or a wall
 can be used only once

Sensors – there are 5

 perceive a stench if the agent is
 in the square with the wumpus (live or dead)
 or in a directly adjacent square
 perceive a breeze if
 in a square directly adjacent to a pit
 perceive a glitter
 in the square containing the gold
 perceive a bump
 when the agent walks into a wall
 perceive a scream (in every square [x,y])
 when the wumpus is killed

The percept is then expressed

 as a list of five elements
 e.g., in a square with stench and breeze only, the percept looks like
 [Stench, Breeze, None, None, None]

Restrictions on the agent

 it cannot perceive the squares adjacent to itself
 but only its own location
 so the environment is partially observable
 the actions are contingent (nondeterministic)

In most cases,

 the agent can retrieve the gold safely

In about 21% of the environments,

 there is no way to succeed

Examples:

 the squares around the gold are all pits
 or the gold is in a square containing a pit

Based on these examples,

 thinking logically (rationally) lets the agent take provably correct actions

Propositional logic

The symbols of propositional logic are

 the logical constants True and False
 propositional symbols such as P and Q
 the logical connectives ¬, ∧, ∨, ⇒, ⇔, and parentheses ()

Sentences

 True and False are themselves sentences
 a propositional symbol such as P or Q is itself a sentence
 wrapping "( )" around a sentence yields a sentence
 a sentence can be formed by combining sentences with one of the five logical connectives:

 ¬ : negation
 ∧ : conjunction
 ∨ : disjunction
 ⇒ : implication. P ⇒ Q: if P then Q (P is the antecedent/premise, Q the conclusion/consequent)
 ⇔ : biconditional

An atomic sentence

 one containing only a single symbol

A literal

 either an atomic sentence
 or a negated one

A complex sentence

 a sentence constructed from simpler sentences using the logical connectives
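As a concrete illustration (a sketch; the class names are my own, not the book's), sentences can be represented as a small syntax tree, with atomic sentences at the leaves:

from dataclasses import dataclass

@dataclass
class Symbol:          # atomic sentence, e.g. P or Q
    name: str

@dataclass
class Not:             # negation: the literal ¬P is Not(Symbol("P"))
    operand: object

@dataclass
class BinOp:           # the four binary connectives
    op: str            # "and", "or", "implies", or "iff"
    left: object
    right: object

# the complex sentence (P ∨ Q) ⇒ ¬R
s = BinOp("implies", BinOp("or", Symbol("P"), Symbol("Q")), Not(Symbol("R")))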

Semantics

The semantics of propositional logic

 simply an interpretation of truth values for the symbols
 assign True or False to each propositional symbol
 e.g., a model m1 = {P1,2 = false, P2,2 = false, P3,1 = true}

The easiest way to summarize the models

 a truth table
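Operationally (a minimal sketch; the sentence is made up for illustration), a model is just a truth assignment, and the semantics of a sentence is its value once the assignments are plugged in:

# the model m1 from above
m1 = {"P12": False, "P22": False, "P31": True}

# evaluate the sentence ¬P1,2 ∧ (P2,2 ∨ P3,1) in model m1
value = (not m1["P12"]) and (m1["P22"] or m1["P31"])
print(value)   # True: m1 satisfies this sentence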

A simple knowledge base

Use the wumpus world as an example

 where only pits are defined

For each i, j:

 Pi,j = true if there is a pit in [i,j]
 Bi,j = true if there is a breeze in [i,j]

The knowledge base includes (rules)

 there is no pit in [1,1]:

R1: ¬P1,1

 a square is breezy if and only if there is a pit in a neighboring square
 such a rule must be stated for every square; we just show several:

R2: B1,1 ⇔ (P1,2 ∨ P2,1)
R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

Breeze percepts

 from the agent during runtime

Then the KB contains

R4: ¬B1,1
R5: B2,1

 the five rules, R1 to R5

Inference

Based on the existing KB

 a set of facts and rules
 we ASK a query, e.g., is P2,2 true?

Inference is performed to give the answer

 in this case, a truth table is used

For a large KB,

 a large amount of memory is necessary
 to maintain the truth table
 if there are n symbols in the KB
 there are 2^n rows (models) in the truth table
 which is O(2^n) – prohibitive
 the time required is also long
 so, we use inference rules
 instead of truth tables
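A sketch of truth-table (model-checking) inference for this pit example; encoding each rule as a Python predicate over a model is my own framing:

from itertools import product

symbols = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]

KB = [
    lambda m: not m["P11"],                                     # R1: ¬P1,1
    lambda m: m["B11"] == (m["P12"] or m["P21"]),               # R2
    lambda m: m["B21"] == (m["P11"] or m["P22"] or m["P31"]),   # R3
    lambda m: not m["B11"],                                     # R4: ¬B1,1
    lambda m: m["B21"],                                         # R5: B2,1
]

def tt_entails(kb, query):
    # KB |= query iff the query holds in every model that satisfies the KB.
    # Enumerates all 2^n models, hence the O(2^n) cost noted above.
    for values in product([False, True], repeat=len(symbols)):
        m = dict(zip(symbols, values))
        if all(rule(m) for rule in kb) and not query(m):
            return False
    return True

print(tt_entails(KB, lambda m: not m["P12"]))   # True:  KB |= ¬P1,2
print(tt_entails(KB, lambda m: not m["P22"]))   # False: P2,2 cannot be decided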

Reasoning patterns in propositional logic

An inference rule

 a rule capturing a certain pattern of inference
 we write α ⊢ β, or display α above a horizontal bar with β below it
 the standard list of inference rules includes Modus Ponens and And-Elimination

All logical equivalences in the following

 can be used as inference rules

We start with R1 through R5 and show how to prove ¬P1,2, i.e., there is no pit in [1,2]

 from R2, apply biconditional elimination:
 (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)
 apply And-Elimination to get (P1,2 ∨ P2,1) ⇒ B1,1
 its contrapositive is ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)
 Modus Ponens with R4 (¬B1,1) gives ¬(P1,2 ∨ P2,1)
 De Morgan's rule gives ¬P1,2 ∧ ¬P2,1

The preceding derivation – called a proof

 a sequence of applications of inference rules
 similar to searching for a solution path
 every node is viewed as a sentence
 we are trying to reach the goal sentence
 hence all the algorithms in Ch.3 and Ch.4 can be used
 but the problem is hard: propositional satisfiability is NP-complete
 an enormous search tree of sentences can be generated – exponential growth

Resolution

This is a complete inference algorithm

 it can derive all true conclusions from a set of premises

Consider the wumpus example

 the agent
 returns from [2,1] to [1,1]
 then goes to [1,2]
 perception?
 stench, but no breeze
 all these facts are added to the KB

Similarly to how we obtained R10, we know

 R11: ¬B1,2 and R12: B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3)
 which give R13: ¬P2,2 and R14: ¬P1,3

Apply biconditional elimination to R3, then Modus Ponens with R5, to get

 R15: P1,1 ∨ P2,2 ∨ P3,1

Then apply the resolution rule: ¬P2,2 in R13 resolves with P2,2 in R15

 R16: P1,1 ∨ P3,1

We know [1,1] is not a pit: ¬P1,1 in R1 resolves with P1,1 in R16

 R17: P3,1

The previous rules are unit resolution

 where l and m are complementary literals, i.e., m = ¬l
 each disjunction of literals is called a clause
 and a clause consisting of a single literal (like m) is called a unit clause

Unit resolution: from (l1 ∨ … ∨ lk) and m, where li and m are complementary, infer

 l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk

The full resolution rule: from (l1 ∨ … ∨ lk) and (m1 ∨ … ∨ mn), where li and mj are complementary, infer the disjunction of all the remaining literals:

 l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj−1 ∨ mj+1 ∨ … ∨ mn

Two more examples for resolution

One more technical aspect of resolution

 factoring – removal of multiple copies of a literal
 e.g., resolving (A ∨ B) with (A ∨ ¬B) generates (A ∨ A) = A
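A sketch of a single resolution step in Python; representing a clause as a frozenset of signed literals makes factoring automatic, because duplicate literals collapse:

def resolve(c1, c2):
    # A literal is (symbol, sign); a clause is a frozenset of literals.
    # Returns every resolvent obtained by cancelling a complementary pair.
    resolvents = []
    for (name, sign) in c1:
        if (name, not sign) in c2:
            rest = (c1 - {(name, sign)}) | (c2 - {(name, not sign)})
            resolvents.append(frozenset(rest))
    return resolvents

A, B, notB = ("A", True), ("B", True), ("B", False)
print(resolve(frozenset({A, B}), frozenset({A, notB})))
# [frozenset({('A', True)})]  -- (A ∨ A) has been factored to A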

Conjunctive Normal Form

Resolution rule – a weak point

 it applies only to disjunctions of literals (clauses)
 but most sentences are conjunctive combinations
 so we transform the sentences into CNF
 i.e., expressed as a conjunction of disjunctions of literals
 a sentence in k-CNF has at most k literals per clause:

(l1,1 ∨ … ∨ l1,k) ∧ … ∧ (ln,1 ∨ … ∨ ln,k)

Conversion procedure (any sentence can be converted to CNF):

 1. eliminate ⇔ : replace α ⇔ β with (α ⇒ β) ∧ (β ⇒ α)
 2. eliminate ⇒ : replace α ⇒ β with ¬α ∨ β
 3. move ¬ inwards using De Morgan's rules and double-negation elimination
 4. distribute ∨ over ∧
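As a quick check of the procedure (a sketch using SymPy's logic module, which is not part of the chapter), converting R2 to CNF:

from sympy import symbols
from sympy.logic.boolalg import Equivalent, to_cnf

B11, P12, P21 = symbols("B11 P12 P21")

# R2: B1,1 ⇔ (P1,2 ∨ P2,1)
print(to_cnf(Equivalent(B11, P12 | P21)))
# expected shape (exact ordering may vary):
# (B11 | ~P12) & (B11 | ~P21) & (P12 | P21 | ~B11)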

Resolution algorithm

Inference based on resolution

 proof by contradiction, or refutation
 i.e., show that (KB ∧ ¬α) is unsatisfiable
 to prove α, first assume ¬α and see whether the result can be true
 if (KB ∧ ¬α) is unsatisfiable then KB ⊨ α; if it is satisfiable, α is not entailed

The first step: all sentences in (KB ∧ ¬α)

 are converted into CNF
 then the resolution rule is applied to the resulting clauses
 if the empty clause is derived, we have a contradiction and α is proved; otherwise α is not entailed
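A compact sketch of the refutation loop, reusing the frozenset clause encoding from the resolution example above (KB and query must already be in CNF):

def resolve(c1, c2):
    out = []
    for (name, sign) in c1:
        if (name, not sign) in c2:   # complementary pair: cancel it
            out.append(frozenset((c1 - {(name, sign)}) |
                                 (c2 - {(name, not sign)})))
    return out

def pl_resolution(kb_clauses, negated_query_clauses):
    # KB |= α iff KB ∧ ¬α resolves down to the empty clause.
    clauses = set(kb_clauses) | set(negated_query_clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:            # empty clause: contradiction, α proved
                        return True
                    new.add(r)
        if new <= clauses:               # nothing new: α is not entailed
            return False
        clauses |= new

# R2 in CNF plus R4, asking whether ¬P1,2 holds (so we add its negation P1,2):
kb = [frozenset({("B11", False), ("P12", True), ("P21", True)}),
      frozenset({("P12", False), ("B11", True)}),
      frozenset({("P21", False), ("B11", True)}),
      frozenset({("B11", False)})]
print(pl_resolution(kb, [frozenset({("P12", True)})]))   # True: KB |= ¬P1,2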

Forward and backward chaining

In many practical situations,

 full resolution is not needed
 reason?
 real-world KBs often contain only Horn clauses
 a Horn clause is a disjunction of literals of which at most one is positive
 e.g., ¬L1,1 ∨ ¬Breeze ∨ B1,1 is a Horn clause
 but B1,1 ∨ P1,2 ∨ P2,1 is not
 why only 1 positive literal?

Three reasons

1. A Horn clause can be written as an implication

 whose premise is a conjunction of positive literals
 and whose conclusion is a single positive literal
 e.g., ¬L1,1 ∨ ¬Breeze ∨ B1,1 is (L1,1 ∧ Breeze) ⇒ B1,1
 a Horn clause with no positive literals, e.g. ¬A ∨ ¬B, can be written as (A ∧ B) ⇒ False

2. Inference with Horn clauses

 can be done with forward and backward chaining
 both of them are easy and natural algorithms

3. Reasoning with Horn clauses

 works in time linear in the size of the KB
 cheap for many propositional KBs in practice

Forward chaining

 produces new information
 based on the set of known facts and clauses
 the new information is then added to the KB
 and may produce yet more information
 e.g., given (L1,1 ∧ Breeze) ⇒ B1,1 and the facts L1,1 and Breeze
 then? B1,1 is added

When we want to prove q (find q)

 using forward chaining
 producing new facts continues until
 the query q is added, or
 no new facts can be generated
 this process runs in linear time

This kind of inference

 is called data-driven
 inference starts with the known data
 i.e., the clauses (rules) are driven by the data
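A sketch of forward chaining over Horn clauses, keeping a count of unproved premises per rule (in the spirit of the standard PL-FC-ENTAILS algorithm; the rule encoding is my own):

from collections import deque

def forward_chain(rules, facts, query):
    # rules: list of (premises, conclusion); premises is a set of symbols.
    # Each symbol is processed at most once, so run time is linear in KB size.
    count = {i: len(prem) for i, (prem, _) in enumerate(rules)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(rules):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:        # all premises proved: fire the rule
                    agenda.append(concl)
    return False

# (L1,1 ∧ Breeze) ⇒ B1,1, with known facts L1,1 and Breeze:
rules = [({"L11", "Breeze"}, "B11")]
print(forward_chain(rules, ["L11", "Breeze"], "B11"))   # True: B11 is derived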

AND-OR graph

 conjunction – multiple links joined by an arc
 disjunction – multiple links without an arc

Backward chaining

 the opposite of forward chaining
 a query q is asked
 if it is known already, we are finished
 otherwise, the algorithm finds all implications that conclude q (the clauses with q as head)
 and tries to prove all the premises of those matched implications
 every premise is then another query q'
 what is that? Prolog-style matching (unification)
 it also runs in linear time, often less
 because only relevant clauses are matched and used
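A matching sketch of backward chaining over the same rule encoding; it is goal-directed, touching only clauses whose head is the current query:

def backward_chain(rules, facts, query, stack=frozenset()):
    # 'stack' holds the ancestor goals, guarding against circular rules.
    if query in facts:                    # already known: finished
        return True
    if query in stack:                    # circular reasoning: give up here
        return False
    for prem, concl in rules:
        if concl == query:                # an implication with query as head
            # every premise becomes another query q'
            if all(backward_chain(rules, facts, p, stack | {query})
                   for p in prem):
                return True
    return False

rules = [({"L11", "Breeze"}, "B11")]
print(backward_chain(rules, {"L11", "Breeze"}, "B11"))   # True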

Agents based on Propositional Logic

Finding pits and wumpuses, for every [x,y]

 rule for breeze:

Bx,y ⇔ (Px,y+1 ∨ Px,y−1 ∨ Px+1,y ∨ Px−1,y)

 rule for stench:

Sx,y ⇔ (Wx,y+1 ∨ Wx,y−1 ∨ Wx+1,y ∨ Wx−1,y)

 rules for the wumpus
 at least one wumpus: W1,1 ∨ W1,2 ∨ … ∨ W4,4
 and at most one wumpus
 for any two squares, at least one of them must be wumpus-free
 with n squares, how many such sentences? n(n−1)/2, e.g., ¬W1,1 ∨ ¬W1,2
 for a 4x4 world, there are 105 sentences containing 64 distinct symbols (each square has S, B, W, P, …)

Keeping track of location & orientation

For every square, the player

 may face 4 different orientations
 Up, Down, Left, Right
 in order to let the player move forward, we need rules such as

Lx,y ∧ FacingRight ∧ Forward ⇒ Lx+1,y

 hence the number of rules is multiplied by 4
 this greatly affects efficiency
 because there are too many rules!!

It still works in a small domain (4x4)

 but how about 10x10? or even 100x100?

The main problem

 there are too many distinct propositions to handle
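A quick sketch of the blow-up (the rule-string format is my own illustration): generating one Forward rule per square and orientation already gives 4n² sentences for an n x n grid, before pits, breezes, or wumpus rules are even counted:

def forward_rules(n):
    # one rule L(x,y) ∧ Facing? ∧ Forward ⇒ L(x',y') per square and orientation
    deltas = {"FacingRight": (1, 0), "FacingLeft": (-1, 0),
              "FacingUp": (0, 1), "FacingDown": (0, -1)}
    rules = []
    for x in range(1, n + 1):
        for y in range(1, n + 1):
            for facing, (dx, dy) in deltas.items():
                rules.append(f"L{x},{y} & {facing} & Forward => L{x+dx},{y+dy}")
    return rules

print(len(forward_rules(4)), len(forward_rules(100)))   # 64 vs 40000 rules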

The weakness of Propositional Logic

 lack of expressiveness
 every similar proposition must be listed out explicitly!!
 we need a more powerful device
 First-order logic – discussed in Chapter 8
