

An Introduction to

Artificial Intelligence

Lecture 4a: Informed Search and Exploration

Ramin Halavati (halavati@ce.sharif.edu)

In which we see how information about the state space can prevent algorithms from blundering about in the dark.

Outline

• Best-first search

• Greedy best-first search

• A* search

• Heuristics

• Local search algorithms

• Hill-climbing search

• Simulated annealing search

• Local beam search

• Genetic algorithms

UNINFORMED?

• Uninformed:

– Searches the state graph/tree using only the path cost and the goal test.

INFORMED?

• Informed:

– Uses extra knowledge about states, such as an estimate of the distance to the goal.

– Best First Search

• Really "seemingly-best-first": nodes are ranked by an estimate, not by the true cost.

• Heuristic

– h(n): estimated cost of the cheapest path from n to goal. h(goal) = 0.

– The estimate comes with no guarantee, but a good one works well in practice.

Greedy Best First Search

• Compute the estimated distance to the goal for each frontier node.

• Expand the node with the smallest estimate: the evaluation function is f(n) = h(n).
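As an illustration, here is a minimal Python sketch of greedy best-first search. The graph and h dictionaries are hypothetical stand-ins for the state space and the heuristic table, not part of the lecture; the frontier is ordered purely by h(n):

import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the frontier node with the smallest heuristic estimate h(n).
    graph: dict mapping a state to a list of (neighbor, step_cost) pairs.
    h:     dict mapping a state to its estimated distance to the goal."""
    frontier = [(h[start], start, [start])]       # priority queue keyed on h
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)                        # without this set, tree search can loop
        for neighbor, _cost in graph.get(state, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h[neighbor], neighbor, path + [neighbor]))
    return None                                   # goal unreachable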

Greedy Best First Search Example

• Heuristic: straight-line distance, hSLD(n).

Greedy Best First Search Example

Properties of Greedy Best First Search

• Complete?

– No, it can get stuck in loops.

• Time?

– O(b^m), but a good heuristic can give dramatic improvement.

• Space?

– O(b^m), keeps all nodes in memory.

A* search

• Idea: avoid expanding paths that are already expensive

• Evaluation function f(n) = g(n) + h(n)

– g(n) = cost so far to reach n

– h(n) = estimated cost from n to goal

– f(n) = estimated total cost of path through n to goal
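The same frontier machinery gives A* once the priority is f(n) = g(n) + h(n). A sketch under the same assumptions as the greedy example, tried on a small fragment of the Romania map with the textbook's straight-line distances to Bucharest:

import heapq

def a_star(graph, h, start, goal):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n).
    h must be an admissible estimate of the remaining cost."""
    frontier = [(h[start], 0, start, [start])]    # (f, g, state, path)
    best_g = {start: 0}                           # cheapest g found so far per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for neighbor, cost in graph.get(state, []):
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):   # found a cheaper path
                best_g[neighbor] = g2
                heapq.heappush(frontier,
                               (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float("inf")

romania = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
}
h_sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}
path, cost = a_star(romania, h_sld, "Arad", "Bucharest")   # cost 418 via Rimnicu/Pitesti

On this data greedy best-first follows hSLD to the Fagaras route (cost 450), while A* finds the cheaper 418 path: the g(n) term is what stops already-expensive paths from looking attractive.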

A* search example

A* vs Greedy

Admissible Heuristics

• h(n) is admissible if:

– for every node n, h(n) ≤ h*(n), where h*(n) is the true cost from n to the goal.

– It never overestimates.

– It is optimistic.

– Example: hSLD(n) (never overestimates the actual road distance).

A* is Optimal

• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.

– TREE-SEARCH: re-computes the cost of a node each time it is reached.

– GRAPH-SEARCH: stores the cost of each node the first time it is reached.

Optimality of A* (proof)

• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

• f(G2) = g(G2) since h(G2) = 0

• g(G2) > g(G) since G2 is suboptimal

• f(G) = g(G) since h(G) = 0

• Hence f(G2) > f(G).

• h(n) ≤ h*(n) since h is admissible

• g(n) + h(n) ≤ g(n) + h*(n) = f(G), since n lies on an optimal path to G

• Hence f(n) ≤ f(G) < f(G2), and A* will never select G2 for expansion.

Consistent Heuristics

• h(n) is consistent if:

– for every node n ,

– every successor n' of n generated by any action a ,

– h(n) ≤ c(n,a,n') + h(n')

• Consistency:

– Also called monotonicity.

– It is a form of the triangle inequality.

– Usually comes at no extra cost!

• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
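Consistency can be verified mechanically when the state space is explicit: check the inequality on every edge. A small sketch, reusing the hypothetical graph/h dictionaries from the A* example above:

def is_consistent(graph, h):
    """True iff h(n) <= c(n, a, n') + h(n') holds on every edge."""
    return all(h[n] <= cost + h[n2]
               for n, edges in graph.items()
               for n2, cost in edges)

is_consistent(romania, h_sld)   # True for the fragment above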

Optimality of A*

• A* expands nodes in order of increasing f value.

• It gradually adds "f-contours" of nodes.

• Contour i contains all nodes with f = f_i, where f_i < f_{i+1}.

Properties of A*

• Complete?

Yes (unless there are infinitely many nodes with f ≤ f(G) )

• Time?

Exponential

• Space?

Keeps all nodes in memory: O(b^d)

• Optimal?

Yes

How to Design Heuristics?

E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles

• h2(n) = total Manhattan distance

(i.e., the number of squares each tile is from its desired location)
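Both heuristics are a few lines of Python. A sketch assuming 3x3 states stored as 9-tuples with 0 for the blank and a conventional goal layout (both choices are assumptions, not fixed by the slides):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout; 0 marks the blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, want in zip(state, GOAL) if tile != 0 and tile != want)

def h2(state):
    """Total Manhattan distance of every tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = GOAL.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

Both are admissible: each misplaced tile needs at least one move (h1), and each tile must travel at least its Manhattan distance (h2).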

Admissible heuristics

• h1(n) = number of misplaced tiles

• h2(n) = total Manhattan distance

Effective Branching Factor

• If A* finds the answer

– by expanding N nodes,

– using heuristic h(n),

– at depth d,

– then b* is the effective branching factor, defined by:

• 1 + b* + (b*)^2 + … + (b*)^d = N + 1
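b* has no closed form, but the defining polynomial is monotone in b*, so bisection solves it numerically. A sketch (the N = 52, d = 5 example giving b* ≈ 1.92 is the textbook's):

def effective_branching_factor(N, d, tol=1e-6):
    """Solve 1 + b + b**2 + ... + b**d = N + 1 for b by bisection.
    Assumes N > d, so the solution satisfies b > 1."""
    def total(b):
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, max(2.0, float(N))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

effective_branching_factor(52, 5)   # ~1.92: 52 expansions to reach depth 5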

Dominance

• If h2(n) ≥ h1(n) for all n (both admissible),

• then h2 dominates h1.

• h2 is better for search: A* with h2 never expands more nodes than A* with h1.

• h2 is closer to the true cost h*.

• h(n) = max(h1(n), h2(n), …, hm(n)) is admissible and dominates each component (see the sketch below).

• The heuristic must also be efficient to compute.
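Taking the pointwise maximum is a one-liner; a sketch reusing the h1/h2 functions from the 8-puzzle example above:

def combine(*heuristics):
    """Pointwise max of admissible heuristics: still admissible,
    and it dominates each component."""
    return lambda state: max(h(state) for h in heuristics)

h = combine(h1, h2)   # for the 8-puzzle this equals h2, which dominates h1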

How to Generate Heuristics?

• Formal methods

– Relaxed problems

– Pattern databases

• Disjoint pattern databases

– Learning

• ABSOLVER (Prieditis, 1993):

– Found a new, better heuristic for the 8-puzzle.

– Found the first useful heuristic for Rubik's cube.

“Relaxed Problem” Heuristic

• A problem with fewer restrictions on the actions.

• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.

• 8-puzzle:

– Main Rule:

• A tile can be moved from square A to B if A is horizontally or vertically adjacent to B and B is empty.

– Relaxed Rules:

• A tile can move from square A to square B if A is adjacent to B. (gives h2)

• A tile can move from square A to square B if B is blank.

• A tile can move from square A to square B. (gives h1)

“Sub Problem” Heuristic

• The cost of solving a subproblem of the full problem (e.g., placing tiles 1–4 correctly).

• It IS admissible.

“Pattern Database” Heuristics

• Stores the exact solution cost of some subproblems, computed in advance.
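A tiny pattern database for the 8-puzzle can be built by breadth-first search backwards from the goal in an abstracted space where only some tiles are distinguished. A sketch; the pattern tiles {1, 2, 3, 4} and the goal layout are assumptions carried over from the earlier 8-puzzle code:

from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout; 0 marks the blank
PATTERN = {1, 2, 3, 4}               # tiles the database keeps track of

def abstract(state):
    """Replace tiles outside the pattern with a wildcard (-1)."""
    return tuple(t if t == 0 or t in PATTERN else -1 for t in state)

def blank_moves(i):
    """Board squares the blank at index i can slide to (3x3 board)."""
    r, c = divmod(i, 3)
    moves = []
    if r > 0: moves.append(i - 3)
    if r < 2: moves.append(i + 3)
    if c > 0: moves.append(i - 1)
    if c < 2: moves.append(i + 1)
    return moves

def build_pattern_db():
    """BFS from the abstracted goal: the depth at which an abstract state
    first appears is the exact cost of solving just the pattern tiles."""
    start = abstract(GOAL)
    db, queue = {start: 0}, deque([start])
    while queue:
        s = queue.popleft()
        b = s.index(0)
        for m in blank_moves(b):
            nxt = list(s)
            nxt[b], nxt[m] = nxt[m], nxt[b]
            nxt = tuple(nxt)
            if nxt not in db:
                db[nxt] = db[s] + 1
                queue.append(nxt)
    return db

DB = build_pattern_db()
h_pdb = lambda state: DB[abstract(state)]   # admissible lower bound (solvable states)

Counting every move, including wildcard moves, keeps a single database admissible; the disjoint variant on the next slide instead counts only moves of pattern tiles, so the values of databases over non-overlapping patterns can be added.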

“Disjoint Pattern” Databases

• Disjoint pattern databases:

– Add the values of several pattern-database heuristics; admissible when the patterns are disjoint.

• Speed-up: about 10^3 times for the 15-puzzle and about 10^6 times for the 24-puzzle.

• Separability: Rubik's cube vs. the 8-puzzle.

Learning Heuristics from Experience

• Machine Learning Techniques.

• Feature Selection

– Linear Combinations

BACK TO MAIN SEARCH METHOD

• What’s wrong with A*? It’s both Optimal and Optimally Efficient.

– MEMORY

Memory-Bounded Heuristic Search

• Iterative Deepening A* (IDA*)

– Similar to iterative deepening depth-first search, but the cutoff is the f-cost rather than the depth.

– Memory: O(b·d)
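A sketch of IDA* under the same graph/h dictionary assumptions as the earlier A* example: an ordinary depth-first search, except the cutoff is f = g + h, and each iteration raises the bound to the smallest f-value that exceeded it:

def ida_star(graph, h, start, goal):
    """Iterative deepening on f-cost; memory is linear in the path length."""
    def dfs(state, g, bound, path):
        f = g + h[state]
        if f > bound:
            return None, f            # report the f-value that broke the bound
        if state == goal:
            return path, f
        smallest = float("inf")
        for neighbor, cost in graph.get(state, []):
            if neighbor in path:      # avoid cycles along the current path
                continue
            found, t = dfs(neighbor, g + cost, bound, path + [neighbor])
            if found is not None:
                return found, t
            smallest = min(smallest, t)
        return None, smallest
    bound = h[start]
    while True:
        found, bound = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == float("inf"):
            return None               # no path exists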

Recursive Best First Search

• Main Idea:

– Expands the best node depth-first while its f-cost stays below that of the best alternative open node, continuously backing up f-value updates.

Recursive Best First Search

Recursive Best First Search, Sample

Properties of Recursive Best First Search

• Complete?

Yes, given enough space.

• Space?

O(b·d)

• Optimal?

Yes, if admissible.

• Time?

Hard to analyze: it depends on the accuracy of h and on how often the best path changes.
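For concreteness, a compact RBFS sketch in the spirit of the textbook's pseudocode, using the same hypothetical graph/h dictionaries as the earlier examples; each recursive call explores the best child while its backed-up f-value stays below the best alternative:

import math

def rbfs_search(graph, h, start, goal):
    def rbfs(state, g, f, f_limit, path):
        if state == goal:
            return path, f
        succ = []
        for nb, cost in graph.get(state, []):
            if nb in path:                             # avoid cycles on the current path
                continue
            g2 = g + cost
            succ.append([max(g2 + h[nb], f), g2, nb])  # child inherits parent's f
        if not succ:
            return None, math.inf
        while True:
            succ.sort(key=lambda s: s[0])
            best = succ[0]
            if best[0] > f_limit:
                return None, best[0]                   # back up the best f-value
            alternative = succ[1][0] if len(succ) > 1 else math.inf
            result, best[0] = rbfs(best[2], best[1], best[0],
                                   min(f_limit, alternative), path + [best[2]])
            if result is not None:
                return result, best[0]
    return rbfs(start, 0, h[start], math.inf, [start])[0]

rbfs_search(romania, h_sld, "Arad", "Bucharest")   # same 418-cost path as A*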

Memory, more memory …

• A*: O(b^d)

• IDA*, RBFS: O(b·d)

• What about exactly 10 MB?

Memory-Bounded A*

• MA*

• Simplified Memory Bounded A* (SMA*)

– Stores as many nodes as possible, proceeding like A*.

– When memory is full, drops the worst current node (highest f) and backs its value up to its parent.

SMA* Example

SMA* Code

To be continued …
