A* Search

Problem-solving as search
History
Problem-solving as search – an early insight of AI.
Newell and Simon’s theory of human intelligence and
problem-solving.
Early examples:
• 1956: Logic Theorist (Allen Newell & Herbert Simon)
• 1958: Geometry problem solver (Herbert Gelernter)
• 1959: General Problem Solver (Herbert Simon & Allen Newell)
• 1971: STRIPS (Stanford Research Institute Problem
Solver, Richard Fikes & Nils Nilsson)
Real-World Problem-Solving as Search
Examples:
• Route/Path finding: Robots, cars, cell-phone routing, airline routing,
characters in video games, …
• Layout of circuits
• Job-shop scheduling
• Game playing (e.g., chess, go)
• Theorem proving
• Drug design
Classic AI Toy Problem: 8-puzzle
(Figure: an initial state and the goal state of the 8-puzzle: eight numbered tiles and one blank on a 3×3 board.)
Notion of “searching a state space”
Pictures from http://www.cs.uiuc.edu/class/sp06/cs440/Lectures/lec2.pp
8-puzzle search tree
(Figure: part of the 8-puzzle search tree: the initial board at the root, with child boards generated by sliding a tile into the blank.)
Pictures from http://www.cs.uiuc.edu/class/sp06/cs440/Lectures/lec2.pp
What is the size of the state space?
Size of state space for the 8-puzzle: 9!/2 = 181,440 reachable states
Size of 15-puzzle state space? 16! ≈ 2 x 10^13
Size of 24-puzzle state space? 25! ≈ 1.5 x 10^25
Can’t do exhaustive search!
Approximate number of states
Tic-Tac-Toe: 3^9
Checkers: 10^40
Rubik’s cube: 10^19
Chess: 10^120
In general, a search problem is formalized as:
• state space
• special start and goal state(s)
• operators that perform allowable transitions between
states
• cost of transitions
All these can be either deterministic or probabilistic.
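As a concrete illustration of this formalization, the four ingredients can be bundled into a small Python structure; this sketch and its field names are assumptions for illustration, not part of the original slides.

from dataclasses import dataclass
from typing import Callable, Hashable, Iterable, Tuple

State = Hashable  # any hashable representation of a state

@dataclass
class SearchProblem:
    # The four ingredients of a (deterministic) search problem.
    initial: State                                               # special start state
    is_goal: Callable[[State], bool]                             # goal test (there may be several goal states)
    successors: Callable[[State], Iterable[Tuple[str, State]]]   # operators: (action, next state) pairs
    cost: Callable[[State, str, State], float]                   # cost of each allowed transition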
State space as a tree/graph
Search as tree search
Solutions: “winning” state, or path to winning state
How to solve a problem by searching
1. Define the search space
   • Initial, goal, and intermediate states
2. Define operators for expanding a given state into its possible successor states
   • This defines the search tree
3. Apply a search algorithm (tree search) to find a path from the initial state to a goal state, while avoiding (if possible) repeating a state during the search (a generic skeleton is sketched below).
4. The solution is
   • a path from the initial state to a goal state (e.g., traveling salesman problem)
   • or simply a goal state, which might not be initially known (e.g., drug design)
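The generic skeleton referenced in step 3 might look like the following Python sketch, assuming the SearchProblem container above; the choice of frontier data structure determines which strategy you get.

from collections import deque

def tree_search(problem):
    # Generic tree search: pop a node, test it, expand it.
    frontier = deque([(problem.initial, [problem.initial])])   # FIFO frontier gives breadth-first
    visited = {problem.initial}                                # avoid repeating states if possible
    while frontier:
        state, path = frontier.popleft()                       # use .pop() instead for depth-first
        if problem.is_goal(state):
            return path                                        # solution: path from initial to goal state
        for action, succ in problem.successors(state):
            if succ not in visited:
                visited.add(succ)
                frontier.append((succ, path + [succ]))
    return None                                                # no solution in the explored space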
Missionaries and cannibals
Three missionaries and three cannibals are on the left bank of a river.
There is one canoe, which can hold one or two people.
Find a way to get everyone to the right bank, without ever leaving a group of missionaries in one place outnumbered by cannibals in that place.
How to set this up as a search problem?
From http://www.cs.uiuc.edu/class/sp06/cs440/Lectures/lec2.pp
Missionaries and cannibals
State space:
Size?
Initial state:
Goal state:
Operators:
Cost of transitions:
Search tree:
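One possible way to fill in this template, shown below as an illustrative Python sketch (the encoding is an assumption, not the only one): a state is (missionaries on the left bank, cannibals on the left bank, bank of the canoe), so the raw state space has at most 4 x 4 x 2 = 32 states; the initial state is (3, 3, L), the goal is (0, 0, R), every crossing costs 1, and the operators move one or two people across while keeping both banks safe.

START, GOAL = (3, 3, "L"), (0, 0, "R")   # (missionaries left, cannibals left, canoe bank)

def safe(m_left, c_left):
    # Missionaries are never outnumbered on either bank (a bank with no missionaries is fine).
    m_right, c_right = 3 - m_left, 3 - c_left
    return (m_left == 0 or m_left >= c_left) and (m_right == 0 or m_right >= c_right)

def successors(state):
    m, c, bank = state
    sign = -1 if bank == "L" else 1                            # a crossing removes people from the canoe's bank
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:    # the canoe holds one or two people
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            yield (nm, nc, "R" if bank == "L" else "L")        # each crossing has cost 1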
Drug design
Example: Search for a sequence of up to N amino acids that forms a protein shape matching a particular receptor on a pathogen.
(Note: there are 20 amino acids to choose from at each locus in the string.)
Drug design
State space:
Size?
Initial state:
Goal state:
Operators:
Cost of transitions:
Search tree:
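One illustrative way to fill in this template (an assumed encoding, not taken from the slides): a state is the partial amino-acid string built so far, the initial state is the empty string, operators append one of the 20 amino acids, and the goal test is a black-box check of whether the folded peptide matches the target receptor. The sketch below also counts the state space size, 20 + 20^2 + ... + 20^N.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"      # one-letter codes for the 20 standard amino acids

def successors(sequence, max_len):
    # Operators: extend the candidate peptide by one residue, up to length max_len.
    if len(sequence) < max_len:
        for aa in AMINO_ACIDS:
            yield sequence + aa

def state_space_size(max_len):
    # Number of non-empty sequences of length up to max_len.
    return sum(20 ** k for k in range(1, max_len + 1))

# e.g., state_space_size(8) is roughly 2.7 x 10^10 candidate sequences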
Search Strategies
A strategy is defined by picking the order of node expansion.
Strategies are evaluated along the following dimensions:
1. completeness – does it always find a solution if one exists?
2. optimality – does it always find an optimal (least-cost or
highest value) solution?
3. time complexity – number of nodes generated/expanded
4. space complexity – maximum number of nodes in memory
Time and space complexity are often measured in terms of:
b – maximum branching factor of the search tree
d – depth of the least-cost solution
m – maximum depth of the state space (may be infinite)
Adapted from http://www.cs.uiuc.edu/class/sp06/cs440/Lectures/lec2.pp
Search methods
• Uninformed search:
  1. Breadth-first
  2. Depth-first
  3. Depth-limited
  4. Iterative deepening depth-first
  5. Bidirectional
• Informed (or heuristic) search (deterministic or stochastic):
  1. Greedy best-first
  2. A* (and many variations)
  3. Hill climbing
  4. Simulated annealing
  5. Genetic algorithm
  6. Tabu search
  7. Ant colony optimization
• Adversarial search:
  1. Minimax with alpha-beta pruning
Uninformed strategies
Breadth-first: Expand all nodes at depth d before
proceeding to depth d+1
Depth-first: Expand deepest unexpanded node
Depth-limited: Depth-first search with a cutoff at a
specified depth limit
Iterative deepening: Repeated depth-limited searches, starting with a limit of zero and increasing the limit by one each time
http://www.cse.unl.edu/~choueiry/S03-476-876/searchapplet/index.html
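A minimal Python sketch of three of these strategies follows; the function and parameter names are illustrative assumptions, with successors(state) taken to return the states reachable in one step.

from collections import deque

def breadth_first(start, is_goal, successors):
    # Expand all nodes at depth d before any node at depth d+1 (FIFO frontier).
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for succ in successors(path[-1]):
            if succ not in seen:
                seen.add(succ)
                frontier.append(path + [succ])
    return None

def depth_limited(state, is_goal, successors, limit):
    # Depth-first search that stops expanding below the given depth limit.
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for succ in successors(state):
        sub = depth_limited(succ, is_goal, successors, limit - 1)
        if sub is not None:
            return [state] + sub
    return None

def iterative_deepening(start, is_goal, successors, max_limit=50):
    # Repeated depth-limited searches with limits 0, 1, 2, ...
    for limit in range(max_limit + 1):
        result = depth_limited(start, is_goal, successors, limit)
        if result is not None:
            return result
    return None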
Uninformed Search Properties
Breadth-first: Complete? Optimal? Time? Space?
Depth-first: Complete? Optimal? Time? Space?
Depth-limited: Complete? Optimal? Time? Space?
Iterative deepening: Complete? Optimal? Time? Space?
Informed (heuristic) Search
• What is a “heuristic”?
• Examples:
• 8 puzzle
• Missionaries and Cannibals
• Tic Tac Toe
• Traveling Salesman Problem
• Drug design
Best-first greedy search
1. current state = initial state
2. Expand current state
3. Evaluate offspring states s with heuristic h(s), which estimates the cost of the path from s to the goal state
4. current state = argmin h(s) over s ∈ offspring(current state)
5. If current state ≠ goal state, go to step 2.
http://alumni.cs.ucr.edu/~tmatinde/projects/cs455/TSP/heuristic/Travellinganimation.htm
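The five steps above translate directly into a short Python sketch (names are illustrative assumptions). Note that this version keeps only the current state, so it can wander into dead ends or cycles, which is relevant to the completeness question on the next slide.

def greedy_best_first(start, is_goal, offspring, h):
    # Repeatedly jump to the offspring state with the lowest heuristic value h.
    current, path = start, [start]
    while not is_goal(current):
        children = list(offspring(current))   # step 2: expand current state
        if not children:
            return None                       # dead end: no successors to choose from
        current = min(children, key=h)        # steps 3 and 4: argmin of h over the offspring
        path.append(current)                  # step 5: repeat until the goal is reached
    return path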
Search Terminology
Completeness
• solution will be found, if it exists
Optimality
• least cost solution will be found
Admissible heuristic h
∀s, h never overestimates the true cost from state s to the goal state
Best first greedy search: Complete? Optimal?
8-puzzle heuristics: Hamming distance, Manhattan distance: Admissible?
Example of a non-admissible heuristic for the 8-puzzle?
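For the 8-puzzle heuristics mentioned above, here is a small Python sketch, assuming a board is represented as a 9-tuple read row by row with 0 for the blank (an assumed representation). Both heuristics are admissible: every misplaced tile needs at least one move (Hamming) and at least as many moves as its Manhattan distance.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout; 0 is the blank

def hamming(board, goal=GOAL):
    # Number of tiles not in their goal position (the blank is not counted).
    return sum(1 for i, tile in enumerate(board) if tile != 0 and tile != goal[i])

def manhattan(board, goal=GOAL):
    # Sum over tiles of row distance plus column distance to the goal position.
    total = 0
    for i, tile in enumerate(board):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total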
A* Search
Uses the evaluation function f(n) = g(n) + h(n), where n is a node.
1. g is a cost function
   • Total cost incurred so far from the initial state to node n
2. h is a heuristic
   • Estimates the cost remaining from node n to the goal state
Best-first greedy search is A* with g = 0.
h1(start state) = ?   (h1 = number of misplaced tiles, i.e., Hamming distance)
h2(start state) = ?   (h2 = total Manhattan distance of the tiles)
A* Pseudocode
create the open list of nodes, initially containing only our starting node
create the closed list of nodes, initially empty
while (we have not reached our goal) {
  consider the best node in the open list (the node with the lowest f value)
  if (this node is the goal) { then we're done }
  else {
    move the current node to the closed list and consider all of its successors
    for (each successor) {
      if (this successor is in the closed list and our current g value is lower) {
        update the successor with the new, lower g value
        change the successor’s parent to our current node }
      else if (this successor is in the open list and our current g value is lower) {
        update the successor with the new, lower g value
        change the successor’s parent to our current node }
      else if (this successor is not in either the open or closed list) {
        add the successor to the open list and set its g value } } } }
Adapted from: http://en.wikibooks.org/wiki/Artificial_Intelligence/Search/Heuristic_search/Astar_Search#Pseudo-code_A.2A
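For concreteness, here is a minimal runnable Python sketch of the same algorithm. It uses a priority queue (heapq) for the open list instead of scanning for the lowest f value, and a g-value table instead of explicitly rewriting open/closed entries; the grid example, function names, and heuristic are assumptions for illustration.

import heapq

def a_star(start, goal, neighbors, h):
    # neighbors(state) yields (successor, step_cost); h is an admissible heuristic.
    open_heap = [(h(start), start)]            # entries are (f, state)
    g = {start: 0}                             # best known cost-so-far for each state
    parent = {start: None}
    closed = set()
    while open_heap:
        f, current = heapq.heappop(open_heap)  # best node: lowest f = g + h
        if current == goal:
            path = []                          # reconstruct the path by following parents
            while current is not None:
                path.append(current)
                current = parent[current]
            return list(reversed(path))
        if current in closed:
            continue                           # stale queue entry with an outdated f value
        closed.add(current)
        for succ, step_cost in neighbors(current):
            tentative_g = g[current] + step_cost
            if tentative_g < g.get(succ, float("inf")):
                g[succ] = tentative_g          # found a cheaper route to succ
                parent[succ] = current
                heapq.heappush(open_heap, (tentative_g + h(succ), succ))
    return None

# Illustrative use: 4x4 grid path finding with a Manhattan-distance heuristic.
goal = (3, 3)
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx <= 3 and 0 <= ny <= 3:
            yield (nx, ny), 1
print(a_star((0, 0), goal, grid_neighbors, lambda p: abs(p[0] - 3) + abs(p[1] - 3)))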
A* search is complete, and is optimal if h is
admissible
Proof of Optimality of A*
Suppose a suboptimal goal G2 has been generated and is in the OPEN list.
Let n be an unexpanded node on a shortest path to an optimal goal G1.
(Figure: search tree with the start node, the unexpanded node n on the optimal path to G1, and the suboptimal goal G2.)
f(G2) = g(G2)   since h(G2) = 0
       > g(G1)   since G2 is suboptimal
       ≥ f(n)    since h is admissible (n lies on an optimal path to G1, so g(G1) ≥ g(n) + h(n))
Since f(G2) > f(n), A* will never select G2 for expansion.
Variations of A*
• IDA* (iterative deepening A*)
• ARA* (anytime repairing A*)
• D* (dynamic A*)
(Figures from http://aima.eecs.berkeley.edu/slides-pdf/)
Example of Simulated Annealing
NetLogo simulation
Simulated Annealing is complete (if you run it for a long
enough time!)
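As a rough sketch of the idea behind the demo (the cooling schedule, parameter values, and names are assumptions): always accept moves that lower the cost, accept worse moves with probability exp(-delta / T), and lower the temperature T over time.

import math, random

def simulated_annealing(start, cost, random_neighbor, t0=1.0, cooling=0.995, steps=10000):
    # Minimize cost(state) by a random walk that occasionally accepts uphill moves.
    current, best, t = start, start, t0
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate                  # accept the move
            if cost(current) < cost(best):
                best = current
        t *= cooling                             # geometric cooling schedule (assumed)
    return best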
Genetic Algorithms
Similar to hill-climbing, but with a population of “initial
states”, and stochastic mutation and crossover
operations for search.
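A toy Python sketch of this population-based idea (bit-string individuals, tournament selection, one-point crossover, and bit-flip mutation are all illustrative assumptions):

import random

def genetic_algorithm(fitness, length, pop_size=50, generations=200, mutation_rate=0.05):
    # Maximize fitness over bit strings via selection, crossover, and stochastic mutation.
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def pick():
        # Tournament selection: the fitter of two random individuals becomes a parent.
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            mom, dad = pick(), pick()
            cut = random.randrange(1, length)             # one-point crossover
            child = mom[:cut] + dad[cut:]
            child = [1 - gene if random.random() < mutation_rate else gene
                     for gene in child]                   # bit-flip mutation
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

# e.g., genetic_algorithm(fitness=sum, length=20) evolves mostly-ones strings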