CSC 480: Artificial Intelligence

Chapter Overview: Search
 Motivation
 Objectives
 Search as Problem-Solving
   problem formulation
   problem types
 Uninformed Search
   breadth-first
   depth-first
   uniform-cost search
   depth-limited search
   iterative deepening
   bi-directional search
 Informed Search
   best-first search
   search with heuristics
   memory-bounded search
   iterative improvement search
 Constraint Satisfaction
 Important Concepts and Terms
 Chapter Summary

© 2000-2008 Franz Kurfess
Examples
 getting from home to Cal Poly
   start: home on Clearview Lane
   goal: Cal Poly CSC Dept.
   operators: move one block, turn
 loading a moving truck
   start: apartment full of boxes and furniture
   goal: empty apartment, all boxes and furniture in the truck
   operators: select item, carry item from apartment to truck, load item
 getting settled
   start: items randomly distributed over the place
   goal: satisfactory arrangement of items
   operators: select item, move item
Motivation
 search strategies are important methods for many approaches to problem-solving
 the use of search requires an abstract formulation of the problem and the available steps to construct solutions
 search algorithms are the basis for many optimization and planning methods
Objectives
 formulate appropriate problems as search tasks
   states, initial state, goal state, successor functions (operators), cost
 know the fundamental search strategies and algorithms
   uninformed search: breadth-first, depth-first, uniform-cost, iterative deepening, bi-directional
   informed search: best-first (greedy, A*), heuristics, memory-bounded, iterative improvement
 evaluate the suitability of a search strategy for a problem
   completeness, time & space complexity, optimality
Problem-Solving Agents
 agents whose task it is to solve a particular problem
 goal formulation
   what is the goal state?
   what are important characteristics of the goal state?
   how does the agent know that it has reached the goal?
   are there several possible goal states?
     are they equal, or are some more preferable?
 problem formulation
   what are the possible states of the world relevant for solving the problem?
   what information is accessible to the agent?
   how can the agent progress from state to state?
Problem Formulation
 formal specification for the task of the agent
   goal specification
   states of the world
   actions of the agent
 identify the type of the problem
 what knowledge does the agent have about the state of the world and the consequences of its own actions?
 does the execution of the task require up-to-date information?
   sensing is necessary during the execution
Well-Defined Problems
 problems with a readily available formal specification
 initial state
   starting point from which the agent sets out
 actions (operators, successor functions)
   describe the set of possible actions
 state space
   set of all states reachable from the initial state by any sequence of actions
 path
   sequence of actions leading from one state in the state space to another
 goal test
   determines if a given state is a goal state
Well-Defined Problems (cont.)
 solution
   path from the initial state to a goal state
 search cost
   time and memory required to calculate a solution
 path cost
   determines the expenses of the agent for executing the actions in a path
   sum of the costs of the individual actions in a path
 total cost
   sum of search cost and path cost
   overall cost for finding a solution
Selecting States and Actions
 states describe distinguishable stages during the problem-solving process
   dependent on the task and domain
 actions move the agent from one state to another by applying an operator to a state
   dependent on states, capabilities of the agent, and properties of the environment
 the choice of suitable states and operators can make the difference between a problem that can or cannot be solved (in principle, or in practice)
Example: From Home to Cal Poly
 states
   locations:
     obvious: buildings that contain your home, the Cal Poly CSC dept.
     more difficult: intermediate states
       blocks, street corners, sidewalks, entryways, ...
       continuous transitions
   agent-centric states
     moving, turning, resting, ...
 operators
   depend on the choice of states
   e.g. move_one_block
 abstraction is necessary to omit irrelevant details
   valid: can be expanded into a detailed version
   useful: easier to solve than the detailed version
Example Problems
 toy problems
   vacuum world
   8-puzzle
   8-queens
   cryptarithmetic
   vacuum agent
   missionaries and cannibals
 real-world problems
   route finding
   touring problems
   traveling salesperson
   VLSI layout
   robot navigation
   assembly sequencing
   Web search
Simple Vacuum World
 states
   two locations
   dirty, clean
 initial state
   any legitimate state
 successor function (operators)
   left, right, suck
 goal test
   all squares clean
 path cost
   one unit per action
 properties: discrete locations, discrete dirt (binary), deterministic
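The formulation above can be sketched in Python under an assumed encoding (my own, not from the slides): a state is `(agent location 0 or 1, (dirty_left, dirty_right))`, and a small breadth-first search finds an action sequence that cleans both squares.

```python
from collections import deque

# assumed encoding: state = (agent location 0 or 1, (dirty_left, dirty_right))
def vacuum_successors(state):
    loc, dirt = state
    cleaned = tuple(False if i == loc else d for i, d in enumerate(dirt))
    return [('left', (0, dirt)),        # move left
            ('right', (1, dirt)),       # move right
            ('suck', (loc, cleaned))]   # clean the current square

def vacuum_goal(state):
    return not any(state[1])            # goal test: all squares clean

def vacuum_solve(start):
    """Breadth-first search over the 8 reachable states."""
    fringe = deque([(start, [])])
    seen = {start}
    while fringe:
        state, actions = fringe.popleft()
        if vacuum_goal(state):
            return actions
        for action, nxt in vacuum_successors(state):
            if nxt not in seen:
                seen.add(nxt)
                fringe.append((nxt, actions + [action]))

print(vacuum_solve((0, (True, True))))  # → ['suck', 'right', 'suck']
```

With two locations and binary dirt there are only 2 * 2^2 = 8 states, so the whole space is enumerated almost immediately.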
More Complex Vacuum Agent
 states
   configuration of the room
     dimensions, obstacles, dirtiness
 initial state
   locations of agent, dirt
 successor function (operators)
   move, turn, suck
 goal test
   all squares clean
 path cost
   one unit per action
 properties: discrete locations, discrete dirt, deterministic
   d * 2^n states for dirt degree d, n locations
8-Puzzle
 states
   location of tiles (including the blank tile)
 initial state
   any legitimate configuration
 successor function (operators)
   move tile
   alternatively: move blank
 goal test
   a particular legitimate configuration of tiles
 path cost
   one unit per move
 properties: abstraction leads to discrete configurations, discrete moves, deterministic
   9!/2 = 181,440 reachable states
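The "move blank" formulation is the easier one to code. A minimal sketch (an illustration, not the slides' code): a state is a tuple of 9 values in row-major order with 0 for the blank, and successors swap the blank with an adjacent tile.

```python
def puzzle_successors(state):
    """Successors of a 3x3 8-puzzle state (tuple of 9, 0 = blank): move the blank."""
    b = state.index(0)
    row, col = divmod(b, 3)
    succ = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            t = r * 3 + c
            s = list(state)
            s[b], s[t] = s[t], s[b]     # swap blank with the neighboring tile
            succ.append(tuple(s))
    return succ

center = (1, 2, 3, 4, 0, 5, 6, 7, 8)   # blank in the center: 4 moves
corner = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # blank in a corner: 2 moves
print(len(puzzle_successors(center)), len(puzzle_successors(corner)))  # → 4 2
```

The branching factor thus varies between 2 (corner), 3 (edge), and 4 (center), and only half of the 9! permutations are reachable, giving the 9!/2 = 181,440 states quoted above.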
8-Queens
 incremental formulation
   states
     arrangement of up to 8 queens on the board
   initial state
     empty board
   successor function (operators)
     add a queen to any square
   goal test
     all 8 queens on board
     no queen attacked
   path cost
     irrelevant (all solutions equally valid)
   properties: 3*10^14 possible sequences; can be reduced to 2,057
 complete-state formulation
   states
     arrangement of 8 queens on the board
   initial state
     all queens on board
   successor function (operators)
     move a queen to a different square
   goal test
     no queen attacked
   path cost
     irrelevant (all solutions equally valid)
   properties: good strategies can reduce the number of possible sequences considerably
8-Queens Refined
 simple solutions may lead to very high search costs
   64 fields, 8 queens ==> 64^8 possible sequences
 more refined solutions trim the search space, but may introduce other constraints
   place queens on “unattacked” places
     much more efficient
     may not lead to a solution depending on the initial moves
   move an attacked queen to another square in the same column, if possible to an “unattacked” square
     much more efficient
Cryptarithmetic
 states
   puzzle with letters and digits
 initial state
   only letters present
 successor function (operators)
   replace all occurrences of a letter by a digit not used yet
 goal test
   only digits in the puzzle
   calculation is correct
 path cost
   irrelevant (all solutions are equally valid)
Missionaries and Cannibals
 states
   number of missionaries, cannibals, and boats on the banks of a river
   illegal states
     missionaries are outnumbered by cannibals on either bank
 initial state
   all missionaries, cannibals, and boats are on one bank
 successor function (operators)
   transport a set of up to two participants to the other bank
     {1 missionary} | {1 cannibal} | {2 missionaries} | {2 cannibals} | {1 missionary and 1 cannibal}
 goal test
   nobody left on the initial river bank
 path cost
   number of crossings
 also known as “goats and cabbage”, “wolves and sheep”, etc.
Route Finding
 states
   locations
 initial state
   starting point
 successor function (operators)
   move from one location to another
 goal test
   arrive at a certain location
 path cost
   may be quite complex
     money, time, travel comfort, scenery, ...
Traveling Salesperson
 states
   locations / cities
   illegal states
     each city may be visited only once
   visited cities must be kept as state information
 initial state
   starting point
   no cities visited
 successor function (operators)
   move from one location to another
 goal test
   all locations visited
   agent at the initial location
 path cost
   distance between locations
VLSI Layout
 states
   positions of components, wires on a chip
 initial state
   incremental: no components placed
   complete-state: all components placed (e.g. randomly, manually)
 successor function (operators)
   incremental: place components, route wire
   complete-state: move component, move wire
 goal test
   all components placed
   components connected as specified
 path cost
   may be complex
     distance, capacity, number of connections per component
Robot Navigation
 states
   locations
   position of actuators
 initial state
   start position (dependent on the task)
 successor function (operators)
   movement, actions of actuators
 goal test
   task-dependent
 path cost
   may be very complex
     distance, energy consumption
Assembly Sequencing
 states
   location of components
 initial state
   no components assembled
 successor function (operators)
   place component
 goal test
   system fully assembled
 path cost
   number of moves
Searching for Solutions
 traversal of the search space
   from the initial state to a goal state
   legal sequence of actions as defined by the successor function (operators)
 general procedure
   check for goal state
   expand the current state
     determine the set of reachable states
     return “failure” if the set is empty
   select one from the set of reachable states
   move to the selected state
 a search tree is generated
   nodes are added as more states are visited
Search Terminology
 search tree
   generated as the search space is traversed
     the search space itself is not necessarily a tree; frequently it is a graph
     the tree specifies possible paths through the search space
   expansion of nodes
     as states are explored, the corresponding nodes are expanded by applying the successor function
       this generates a new set of (child) nodes
     the fringe (frontier) is the set of nodes not yet visited
       newly generated nodes are added to the fringe
 search strategy
   determines the selection of the next node to be expanded
   can be achieved by ordering the nodes in the fringe
     e.g. queue (FIFO), stack (LIFO), “best” node w.r.t. some measure (cost)
Example: Graph Search
 [figure: weighted graph with start node S, goal node G, and intermediate nodes A, B, C, D, E; each node is labeled with a value (e.g. S 3, G 0), each link with a cost]
 the graph describes the search (state) space
   each node in the graph represents one state in the search space
     e.g. a city to be visited in a routing or touring problem
 this graph has additional information
   names and properties for the states (e.g. S, 3)
   links between nodes, specified by the successor function
     properties for links (distance, cost, name, ...)
Graph and Tree
 [figure: the graph from the previous slide and the search tree obtained by fully expanding it from S; nodes such as C, D, E, and G appear repeatedly along different branches]
 the tree is generated by traversing the graph
   the same node in the graph may appear repeatedly in the tree
   the arrangement of the tree depends on the traversal strategy (search method)
 the initial state becomes the root node of the tree
 in the fully expanded tree, the goal states are the leaf nodes
 cycles in graphs may result in infinite branches
General Tree Search Algorithm

function TREE-SEARCH(problem, fringe) returns solution
  fringe := INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node := REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds
      then return SOLUTION(node)
    fringe := INSERT-ALL(EXPAND(node, problem), fringe)

 generate the root node from the initial state of the problem
 repeat
   return failure if there are no more nodes in the fringe
   examine the current node; if it’s a goal, return the solution
   expand the current node, and add the new nodes to the fringe
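A minimal Python sketch of TREE-SEARCH (my own illustration; the node here is simply the path of states, and the fringe discipline determines the strategy):

```python
from collections import deque

def tree_search(initial_state, goal_test, successors, make_fringe):
    """Generic TREE-SEARCH: the fringe data structure determines the strategy."""
    fringe = make_fringe()
    fringe.append([initial_state])          # a "node" is the path of states to it
    while fringe:
        path = fringe.popleft()             # REMOVE-FIRST
        state = path[-1]
        if goal_test(state):                # GOAL-TEST
            return path                     # SOLUTION
        for child in successors(state):     # EXPAND + INSERT-ALL
            fringe.append(path + [child])
    return None                             # failure

# tiny example graph (hypothetical)
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': [], 'G': []}
print(tree_search('S', lambda s: s == 'G', lambda s: graph[s], deque))
# → ['S', 'A', 'G']
```

With a FIFO `deque` as above this behaves breadth-first; a LIFO discipline (popping from the same end nodes are pushed to) would make it depth-first.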
Evaluation Criteria
 completeness
   if there is a solution, will it be found?
 time complexity
   how long does it take to find the solution?
   does not include the time to perform actions
 space complexity
   memory required for the search
 optimality
   will the best solution be found?

 main factors for complexity considerations:
 branching factor b, depth d of the shallowest goal node, maximum path length m
Search Cost and Path Cost
 the search cost indicates how expensive it is to generate a solution
   time complexity (e.g. number of nodes generated) is usually the main factor
   sometimes space complexity (memory usage) is considered as well
 path cost indicates how expensive it is to execute the solution found in the search
   distinct from the search cost, but often related
 total cost is the sum of search and path costs
Selection of a Search Strategy
 most of the effort is often spent on the selection of an appropriate search strategy for a given problem
 uninformed search (blind search)
   number of steps, path cost unknown
   agent knows when it reaches a goal
 informed search (heuristic search)
   agent has background information about the problem
     map, costs of actions
Search Strategies
 Uninformed Search
   breadth-first
   depth-first
   uniform-cost search
   depth-limited search
   iterative deepening
   bi-directional search
   constraint satisfaction
 Informed Search
   best-first search
   search with heuristics
   memory-bounded search
   iterative improvement search
Breadth-First
 all the nodes reachable from the current node are explored first
   achieved by the TREE-SEARCH method by appending newly generated nodes at the end of the search queue

function BREADTH-FIRST-SEARCH(problem) returns solution
  return TREE-SEARCH(problem, FIFO-QUEUE())

 Completeness: yes (for finite b)
 Time Complexity: b^(d+1)
 Space Complexity: b^(d+1)
 Optimality: yes (if all step costs are identical)
 (b: branching factor, d: depth of the shallowest goal node)
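A self-contained breadth-first sketch (an illustration, not the slides' code). The example tree mirrors the snapshot numbering: children of node n are 2n and 2n+1, with leaves below node 8.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """BFS: FIFO fringe, so the shallowest unexpanded node is always next."""
    fringe = deque([[start]])               # fringe holds paths; FIFO queue
    while fringe:
        path = fringe.popleft()
        if goal_test(path[-1]):
            return path
        for child in successors(path[-1]):
            fringe.append(path + [child])   # append at the end of the queue
    return None

# complete binary tree numbered like the snapshots: children of n are 2n, 2n+1
succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
print(breadth_first_search(1, lambda n: n == 13, succ))  # → [1, 3, 6, 13]
```

Because nodes are expanded level by level, node 13 is only tested after all of 1..12, matching the snapshot sequence below.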
Breadth-First Snapshots 1-24
 [figures: breadth-first traversal of a complete binary tree with nodes numbered 1-31 in level order; legend: Initial, Visited, Fringe, Current, Visible, Goal]
 Snapshot 1: Fringe: [] + [2,3]
 Snapshot 2: Fringe: [3] + [4,5]
 Snapshot 3: Fringe: [4,5] + [6,7]
 Snapshot 4: Fringe: [5,6,7] + [8,9]
 Snapshot 5: Fringe: [6,7,8,9] + [10,11]
 Snapshot 6: Fringe: [7,8,9,10,11] + [12,13]
 Snapshot 7: Fringe: [8,9,10,11,12,13] + [14,15]
 Snapshot 8: Fringe: [9,10,11,12,13,14,15] + [16,17]
 Snapshot 9: Fringe: [10,11,12,13,14,15,16,17] + [18,19]
 Snapshot 10: Fringe: [11,12,13,14,15,16,17,18,19] + [20,21]
 Snapshot 11: Fringe: [12,13,14,15,16,17,18,19,20,21] + [22,23]
 Snapshot 12: Fringe: [13,14,15,16,17,18,19,20,21,22,23] + [24,25]
   note: the goal node is “visible” here, but we cannot perform the goal test yet
 Snapshot 13: Fringe: [14,15,16,17,18,19,20,21,22,23,24,25] + [26,27]
 Snapshot 14: Fringe: [15,16,17,18,19,20,21,22,23,24,25,26,27] + [28,29]
 Snapshot 15: Fringe: [16,17,18,19,20,21,22,23,24,25,26,27,28,29] + [30,31]
 Snapshots 16-23: the leaf nodes 16 through 23 are expanded in turn; no new nodes are generated, so the fringe shrinks step by step from [17,...,31] to [24,...,31]
 Snapshot 24: Fringe: [25,26,27,28,29,30,31]
   note: the goal test is positive for this node, and a solution is found in 24 steps
Uniform-Cost Search
 the nodes with the lowest cost are explored first
   similar to BREADTH-FIRST, but with an evaluation of the cost for each reachable node
   g(n) = path cost(n) = sum of individual edge costs to reach the current node

function UNIFORM-COST-SEARCH(problem) returns solution
  return TREE-SEARCH(problem, COST-FN, FIFO-QUEUE())

 Completeness: yes (finite b, step costs >= e)
 Time Complexity: b^(C*/e)
 Space Complexity: b^(C*/e)
 Optimality: yes
 (b: branching factor, C*: cost of the optimal solution, e: minimum cost per action)
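A uniform-cost sketch using a priority queue ordered by g(n) (my own illustration; the graph and costs are made up):

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """UCS: always expand the fringe node with the lowest path cost g(n)."""
    fringe = [(0, [start])]                 # min-heap of (g, path)
    while fringe:
        g, path = heapq.heappop(fringe)
        if goal_test(path[-1]):             # goal test on expansion, not generation
            return g, path
        for child, step_cost in successors(path[-1]):
            heapq.heappush(fringe, (g + step_cost, path + [child]))
    return None

# hypothetical weighted graph: the direct edge A->G costs more than A->B->G
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 1), ('G', 6)],
         'B': [('G', 2)], 'G': []}
print(uniform_cost_search('S', lambda s: s == 'G', lambda s: graph[s]))
# → (4, ['S', 'A', 'B', 'G'])
```

Testing the goal only when a node is popped (not when it is generated) is what makes the first solution returned the cheapest one: here the tempting edge A->G (total 7) loses to S->A->B->G (total 4).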
Uniform-Cost Snapshot
 [figure: the tree with nodes 1-31 and an edge cost on each link; legend: Initial, Visited, Fringe, Current, Visible, Goal; nodes are annotated with their accumulated path cost]
 Fringe: [27(10), 4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 15(16), 21(18)] + [22(16), 23(15)]
Uniform Cost Fringe Trace
 1. [1(0)]
 2. [3(3), 2(4)]
 3. [2(4), 6(5), 7(7)]
 4. [6(5), 5(6), 7(7), 4(11)]
 5. [5(6), 7(7), 13(8), 12(9), 4(11)]
 6. [7(7), 13(8), 12(9), 10(10), 11(10), 4(11)]
 7. [13(8), 12(9), 10(10), 11(10), 4(11), 14(13), 15(16)]
 8. [12(9), 10(10), 11(10), 27(10), 4(11), 26(12), 14(13), 15(16)]
 9. [10(10), 11(10), 27(10), 4(11), 26(12), 25(12), 14(13), 24(13), 15(16)]
 10. [11(10), 27(10), 4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 15(16), 21(18)]
 11. [27(10), 4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 23(15), 15(16), 22(16), 21(18)]
 12. [4(11), 25(12), 26(12), 14(13), 24(13), 20(14), 23(15), 15(16), 23(16), 21(18)]
 13. [25(12), 26(12), 14(13), 24(13), 8(13), 20(14), 23(15), 15(16), 23(16), 9(16), 21(18)]
 14. [26(12), 14(13), 24(13), 8(13), 20(14), 23(15), 15(16), 23(16), 9(16), 21(18)]
 15. [14(13), 24(13), 8(13), 20(14), 23(15), 15(16), 23(16), 9(16), 21(18)]
 16. [24(13), 8(13), 20(14), 23(15), 15(16), 23(16), 9(16), 29(16), 21(18), 28(21)]
 Goal reached!
 Notation: bold+yellow: current node; white: old fringe node; green+italics: new fringe node.
 Assumption: new nodes with the same cost as existing nodes are added after the existing node.
Breadth-First vs. Uniform-Cost
 breadth-first
   always expands the shallowest node
   only optimal if all step costs are equal
 uniform-cost
   considers the overall path cost
   optimal for any (reasonable) cost function
     non-zero, positive
   gets bogged down in trees with many fruitless, short branches
     low path cost, but no goal node
 both are complete for non-extreme problems
   finite number of branches
   strictly positive cost function
Depth-First
 continues exploring newly generated nodes
   achieved by the TREE-SEARCH method by appending newly generated nodes at the beginning of the search queue
     utilizes a Last-In, First-Out (LIFO) queue, or stack

function DEPTH-FIRST-SEARCH(problem) returns solution
  return TREE-SEARCH(problem, LIFO-QUEUE())

 Completeness: no (for infinite branch length)
 Time Complexity: b^m
 Space Complexity: b*m
 Optimality: no
 (b: branching factor, m: maximum path length)
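The same skeleton as breadth-first, but with a stack (LIFO) as the fringe; this sketch (my own, on the same hypothetical binary tree numbered like the snapshots) pushes children in reverse so the left child is expanded first.

```python
def depth_first_search(start, goal_test, successors):
    """DFS: LIFO fringe (a stack), so search always dives into the newest node."""
    fringe = [[start]]
    while fringe:
        path = fringe.pop()                 # pop from the top of the stack
        if goal_test(path[-1]):
            return path
        # push children reversed so the leftmost child is expanded first
        for child in reversed(successors(path[-1])):
            fringe.append(path + [child])
    return None

# complete binary tree: children of n are 2n, 2n+1, leaves below node 8
succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
print(depth_first_search(1, lambda n: n == 13, succ))  # → [1, 3, 6, 13]
```

Note that the whole left subtree under node 2 is explored (fruitlessly) before the search ever enters the subtree under node 3 that contains the goal.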
Depth-First Snapshot
 [figure: the tree with nodes 1-31; legend: Initial, Visited, Fringe, Current, Visible, Goal]
 Fringe: [3] + [22,23]
Depth-First vs. Breadth-First
 depth-first
   goes off into one branch until it reaches a leaf node
     not good if the goal is on another branch
     neither complete nor optimal
   uses much less space than breadth-first
     much fewer visited nodes to keep track of
     smaller fringe
 breadth-first
   is more careful by checking all alternatives
   complete and optimal
     under most circumstances
   very memory-intensive
Backtracking Search
 variation of depth-first search
   only one successor node is generated at a time
     even better space complexity: O(m) instead of O(b*m)
   even more memory space can be saved by incrementally modifying the current state, instead of creating a new one
     only possible if the modifications can be undone
     this is referred to as backtracking
 frequently used in planning, theorem proving
Depth-Limited Search
 similar to depth-first, but with a depth limit
   overcomes problems with infinite paths
   sometimes a depth limit can be inferred or estimated from the problem description
     in other cases, a good depth limit is only known when the problem is solved
 based on the TREE-SEARCH method
   must keep track of the depth

function DEPTH-LIMITED-SEARCH(problem, depth-limit) returns solution
  return TREE-SEARCH(problem, depth-limit, LIFO-QUEUE())

 Completeness: no (goal beyond l, or infinite branch length)
 Time Complexity: b^l
 Space Complexity: b*l
 Optimality: no
 (b: branching factor, l: depth limit)
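A recursive sketch (my own illustration) that distinguishes a genuine failure (`None`) from a cutoff, so a caller can tell whether raising the limit might still help:

```python
def depth_limited_search(state, goal_test, successors, limit):
    """Depth-limited DFS; returns a path, 'cutoff', or None (true failure)."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return 'cutoff'                     # depth limit reached on this branch
    cutoff = False
    for child in successors(state):
        result = depth_limited_search(child, goal_test, successors, limit - 1)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return [state] + result         # success: prepend the current state
    return 'cutoff' if cutoff else None

# complete binary tree: children of n are 2n, 2n+1, leaves below node 8
succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
print(depth_limited_search(1, lambda n: n == 13, succ, 3))  # → [1, 3, 6, 13]
print(depth_limited_search(1, lambda n: n == 13, succ, 2))  # → cutoff
```

Node 13 sits three edges below the root, so a limit of 2 reports a cutoff while a limit of 3 finds the path.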
Iterative Deepening
 applies DEPTH-LIMITED search with increasing depth limits
 combines advantages of the BREADTH-FIRST and DEPTH-FIRST methods
   many states are expanded multiple times
     doesn’t really matter because the number of those nodes is small
 in practice, one of the best uninformed search methods
   for large search spaces, unknown depth

function ITERATIVE-DEEPENING-SEARCH(problem) returns solution
  for depth := 0 to unlimited do
    result := DEPTH-LIMITED-SEARCH(problem, depth)
    if result != cutoff then return result

 Completeness: yes
 Time Complexity: b^d
 Space Complexity: b*d
 Optimality: yes (all step costs identical)
 (b: branching factor, d: depth of the shallowest goal node)
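The outer loop can be sketched directly on top of a depth-limited helper (both my own illustration, on the same hypothetical binary tree):

```python
from itertools import count

def dls(state, goal_test, successors, limit):
    """Depth-limited helper: returns a path, 'cutoff', or None."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return 'cutoff'
    cutoff = False
    for child in successors(state):
        result = dls(child, goal_test, successors, limit - 1)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return [state] + result
    return 'cutoff' if cutoff else None

def iterative_deepening_search(start, goal_test, successors):
    """Run depth-limited search with limits 0, 1, 2, ... until no cutoff occurs."""
    for depth in count():
        result = dls(start, goal_test, successors, depth)
        if result != 'cutoff':
            return result                   # either a solution or a true failure

succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
print(iterative_deepening_search(1, lambda n: n == 13, succ))  # → [1, 3, 6, 13]
```

Because each iteration restarts from the root, shallow nodes are re-expanded at every depth limit, but for branching factor b > 1 the deepest level dominates the total work.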
Bi-directional Search
 search simultaneously from two directions
   forward from the initial state and backward from the goal state
 may lead to substantial savings if it is applicable
 has severe limitations
   predecessors must be generated, which is not always possible
   search must be coordinated between the two searches
   one search must keep all nodes in memory

 Completeness: yes (b finite, breadth-first for both directions)
 Time Complexity: b^(d/2)
 Space Complexity: b^(d/2)
 Optimality: yes (all step costs identical, breadth-first for both directions)
 (b: branching factor, d: tree depth)
Improving Search Methods
 make the algorithms more efficient
   avoiding repeated states
   utilizing memory efficiently
 use additional knowledge about the problem
   properties (“shape”) of the search space
     more interesting areas are investigated first
   pruning of irrelevant areas
     areas that are guaranteed not to contain a solution can be discarded
Avoiding Repeated States
 in many approaches, states may be expanded multiple times
   e.g. iterative deepening
   problems with reversible actions
 eliminating repeated states may yield an exponential reduction in search cost
   e.g. some n-queens strategies
     place queen in the left-most non-threatening column
   rectangular grid
     4^d leaves, but only 2d^2 distinct states
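The standard fix is to keep an explored set, turning tree search into graph search. A minimal sketch (my own illustration; the cyclic graph is made up):

```python
from collections import deque

def graph_search(start, goal_test, successors):
    """Breadth-first graph search: each state is expanded at most once."""
    fringe = deque([[start]])
    explored = set()
    while fringe:
        path = fringe.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        if state in explored:
            continue                        # already expanded via another path
        explored.add(state)
        for child in successors(state):
            if child not in explored:
                fringe.append(path + [child])
    return None

# hypothetical graph with reversible actions (cycles through S)
graph = {'S': ['A', 'B'], 'A': ['S', 'G'], 'B': ['S'], 'G': []}
print(graph_search('S', lambda s: s == 'G', lambda s: graph[s]))
# → ['S', 'A', 'G']
```

Without the explored set, the S-A-S and S-B-S cycles would make plain tree search generate ever-longer paths; with it, the search terminates after expanding each state once.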
Informed Search
 relies on additional knowledge about the problem or domain
   frequently expressed through heuristics (“rules of thumb”)
 used to distinguish more promising paths towards a goal
   may be misled, depending on the quality of the heuristic
 in general, performs much better than uninformed search
   but frequently still exponential in time and space for realistic problems
Best-First Search
 relies on an evaluation function that gives an indication of how useful it would be to expand a node
   family of search methods with various evaluation functions
   usually gives an estimate of the distance to the goal
   often referred to as heuristics in this context
 the node with the lowest value is expanded first
   the name is a little misleading: the node with the lowest value for the evaluation function is not necessarily one that is on an optimal path to a goal
   if we really knew which one is the best, there would be no need to do a search

function BEST-FIRST-SEARCH(problem, EVAL-FN) returns solution
  fringe := queue with nodes ordered by EVAL-FN
  return TREE-SEARCH(problem, fringe)
Greedy Best-First Search
 minimizes the estimated cost to a goal
   expand the node that seems to be closest to a goal
   utilizes a heuristic function as evaluation function
     f(n) = h(n) = estimated cost from the current node to a goal
     heuristic functions are problem-specific
     often straight-line distance for route-finding and similar problems
 often better than depth-first, although worst-case time and space complexities are equal or worse

function GREEDY-SEARCH(problem) returns solution
  return BEST-FIRST-SEARCH(problem, h)

 Completeness: no
 Time Complexity: b^m
 Space Complexity: b^m
 Optimality: no
 (b: branching factor, m: maximum depth of the search tree)
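A greedy best-first sketch: best-first search where the priority queue is ordered by h(n) alone (my own illustration; the graph and heuristic values are made up):

```python
import heapq

def greedy_best_first_search(start, goal_test, successors, h):
    """Expand the fringe node with the smallest heuristic estimate h(n)."""
    fringe = [(h(start), [start])]          # min-heap ordered by h only
    while fringe:
        _, path = heapq.heappop(fringe)
        if goal_test(path[-1]):
            return path
        for child in successors(path[-1]):
            heapq.heappush(fringe, (h(child), path + [child]))
    return None

# hypothetical graph and heuristic (smaller h = estimated closer to the goal)
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = lambda s: {'S': 3, 'A': 2, 'B': 1, 'G': 0}[s]
print(greedy_best_first_search('S', lambda s: s == 'G', lambda s: graph[s], h))
# → ['S', 'B', 'G']
```

Note that path cost never enters the ordering: the search follows whatever looks closest, which is why it is neither complete (it can loop on a misleading h) nor optimal.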
Greedy Best-First Search Snapshot
 [figure: the tree with nodes 1-31; heuristic values (7, 6, ..., 1, 0, 1, ..., 7) along the leaf level; legend: Initial, Visited, Fringe, Current, Visible, Goal]
 Fringe: [13(4), 7(6), 8(7)] + [24(0), 25(1)]
A* Search
 combines greedy and uniform-cost search to find the (estimated) cheapest path through the current node
   f(n) = g(n) + h(n) = path cost + estimated cost to the goal
 heuristics must be admissible
   never overestimate the cost to reach the goal
 very good search method, but with complexity problems

function A*-SEARCH(problem) returns solution
  return BEST-FIRST-SEARCH(problem, g+h)

 Completeness: yes
 Time Complexity: b^d
 Space Complexity: b^d
 Optimality: yes
 (b: branching factor, d: depth of the solution)
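The same priority-queue skeleton as uniform-cost, ordered by f(n) = g(n) + h(n) instead of g(n) alone (my own sketch; the graph and the admissible heuristic are made up):

```python
import heapq

def a_star_search(start, goal_test, successors, h):
    """A*: expand the node with the lowest f(n) = g(n) + h(n); h must be admissible."""
    fringe = [(h(start), 0, [start])]       # min-heap of (f, g, path)
    while fringe:
        f, g, path = heapq.heappop(fringe)
        if goal_test(path[-1]):
            return g, path
        for child, step_cost in successors(path[-1]):
            g2 = g + step_cost
            heapq.heappush(fringe, (g2 + h(child), g2, path + [child]))
    return None

# hypothetical weighted graph; h never overestimates the true remaining cost
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 1), ('G', 6)],
         'B': [('G', 2)], 'G': []}
h = lambda s: {'S': 3, 'A': 3, 'B': 2, 'G': 0}[s]
print(a_star_search('S', lambda s: s == 'G', lambda s: graph[s], h))
# → (4, ['S', 'A', 'B', 'G'])
```

With this admissible h, the first goal node popped from the queue is guaranteed to lie on an optimal path: here S->A->B->G with cost 4, despite the cheaper-looking direct edge A->G.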
A* Snapshot
 [figure: the tree with nodes 1-31, edge costs on the links, heuristic values along the leaf level, and f-costs annotated on the expanded nodes; legend: Initial, Visited, Fringe, Current, Visible, Goal]
 Fringe: [2(4+7), 13(3+2+3+4), 7(3+4+6)] + [24(3+2+4+4+0), 25(3+2+4+3+1)]
A* Snapshot with all f-Costs
 [figure: the same tree with edge costs and heuristic values, now showing the f-cost computed for every node; legend: Initial, Visited, Fringe, Current, Visible, Goal]
A* Properties
 the value of f never decreases along any path starting from the initial node
   also known as monotonicity of the function
   almost all admissible heuristics show monotonicity
     those that don’t can be modified through minor changes
 this property can be used to draw contours
   regions where the f-cost is below a certain threshold
   with uniform-cost search (h = 0), the contours are circular
   the better the heuristic h, the narrower the contour around the optimal path
A* Snapshot with Contour f=11
 [figure: the tree with edge costs, heuristic values, and per-node f-costs; a contour encloses all nodes with f-cost below 11]
A* Snapshot with Contour f=13
[Figure: search tree with f-costs and a contour enclosing all nodes with f-cost at most 13; legend: Initial, Visited, Fringe, Current, Visible, Goal, Contour]
© 2000-2008 Franz Kurfess
Search 91
Optimality of A*
 A*
will find the optimal solution
 the
 A*
first solution found is the optimal one
is optimally efficient
 no
other algorithm is guaranteed to expand fewer nodes
than A*
 A*
is not always “the best” algorithm
 optimality
refers to the expansion of nodes

other criteria might be more relevant
 it
generates and keeps all nodes in memory

improved in variations of A*
© 2000-2008 Franz Kurfess
Search 92
Complexity of A*
 the
number of nodes within the goal contour search
space is still exponential
 with
respect to the length of the solution
 better than other algorithms, but still problematic
 frequently,
space complexity is more severe than
time complexity
 A*
keeps all generated nodes in memory
© 2000-2008 Franz Kurfess
Search 93
Memory-Bounded Search
 search
algorithms that try to conserve memory
 most are modifications of A*
 iterative
deepening A* (IDA*)
 simplified memory-bounded A* (SMA*)
© 2000-2008 Franz Kurfess
Search 94
Iterative Deepening A* (IDA*)
 explores
paths within a given contour (f-cost limit) in
a depth-first manner
 this
saves memory space because depth-first keeps only
the current path in memory

but it results in repeated computation of earlier contours since it
doesn’t remember its history
 was
the “best” search algorithm for many practical
problems for some time
 does have problems with difficult domains


contours differ only slightly between states
algorithm frequently switches back and forth

similar to disk thrashing in (old) operating systems
© 2000-2008 Franz Kurfess
Search 95
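The contour idea above can be sketched as code: depth-first search bounded by an f-cost limit, restarting with the limit raised to the smallest f-value that exceeded it. The toy graph and heuristic are hypothetical:

```python
import math

def ida_star(start, goal, neighbors, h):
    """Iterative deepening A*: depth-first search bounded by an f-cost
    limit; the limit grows to the smallest f-value that exceeded it."""
    def dfs(path, g, limit):
        node = path[-1]
        f = g + h(node)
        if f > limit:
            return f                  # report the f-value that broke the limit
        if node == goal:
            return path
        smallest = math.inf
        for succ, cost in neighbors(node):
            if succ in path:          # avoid cycles along the current path
                continue
            result = dfs(path + [succ], g + cost, limit)
            if isinstance(result, list):
                return result
            smallest = min(smallest, result)
        return smallest

    limit = h(start)
    while True:
        result = dfs([start], 0, limit)
        if isinstance(result, list):
            return result
        if result == math.inf:
            return None
        limit = result                # re-explore with the next contour

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}.get
result = ida_star("A", "D", lambda n: graph[n], h)
print(result)  # ['A', 'B', 'C', 'D']
```

Only the current path is stored, which is the memory saving the slide describes; the repeated dfs calls over earlier contours are the recomputation cost it mentions.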
Recursive Best-First Search
 similar
to best-first search, but with lower space
requirements
 O(bd)
instead of O(b^m)
 it
keeps track of the best alternative to the current
path
 best
f-value of the paths explored so far from predecessors
of the current node
 if it needs to re-explore parts of the search space, it knows
the best candidate path
 still may lead to multiple re-explorations
© 2000-2008 Franz Kurfess
Search 96
Simplified Memory-Bounded A*
(SMA*)
 uses
all available memory for the search
 drops

nodes from the queue when it runs out of space
those with the highest f-costs
 avoids

re-computation of already explored area
keeps information about the best path of a “forgotten” subtree in its
ancestor
 complete
if there is enough memory for the shortest
solution path
 often better than A* and IDA*


but some problems are still too tough
trade-off between time and space requirements
© 2000-2008 Franz Kurfess
Search 97
Heuristics for Searching
 for
many tasks, a good heuristic is the key to finding
a solution
 prune
the search space
 move towards the goal
 relaxed
problems
 fewer
restrictions on the successor function (operators)
 its exact solution may be a good heuristic for the original
problem
© 2000-2008 Franz Kurfess
Search 98
8-Puzzle Heuristics
 level




of difficulty
around 20 steps for a typical solution
branching factor is about 3
exhaustive search would be 3^20 ≈ 3.5 * 10^9
9!/2 = 181,440 different reachable states

distinct arrangements of 9 squares
 candidates


number of tiles in the wrong position
sum of distances of the tiles from their goal position

city block or Manhattan distance
 generation

for heuristic functions
of heuristics
possible from formal specifications
© 2000-2008 Franz Kurfess
Search 99
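The two candidate heuristics on the slide can be sketched directly. The 9-tuple state encoding (read row by row, 0 for the blank) is an assumption of this sketch, not from the slides:

```python
def manhattan(state, goal):
    """Sum of city-block (Manhattan) distances of each tile from its
    goal position.  States are 9-tuples read row by row; 0 marks the
    blank and is not counted."""
    total = 0
    for tile in range(1, 9):
        i, j = divmod(state.index(tile), 3)
        gi, gj = divmod(goal.index(tile), 3)
        total += abs(i - gi) + abs(j - gj)
    return total

def misplaced(state, goal):
    """Number of tiles in the wrong position (the weaker candidate)."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)
print(misplaced(state, goal), manhattan(state, goal))  # 2 2
```

Manhattan distance dominates the misplaced-tiles count (it is never smaller), which generally makes it the better heuristic of the two.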
Local Search and Optimization
 for
some problem classes, it is sufficient to find a
solution
 the
path to the solution is not relevant
 memory
requirements can be dramatically relaxed
by modifying the current state
 all
previous states can be discarded
 since only information about the current state is kept, such
methods are called local
© 2000-2008 Franz Kurfess
Search 100
Iterative Improvement Search
 for
some problems, the state description provides all
the information required for a solution
 path
costs become irrelevant
 global maximum or minimum corresponds to the optimal
solution
 iterative
improvement algorithms start with some
configuration, and try modifications to improve the
quality
 8-queens:
number of un-attacked queens
 VLSI layout: total wire length
 analogy:
state space as landscape with hills and
valleys
© 2000-2008 Franz Kurfess
Search 101
Hill-Climbing Search
 continually
moves uphill
 increasing
value of the evaluation function
 gradient descent search is a variation that moves downhill
 very
simple strategy with low space requirements
 stores
only the state and its evaluation, no search tree
 problems
 local
maxima

algorithm can’t go higher, but is not at a satisfactory solution
 plateau

area where the evaluation function is flat
 ridges

search may oscillate slowly
© 2000-2008 Franz Kurfess
Search 102
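A minimal sketch of the strategy described above, storing only the current state and its evaluation; the one-dimensional toy landscape is invented for illustration:

```python
def hill_climb(state, value, neighbors):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor; stop at a (possibly only local) maximum.  Only the
    current state and its value are kept, no search tree."""
    current, current_val = state, value(state)
    while True:
        best, best_val = current, current_val
        for n in neighbors(current):
            v = value(n)
            if v > best_val:
                best, best_val = n, v
        if best == current:          # no uphill neighbor: a maximum
            return current, current_val
        current, current_val = best, best_val

# toy 1-D landscape: maximize -(x - 7)^2 over the integers
value = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
peak, peak_val = hill_climb(0, value, neighbors)
print(peak, peak_val)  # 7 0
```

On this single-peaked landscape the climb always reaches the global maximum; the local-maxima, plateau, and ridge problems listed above appear only on less well-behaved evaluation functions.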
Simulated Annealing
 similar
to hill-climbing, but some down-hill movement
 random
move instead of the best move
 depends on two parameters


∆E, energy difference between moves; T, temperature
temperature is slowly lowered, making bad moves less likely
 analogy
to annealing
 gradual
cooling of a liquid until it freezes
 will
find the global optimum if the temperature is
lowered slowly enough
 applied to routing and scheduling problems
 VLSI
layout, scheduling
© 2000-2008 Franz Kurfess
Search 103
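The annealing loop can be sketched as follows; the geometric temperature schedule and the toy objective are arbitrary choices for illustration, not from the slides:

```python
import math, random

def simulated_annealing(state, value, neighbor, schedule):
    """Hill climbing with random moves: a downhill step (dE < 0) is
    accepted with probability exp(dE / T); T is lowered by the schedule,
    making bad moves less and less likely."""
    current, current_val = state, value(state)
    for t in schedule:               # temperatures, slowly decreasing
        nxt = neighbor(current)
        d_e = value(nxt) - current_val
        if d_e > 0 or random.random() < math.exp(d_e / t):
            current, current_val = nxt, value(nxt)
    return current, current_val

random.seed(0)
value = lambda x: -(x - 7) ** 2                  # same toy landscape as before
neighbor = lambda x: x + random.choice([-1, 1])  # random move, not the best one
schedule = [10 * 0.95 ** k for k in range(500)]  # geometric cooling
best, best_val = simulated_annealing(0, value, neighbor, schedule)
print(best, best_val)
```

At high temperature almost any move is accepted; as T shrinks the acceptance probability for downhill moves approaches zero and the behavior becomes greedy, mirroring the gradual-cooling analogy.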
Local Beam Search
 variation
of beam search
 a path-based method that looks at several paths “around”
the current one
 keeps
k states in memory, instead of only one
 information

between the states can be shared
moves to the most promising areas
 stochastic
local beam search selects the k successor
states randomly
 with
a probability determined by the evaluation function
© 2000-2008 Franz Kurfess
Search 104
Genetic Algorithms (GAs)
 variation
of stochastic beam search
 successor
states are generated as variations of two parent
states, not only one
 corresponds to natural selection with sexual reproduction
© 2000-2008 Franz Kurfess
Search 105
GA Terminology
 population

set of k randomly generated states
 generation


population at a point in time
usually, propagation is synchronized for the whole population
 individual


one element from the population
described as a string over a finite alphabet


binary, ACGT, letters, digits
consistent for the whole population
 fitness


function
evaluation function in search terminology
higher values lead to better chances for reproduction
© 2000-2008 Franz Kurfess
Search 106
GA Principles
 reproduction

the state description of the two parents is split at the crossover point



determined in advance, often randomly chosen
must be the same for both parents
one part is combined with the other part of the other parent


one or both of the descendants may be added to the population
compatible state descriptions should assure viable descendants


depends on the choice of the representation
may not have a high fitness value
 mutation

each individual may be subject to random modifications in its state
description

usually with a low probability
 schema

useful components of a solution can be preserved across generations
© 2000-2008 Franz Kurfess
Search 107
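The reproduction and mutation principles above, sketched on bit strings with a toy "count the ones" fitness function; the string representation, mutation rate, and population parameters are all choices of this sketch:

```python
import random

def crossover(p1, p2, point):
    """Split both parents at the same crossover point and recombine."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(ind, rate=0.05):
    """Flip each bit independently with a small probability."""
    return "".join(b if random.random() > rate else "10"[int(b)]
                   for b in ind)

def ga(fitness, length=12, pop_size=20, generations=60):
    # population: pop_size randomly generated individuals
    pop = ["".join(random.choice("01") for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        # fitness-weighted selection of parents (roulette wheel)
        weights = [fitness(i) + 1 for i in pop]
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = random.choices(pop, weights=weights, k=2)
            c1, c2 = crossover(p1, p2, random.randrange(1, length))
            nxt += [mutate(c1), mutate(c2)]
        pop = nxt
    return max(pop, key=fitness)

random.seed(1)
best = ga(fitness=lambda ind: ind.count("1"))  # toy "one-max" fitness
print(best)
```

Crossover at a shared point preserves useful substrings (schemata) across generations, while mutation keeps some diversity in the population.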
GA Applications
 often
used for optimization problems
 circuit
layout, system design, scheduling
 termination
 “good
enough” solution found
 no significant improvements over several generations
 time limit
© 2000-2008 Franz Kurfess
Search 108
Constraint Satisfaction
 satisfies

additional structural properties of the problem
may depend on the representation of the problem
 the
problem is defined through a set of variables and a set of
domains


variables can have possible values specified by the problem
constraints describe allowable combinations of values for a subset of
the variables
 state

in a CSP
defined by an assignment of values to some or all variables
 solution



to a CSP
must assign values to all variables
must satisfy all constraints
solutions may be ranked according to an objective function
© 2000-2008 Franz Kurfess
Search 109
CSP Approach
 the
goal test is decomposed into a set of constraints
on variables
 checks
for violation of constraints before new nodes are
generated

must backtrack if constraints are violated
 forward-checking

looks ahead to detect unsolvability
based on the current values of constraint variables
© 2000-2008 Franz Kurfess
Search 110
CSP Example: Map Coloring
 color
a map with three colors so that adjacent
countries have different colors
[Figure: map with regions A, B, C, D, E, F, G]
variables:
A, B, C, D, E, F, G
values:
{red, green, blue}
constraints:
“no neighboring regions
have the same color”
legal combinations for A, B:
{(red, green), (red, blue),
(green, red), (green, blue),
(blue, red), (blue, green)}
© 2000-2008 Franz Kurfess
Search 111
Constraint Graph
 visual
representation of a CSP
 variables
are nodes
 arcs are constraints
[Figure: the map coloring example represented as a constraint graph over nodes A, B, C, D, E, F, G]
© 2000-2008 Franz Kurfess
Search 112
Benefits of CSP
 standardized
representation pattern
 variables
with assigned values
 constraints on the values
 allows the use of generic heuristics

no domain knowledge is required
© 2000-2008 Franz Kurfess
Search 113
CSP as Incremental Search Problem
 initial
 all
state
(or at least some) variables unassigned
 successor
function
 assign
a value to an unassigned variable
 must not conflict with previously assigned variables
 goal
test
 all
variables have values assigned
 no conflicts possible

 path
not allowed in the successor function
cost
 e.g.
a constant for each step
 may be problem-specific
© 2000-2008 Franz Kurfess
Search 114
CSPs and Search
 in
principle, any search algorithm can be used to
solve a CSP
 awful

n*d for n variables with d values at the top level, (n-1)*d at the next
level, etc.
 not

branching factor
very efficient, since they neglect some CSP properties
commutativity: the order in which values are assigned to variables
is irrelevant, since the outcome is the same
© 2000-2008 Franz Kurfess
Search 115
Backtracking Search for CSPs
a
variation of depth-first search that is often used for
CSPs
 values
are chosen for one variable at a time
 if no legal values are left, the algorithm backs up and
changes a previous assignment
 very easy to implement

initial state, successor function, goal test are standardized
 not

very efficient
can be improved by trying to select more suitable unassigned
variables first
© 2000-2008 Franz Kurfess
Search 116
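The backtracking scheme above can be sketched for map coloring. Since the slide's map is shown only as a figure, the adjacency relation below is made up for illustration:

```python
def backtrack(assignment, variables, domains, neighbors):
    """Depth-first backtracking for a CSP: values are chosen for one
    variable at a time; if no legal value is left, back up and change
    a previous assignment."""
    if len(assignment) == len(variables):
        return assignment                      # goal test: all assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # constraint check: no assigned neighbor already has this color
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value},
                               variables, domains, neighbors)
            if result is not None:
                return result
    return None                                # no legal value: backtrack

# map coloring with a hypothetical adjacency; three colors suffice here
variables = ["A", "B", "C", "D"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"A": ["B", "C"], "B": ["A", "C", "D"],
             "C": ["A", "B", "D"], "D": ["B", "C"]}
solution = backtrack({}, variables, domains, neighbors)
print(solution)  # {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```

The fixed variable order here is the naive choice; replacing `next(...)` with the most-constrained-variable heuristic from the next slide is the usual improvement.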
Heuristics for CSP
 most-constrained
variable (minimum remaining
values, “fail-first”)
 variable
with the fewest possible values is selected
 tends to minimize the branching factor
 most-constraining
variable
 variable
with the largest number of constraints on other
unassigned variables
 least-constraining
value
 for
a selected variable, choose the value that leaves more
freedom for future choices
© 2000-2008 Franz Kurfess
Search 117
Analyzing Constraints
 forward
checking
 when
a value X is assigned to a variable, inconsistent
values are eliminated for all variables connected to X

identifies “dead” branches of the tree before they are visited
 constraint
propagation
 analyses
interdependencies between variable
assignments via arc consistency



an arc between X and Y is consistent if for every possible value x of
X, there is some value y of Y that is consistent with x
more powerful than forward checking, but still reasonably efficient
but does not reveal every possible inconsistency
© 2000-2008 Franz Kurfess
Search 118
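The arc-consistency idea can be sketched as the standard AC-3 loop, here with inequality ("different color") constraints as in map coloring; the tiny domains are invented for illustration:

```python
from collections import deque

def ac3(domains, neighbors):
    """Arc consistency for inequality constraints: an arc (X, Y) is
    consistent if every value x of X has some value y of Y with x != y.
    Inconsistent values are pruned; affected arcs are re-queued."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # remove values of x that have no supporting value in y
        pruned = [v for v in domains[x]
                  if not any(v != w for w in domains[y])]
        if pruned:
            domains[x] = [v for v in domains[x] if v not in pruned]
            if not domains[x]:
                return False          # empty domain: unsolvable
            for z in neighbors[x]:
                if z != y:
                    queue.append((z, x))
    return True

# toy instance: A is fixed to red, which forces B and then C
domains = {"A": ["red"], "B": ["red", "green"], "C": ["red", "green"]}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
ok = ac3(domains, neighbors)
print(ok, domains)  # True {'A': ['red'], 'B': ['green'], 'C': ['red']}
```

Propagation here does strictly more pruning than forward checking alone (forcing C as well as B), but as the slide notes it still does not reveal every possible inconsistency.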
Local Search and CSP
 local
search (iterative improvement) is frequently used for
constraint satisfaction problems


values are assigned to all variables
modification operators move the configuration towards a solution
 often

called heuristic repair methods
repair inconsistencies in the current configuration
 simple


minimizes the number of conflicts with other variables
solves many problems very quickly

 can

strategy: min-conflicts
million-queens problem in less than 50 steps
be run as online algorithm
use the current state as new initial state
© 2000-2008 Franz Kurfess
Search 119
Analyzing Problem Structures
 some
problem properties can be derived from the
structure of the respective constraint graph
 isolated




sub-problems
no connections to other parts of the graph
can be solved independently
e.g. “islands” in map-coloring problems
dividing a problem into independent sub-problems reduces
complexity tremendously

ideally from exponential to polynomial or even linear
 tree


if the constraint graph is a tree, the CSP can be solved in time
linear in the number of variables
sometimes solutions can be found by reducing a general graph to a
tree

nodes are removed or collapsed
© 2000-2008 Franz Kurfess
Search 120
CSP Example: Map Coloring (cont.)
 most-constrained-variable
heuristic
[Figure: the map with regions A, B, C, D, E, F, G colored using the most-constrained-variable heuristic]
© 2000-2008 Franz Kurfess
Search 121
8-Queens with Min-Conflicts
 one
queen in each column
 usually
several conflicts
 calculate
the number of conflicts for each possible
position of a selected queen
 move the queen to the position with the least
conflicts
© 2000-2008 Franz Kurfess
Search 122
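The procedure above can be sketched for n-queens with one queen per column; the step limit and random initial placement are choices of this sketch:

```python
import random

def conflicts(cols, col, row):
    """Number of queens attacking square (col, row); cols[c] is the row
    of the queen in column c (one queen per column)."""
    return sum(1 for c, r in enumerate(cols)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=1000):
    cols = [random.randrange(n) for _ in range(n)]    # random start
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(cols, c, cols[c])]
        if not conflicted:
            return cols                               # no conflicts: solved
        col = random.choice(conflicted)               # pick a conflicted queen
        # move her to the position with the least conflicts in her column
        cols[col] = min(range(n), key=lambda row: conflicts(cols, col, row))
    return None

random.seed(2)
solution = min_conflicts()
print(solution)
```

Each step only repairs the current configuration, so memory stays constant; the board snapshots on the following slides show exactly these per-square conflict counts.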
8-Queens Example 1
[Figure: 8-queens board annotated with the number of conflicts for each candidate position of the selected queen]
© 2000-2008 Franz Kurfess
Search 123
8-Queens Example 1
[Figure: board after moving the selected queen; conflict counts recomputed, with one candidate position showing 0 conflicts]
© 2000-2008 Franz Kurfess
Search 124
8-Queens Example 1
 solution
found in 4 steps
 min-conflicts heuristic
 uses additional heuristics to
select the “best” queen to
move

try to move out of the corners

similar to least-constraining
value heuristics
© 2000-2008 Franz Kurfess
Search 125
8-Queens Example 2
[Figure: 8-queens board annotated with the number of conflicts for each candidate position of the selected queen]
© 2000-2008 Franz Kurfess
Search 126
8-Queens Example 2
[Figure: board after the first min-conflicts move; conflict counts recomputed for the next selected queen]
© 2000-2008 Franz Kurfess
Search 127
8-Queens Example 2
[Figure: board after a further min-conflicts move; several candidate positions now show 0 conflicts]
© 2000-2008 Franz Kurfess
Search 128
CSP Properties
 discrete variables over finite domains
 relatively simple
 e.g. map coloring, 8-queens
 number of variable assignments can be d^n
 domain size d, n variables
 exponential time complexity (worst-case)
 in practice, generic CSP algorithms can solve problems much larger than
regular search algorithms
 more complex problems may require the use of a constraint language
 it is not possible to enumerate all combinations of values
 e.g. job scheduling (“precedes”, “duration”)
 related problems are studied in the field of Operations Research
 often continuous domains; e.g. linear programming
© 2000-2008 Franz Kurfess
Search 133
Important Concepts and Terms
















agent
A* search
best-first search
bi-directional search
breadth-first search
depth-first search
depth-limited search
completeness
constraint satisfaction
genetic algorithm
general search algorithm
goal
goal test function
greedy best-first search
heuristics
© 2000-2008 Franz Kurfess

















initial state
iterative deepening search
iterative improvement
local search
memory-bounded search
operator
optimality
path
path cost function
problem
recursive best-first search
search
space complexity
state
state space
time complexity
uniform-cost search
Search 136
Chapter Summary
 tasks

can often be formulated as search problems
initial state, successor function (operators), goal test, path cost
 various
search methods systematically comb the search
space

uninformed search


breadth-first, depth-first, and variations
informed search

best-first, A*, iterative improvement
 the
choice of good heuristics can improve the search
dramatically

task-dependent
© 2000-2008 Franz Kurfess
Search 137