Automated Planning: The State Space and Forward-Chaining State Space Search
Jonas Kvarnström, Automated Planning Group, Department of Computer and Information Science, Linköping University
jonas.kvarnstrom@liu.se – 2016

About Examples
We tend to use toy examples in very simple domains:
▪ to learn fundamental principles,
▪ to focus on algorithms and concepts, not domain details,
▪ to create readable, comprehensible examples.
Always remember: real-world problems are larger and more complex.

Domain 1: Towers of Hanoi

ToH 1: Towers of Hanoi
The domain uses three predicates and a single operator:
▪ clear(?x): "nothing is on top of ?x"
▪ on(?x ?y): "?x is on top of ?y"
▪ smaller(?x ?y): "?y is smaller than ?x"

    (define (domain hanoi)
      (:requirements :strips)
      (:predicates (clear ?x) (on ?x ?y) (smaller ?x ?y))
      (:action move
        :parameters (?disc ?from ?to)
        :precondition (and (smaller ?to ?disc) (on ?disc ?from)
                           (clear ?disc) (clear ?to))
        :effect (and (clear ?from) (on ?disc ?to)
                     (not (on ?disc ?from)) (not (clear ?to)))))

To shorten the description, disks and pegs are "equivalent": pegs are simply the largest disks, so they can never be moved.

    (define (problem hanoi3)
      (:domain hanoi)
      (:objects peg1 peg2 peg3 d1 d2 d3)
      (:init
        (smaller peg1 d1) (smaller peg1 d2) (smaller peg1 d3)
        (smaller peg2 d1) (smaller peg2 d2) (smaller peg2 d3)
        (smaller peg3 d1) (smaller peg3 d2) (smaller peg3 d3)
        (smaller d2 d1) (smaller d3 d1) (smaller d3 d2)
        (clear peg2) (clear peg3) (clear d1)
        (on d3 peg1) (on d2 d3) (on d1 d2))
      (:goal (and (on d3 peg3) (on d2 d3) (on d1 d2))))

[Figure: disks d1, d2, d3 stacked in order on peg1; pegs peg1, peg2, peg3.]

ToH 2: Number of States
How many states exist for this problem?
Answer: every assignment of values to the ground atoms is one state.
▪ 6 objects
▪ 2^6 combinations of "clear"
▪ 2^(6·6) = 2^36 combinations of "on"
▪ 2^(6·6) = 2^36 combinations of "smaller"
▪ 2^78 combinations in total: 302,231,454,903,657,293,676,544
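As a quick sanity check on this counting argument, here is a small Python sketch (added for this write-up, not part of the original slides) that counts the ground atoms of hanoi3 and evaluates the corresponding powers of two; it also evaluates the variant without the rigid smaller predicate, which the next slide discusses.

    # Count ground atoms for the hanoi3 problem and the resulting number of states.
    objects = ["peg1", "peg2", "peg3", "d1", "d2", "d3"]
    n = len(objects)

    atoms_clear = n          # clear(?x)
    atoms_on = n * n         # on(?x ?y)
    atoms_smaller = n * n    # smaller(?x ?y)

    total = atoms_clear + atoms_on + atoms_smaller    # 78 ground atoms
    print(2 ** total)                                 # 302231454903657293676544
    print(2 ** (atoms_clear + atoms_on))              # 4398046511104 (no "smaller" atoms)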
ToH 3: Without Rigid Predicates
Suppose we don't include rigid predicates ("smaller") in the state?
Answer: every assignment of values to the remaining ground atoms is one state. (The domain and problem are otherwise the same as above.)
▪ 6 objects
▪ 2^6 combinations of "clear"
▪ 2^36 combinations of "on"
▪ 2^42 combinations in total: 4,398,046,511,104

ToH 4: Reachable From…
How many states are reachable from the given initial state, using the given actions?
▪ 27, out of 302,231,454,903,657,293,676,544.
▪ The other states still exist! For example, a state s17 in which clear(peg1) is true, clear(peg2) is false, clear(peg3) is false, on(d1,peg1) is false, on(d3,peg2) is true, … and many more states besides.

ToH 5: Reachable States
States are not inherently "reachable" or "unreachable": they are reachable from a specific starting point!

ToH 6: Reachable from "Forbidden"
Suppose your initial state was a configuration that breaks the rules for Towers of Hanoi, with a larger disk resting on a smaller one. Such a state is unreachable from "all disks in the right order", but it exists nonetheless: states obey no rules. And wherever the preconditions of move hold, move can be applied, so other states become reachable from this state.

ToH 7: Reachable from "Impossible"
Suppose your initial state was (and (on peg1 peg2) (on d1 d2) (on d2 d1) …). This cannot even be visualized: it is physically impossible. But the state exists; it is just a combination of true/false values. And again, wherever the preconditions hold, move can be applied, so other states become reachable from it.

ToH 8: Larger Reachable
A larger (but still tiny) example: Towers of Hanoi with 7 disks has 2187 reachable states. Most reachable state spaces are far less regular and can have dead ends, …

Domain 2: The Blocks World

BW 1: Blocks World
[Figure: you, an initial configuration of blocks A, B and C, and your greatest desire: the same blocks stacked into a single tower.]

BW 2: Model
We will generate classical sequential plans. One object type: blocks. A common blocks world version, with 4 operators:
▪ (pickup ?x) – takes ?x from the table
▪ (putdown ?x) – puts ?x on the table
▪ (unstack ?x ?y) – takes ?x from on top of ?y
▪ (stack ?x ?y) – puts ?x on top of ?y
Predicates used:
▪ (on ?x ?y) – block ?x is on block ?y
▪ (ontable ?x) – ?x is on the table
▪ (clear ?x) – we can place a block on top of ?x, i.e. (not (exists (?y) (on ?y ?x)))
▪ (holding ?x) – the robot is holding block ?x
▪ (handempty) – the robot is not holding any block, i.e. (not (exists (?x) (holding ?x)))
With n blocks there are 2^(n²+3n+1) states.
[Example: with A on C and B, D on the table, a plan might be unstack(A,C), putdown(A), pickup(B), stack(B,C).]
BW 3: Operator Reference

    (:action pickup
      :parameters (?x)
      :precondition (and (clear ?x) (ontable ?x) (handempty))
      :effect (and (not (ontable ?x)) (not (clear ?x))
                   (not (handempty)) (holding ?x)))

    (:action putdown
      :parameters (?x)
      :precondition (holding ?x)
      :effect (and (ontable ?x) (clear ?x) (handempty) (not (holding ?x))))

    (:action unstack
      :parameters (?top ?below)
      :precondition (and (on ?top ?below) (clear ?top) (handempty))
      :effect (and (holding ?top) (clear ?below) (not (clear ?top))
                   (not (handempty)) (not (on ?top ?below))))

    (:action stack
      :parameters (?top ?below)
      :precondition (and (holding ?top) (clear ?below))
      :effect (and (not (holding ?top)) (not (clear ?below))
                   (clear ?top) (handempty) (on ?top ?below)))

BW 4: Reachable State Space, 1 Block
We assume we know the initial state; let's see which states are reachable from there. Here we start with s0 = all blocks on the table.
With a single block A, only two states are reachable: { handempty, ontable(A), clear(A) } and, via pickup(A), { holding(A) }. Many other states "exist" but are not reachable from this starting point, for example states containing on(A,A), or both holding(A) and ontable(A).

BW 5: Reachable State Space, 2 Blocks
2048 states in total. Reachable from "all on table": 5 states, 8 transitions:
▪ A and B on the table
▪ holding A, B on the table
▪ holding B, A on the table
▪ A on B, B on the table
▪ B on A, A on the table

BW 6: Reachable State Space, 3 Blocks
524,288 states in total. Reachable from "all on table": 22 states, 42 transitions. Looking nice and symmetric…

BW 7: Reachable State Space, 4 Blocks
536,870,912 states in total. Reachable from "all on table": 125 states, 272 transitions.

BW 8: Reachable State Space, 5 Blocks
2,199,023,255,552 states in total. Reachable from "all on table": 866 states, 2090 transitions.

BW 9: Reducing State Space Size, 5 Blocks
▪ Standard PDDL model: 2^(n²+3n+1) = 2,199,023,255,552 states, 866 reachable.
▪ Omit (ontable ?x) and (clear ?x): in physically achievable states, these can be deduced from (on ?x ?y) and (holding ?x). This gives 2^(n²+n+1) = 2,147,483,648 states, 866 reachable.
▪ Also use a state variable representation: add a type block-or-nothing of size 6 (values A, B, C, D, E, nothing) and use (= (block-below ?x) ?y), where ?y can be "nothing". This gives (n+1)^n · 2^(n+1) = 497,664 states, 866 reachable.
Is planning time reduced with fewer unreachable states? That depends on the planning algorithm!
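The reachable-state counts above are easy to reproduce by brute force. The sketch below (my own illustration, not code from the course) encodes the four operators as set operations over frozensets of ground atoms and enumerates all states reachable from "all blocks on the table" with a breadth-first sweep; with five blocks it reports 866 states and 2090 transitions, matching the numbers on the slides.

    from collections import deque

    BLOCKS = ["A", "B", "C", "D", "E"]

    def initial_state(blocks):
        # All blocks on the table, hand empty.
        atoms = {("handempty",)}
        for b in blocks:
            atoms |= {("ontable", b), ("clear", b)}
        return frozenset(atoms)

    def successors(s):
        """All (action, successor) pairs for ground actions applicable in state s."""
        result = []
        for x in BLOCKS:
            if {("clear", x), ("ontable", x), ("handempty",)} <= s:
                result.append((("pickup", x),
                               s - {("clear", x), ("ontable", x), ("handempty",)}
                                 | {("holding", x)}))
            if ("holding", x) in s:
                result.append((("putdown", x),
                               s - {("holding", x)}
                                 | {("ontable", x), ("clear", x), ("handempty",)}))
            for y in BLOCKS:
                if x == y:
                    continue
                if {("on", x, y), ("clear", x), ("handempty",)} <= s:
                    result.append((("unstack", x, y),
                                   s - {("on", x, y), ("clear", x), ("handempty",)}
                                     | {("holding", x), ("clear", y)}))
                if {("holding", x), ("clear", y)} <= s:
                    result.append((("stack", x, y),
                                   s - {("holding", x), ("clear", y)}
                                     | {("on", x, y), ("clear", x), ("handempty",)}))
        return result

    def reachable(s0):
        seen, queue, edges = {s0}, deque([s0]), 0
        while queue:
            for _, s2 in successors(queue.popleft()):
                edges += 1
                if s2 not in seen:
                    seen.add(s2)
                    queue.append(s2)
        return seen, edges

    n = len(BLOCKS)
    states, edges = reachable(initial_state(BLOCKS))
    print("states in total:", 2 ** (n * n + 3 * n + 1))      # 2199023255552
    print("reachable:", len(states), "transitions:", edges)  # 866, 2090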
State Space: Not Symmetric
Example: being unable to return. After crack(egg5) we can never return to the leftmost part of the state space.

State Space: Disconnected
Example: disconnected parts of the state space. In one part, I don't have a helicopter; in the other, I do. With no action for buying a helicopter and no action for losing it, you will stay in the partition where you started!

The Planning Problem
We have an initial (current) state and a set of goal states. We simply need a path in a graph – not even a shortest path!

Forward State Space Search (1)
Paths are found through graph search, and search is the basis for many (most?) forms of planning. Many search methods already exist – can we simply apply them?
We'll begin with the "most natural" idea: start in the initial state, apply actions step by step, and see where you end up. Many names, one concept: forward search, forward-chaining search, forward state space search, progression, …

Forward State Space Search (2)
There are many reachable states, so we must generate them as we go.
▪ Forward planning (forward-chaining, progression) begins in the initial state: the initial search node corresponds directly to the initial state.
▪ Edges correspond to actions; child nodes correspond to result states.
▪ The successor function / branching rule: to expand a state s, generate all states that result from applying an action that is applicable in s.
▪ After the first expansion we have multiple unexpanded nodes, and a search strategy chooses which one to expand next.

Forward State Space Search (3): Pruning
▪ If we reach a node with the same state as an existing node, we can prune it (discard the node, backtrack).
▪ If preconditions and goals are positive: reaching a node whose state is a subset of another node's facts also allows pruning.

Forward State Space Search (4)
Blocks world example: generate the initial state = the initial node, from the initial state description in the problem. [Figure: blocks A, C, B, D in their initial configuration.]

Forward State Space Search (5)
Incremental expansion:
▪ Choose a node. The first time this is the initial state; later it depends on the search strategy used.
▪ Expand all possible successors: "What actions are applicable in the current state, and where will they take me?" This generates new states by applying effects.
▪ Repeat until a goal node is found!
Notice that the blocks world lacks dead ends; in fact, it is even symmetric. This is not true for all domains!

Forward State Space Search (6)
General forward state space search algorithm:

    forward-search(operators, s0, g) {
      open ← { <s0, ε> }
      while (open ≠ ∅) {
        use a strategy to select and remove <s, path> from open    // which strategies are available and useful?
        if goal g is satisfied in state s then return path
        foreach a ∈ { ground instances of operators applicable in state s } {    // expand the node
          s' ← apply(a, s)          // dynamically generate a new state
          path' ← append(path, a)
          add <s', path'> to open
        }
      }
      return failure
    }

▪ The algorithm is always sound; completeness depends on the strategy.
▪ To simplify extracting a plan, a search node includes the plan used to reach its state. Technically we search the space of <state, path> pairs, but this is still generally called state space search.

Forward State Space Search: Dijkstra
First search strategy: Dijkstra's algorithm.
▪ It matches the given forward search "template": "use a strategy to select and remove <s, path> from open" selects from open a node n with minimal g(n), the cost of reaching n from the starting point.
▪ It is an efficient graph search algorithm: O(|E| + |V| log |V|), where |E| is the number of edges and |V| the number of nodes.
▪ It is optimal: it returns minimum-cost plans.

Dijkstra: Example
Running Dijkstra, assuming all actions are equally expensive. [Figure: each of the yellow/green/red regions corresponds to multiple search iterations.]

Dijkstra's Algorithm
No problems? Dijkstra explores all states that can be reached more cheaply than the cheapest goal node. [Figure: nodes at costs 1–8 around the initial state, with the goal nodes at cost 8.] Usually we have many more "dimensions", and many more nodes within a given distance!

Reachable State Space: BW Sizes 0–10

    Blocks  States                                      Reachable from "all on table"  Transitions (edges) in reachable part
    0       2                                           1                              0
    1       32                                          2                              2
    2       2048                                        5                              8
    3       524288                                      22                             42
    4       536870912                                   125                            272
    5       2199023255552                               866                            2090
    6       36028797018963968                           7057                           18552
    7       2361183241434822606848                      65990                          186578
    8       618970019642690137449562112                 695417                         2094752
    9       649037107316853453566312041152512           8145730                        25951122
    10      2722258935367507707706996859454145691648    …                              …

Even the number of reachable states grows too quickly!
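For concreteness, here is one way the forward-search template above can be rendered in Python (my sketch, not the course's reference code). Ground actions are tuples (name, precondition, add effects, delete effects), all sets of ground atoms, and the strategy is a cost-ordered priority queue, which under the assumption of unit action costs behaves like Dijkstra's algorithm (uniform-cost search).

    import heapq

    def applicable(action, state):
        _, pre, _, _ = action
        return pre <= state

    def apply(action, state):
        _, _, add, delete = action
        return frozenset((state - delete) | add)

    def forward_search(actions, s0, goal):
        """Uniform-cost forward search; returns a plan (list of action names) or None."""
        s0 = frozenset(s0)
        open_list = [(0, 0, s0, [])]       # (cost, tie-breaker, state, path)
        best_cost = {s0: 0}
        counter = 1
        while open_list:
            cost, _, s, path = heapq.heappop(open_list)
            if goal <= s:                                  # goal satisfied in s
                return path
            for a in actions:                              # expand the node
                if applicable(a, s):
                    s2 = apply(a, s)
                    if cost + 1 < best_cost.get(s2, float("inf")):   # duplicate pruning
                        best_cost[s2] = cost + 1
                        heapq.heappush(open_list, (cost + 1, counter, s2, path + [a[0]]))
                        counter += 1
        return None

    # Tiny example: blocks A and B on the table, goal on(A,B).
    actions = [
        ("pickup(A)", {"clear(A)", "ontable(A)", "handempty"},
                      {"holding(A)"}, {"clear(A)", "ontable(A)", "handempty"}),
        ("stack(A,B)", {"holding(A)", "clear(B)"},
                       {"on(A,B)", "clear(A)", "handempty"}, {"holding(A)", "clear(B)"}),
    ]
    s0 = {"ontable(A)", "ontable(B)", "clear(A)", "clear(B)", "handempty"}
    print(forward_search(actions, s0, {"on(A,B)"}))   # ['pickup(A)', 'stack(A,B)']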
400 Blocks
Blocks world, 400 blocks initially on the table, goal: a 400-block tower.
▪ Given uniform action costs (the same cost for all actions), Dijkstra will always consider all plans that stack fewer than 400 blocks!
▪ Stacking 1 block: 400 · 399 plans, …
▪ Stacking 2 blocks: more than 400 · 399 · 399 · 398 plans, …
▪ In total, more than 1.63 · 10^1735 plans. (The slide spells out the full number, which fills most of the page.)

Fast Computers, Many Cores
Dijkstra is efficient in terms of the search space size, O(|E| + |V| log |V|), but the search space is exponential in the size of the input description…
▪ But computers are getting very fast! Suppose we could check 10^20 states per second – more than 10 billion states per clock cycle for today's computers, each state involving complex operations. Then it would only take 10^1735 / 10^20 = 10^1715 seconds…
▪ But we have multiple cores! The universe contains at most 10^87 particles, including electrons, … Suppose every one of them were a CPU core: only 10^1628 seconds, which is more than 10^1620 years. The universe is around 10^10 years old.

Impractical Algorithms
▪ Dijkstra's algorithm is completely impractical here: it visits all nodes with cost < cost(optimal solution).
▪ Breadth-first search would not work: it visits all nodes with length < length(optimal solution).
▪ Iterative deepening would not work: it saves space but still takes too much time.
▪ Depth-first search would normally not work: it always extends the plan if possible, always taking the first applicable action. It could work in some domains and some problems, by pure luck… Usually it either doesn't find the goal or finds very inefficient plans. [movies/4_no-rules]
The state space is fine, but we need different algorithms! But first, another direction…

Backward Search
jonas.kvarnstrom@liu.se – 2016

Forward Search
We have seen state space search in this direction: from the initial state towards the goal.
▪ If we are here, what can the next action be? Where do we end up?
▪ Known initial state, deterministic actions: every node corresponds to one well-defined state.
Backward Search
Classical planning: find a path in a finite graph. What about the other direction: backward, from the goal towards the initial state?
▪ If we want to be here, what can the previous action be? Where would we have to be before?
▪ Forward search uses applicable actions. Backward search uses relevant actions: actions that achieve some requirement and do not contradict any requirement.

Backward Search: Intuitions
We want to achieve this: [Figure: the tower A on B on C, with D on the table.]
If stack(A,B) is the last action of the plan, then before it we must achieve:
▪ what stack(A,B) needs: holding(A), clear(B)
▪ what the goal needs but stack(A,B) did not achieve: ontable(C), ontable(D), …

Backward Search: Intuitions (2)
[Figure: a further step backwards in the same style, with a subgoal box containing holding(D) together with ontable(C), ontable(D), …]

Backward and Forward: Asymmetric!
In classical planning, applying an action in a known state gives a single possible result: at(home), drive-to(shop), at(shop). But given an action and a desired result, there can be many possible starting states: at(home), at(work) and at(restaurant) all lead to at(shop) via drive-to(shop).

Backward Search: More Details
stack(A,B) is relevant: it could be the last action in a plan achieving this goal.
▪ Intuitive successor node: before stacking, we must be holding A, B must be clear, …
▪ Formal successor node: suppose there is a single goal state,
  sg = { clear(A), on(A,B), on(B,C), ontable(C), clear(D), ontable(D), handempty }.
  Applying stack(A,B) leads to sg from each of these states:
  ▪ { clear(B), on(B,C), ontable(C), clear(D), ontable(D), holding(A) }
  ▪ { clear(B), on(B,C), ontable(C), clear(D), ontable(D), holding(A), handempty }
  ▪ { clear(B), on(B,C), ontable(C), clear(D), ontable(D), holding(A), on(A,B) }
  ▪ …
Every backward search node must therefore correspond to many possible states!

Search Space
Regressing sg through stack(A,B) yields the set of states listed above; regressing through putdown(D) yields another set; and so on. A search node corresponds to a set of states from which we know how to reach the goal: { { … }, { … }, { … }, … }.
Note: the above defined exactly one goal state, sg = { clear(A), on(A,B), on(B,C), ontable(C), clear(D), ontable(D), handempty }, in which exactly these atoms are true. Even if we start with one goal state, we will eventually get several. So let's allow many goal states from the beginning, as in the state transition system: Sg = the set of goal states.
State Sets and Regression
Forward search in the state space:
▪ γ(s,a) = the state resulting from executing a in state s ("which state do I end up in?").
▪ a is applicable in s iff precond+(a) ⊆ s and s ∩ precond–(a) = ∅.
▪ Γ(s) = { γ(s,a) | a ∈ A and a is applicable in s }: all possible successor states. Successors are created only from applicable actions.
Backward search, if a node is a set of goal states g = { s1, …, sn }:
▪ γ⁻¹(g,a) = { s | a is applicable in s and γ(s,a) ∈ g }: regression through an action, "which states could I start from?".
▪ a is relevant for g iff it contributes to g and does not destroy any part of g (a vague definition, clarified below).
▪ Γ⁻¹(g) = { γ⁻¹(g,a) | a ∈ A and a is relevant for g }: all possible successor subgoals. Successors are created only from relevant actions.
For our previous goal state, with Sg = { sg }: γ⁻¹(Sg, stack(A,B)) is the set of states listed earlier, γ⁻¹(Sg, putdown(D)) is another set of states, and Γ⁻¹(Sg) = { { … }, { … }, { … }, … }. Storing (and regressing over) arbitrary sets of goal states is too expensive!

Classical Representation
Using the classical goal representation, the (sub)goal of a search node is an arbitrary set of ground goal literals, for example g = { in(c1,p3), …, in(c5,p3), ¬on(A,B) }.
As usual, this cannot represent arbitrary sets of states, but it works for:
▪ classical goals (already sets of ground literals),
▪ classical effects (conjunctions of literals),
▪ classical preconditions (conjunctions of literals).
What happens if we allow arbitrary (disjunctive) preconditions? (See the expressivity discussion below.)

Classical Representation and Regression
Backward search / regression asks: which states could I start from?
▪ a is relevant for g iff g ∩ effects(a) ≠ ∅ (it contributes to the goal by adding a positive or negative literal) and g+ ∩ effects–(a) = ∅ and g– ∩ effects+(a) = ∅ (it does not destroy any part of the goal).
▪ γ⁻¹(g,a) = (g – effects(a)) ∪ precond(a), representing { s | a is applicable in s and γ(s,a) satisfies g }. All goals except effects(a) must already have been true, and precond(a) must have been true so that a was applicable.
▪ Γ⁻¹(g) = { γ⁻¹(g,a) | a ∈ A and a is relevant for g }: all successor subgoals.
Efficient: each node is just one set (a partial state), not a set of sets (a set of states)!

Regression Example: Classical Representation
Initial state: on(A,C), clear(A), ontable(B), clear(B), clear(D), ontable(C), ontable(D), handempty.
Goal: { on(A,B), on(B,C) }. The goal is not already achieved…
▪ stack(A,B) is relevant: it achieves on(A,B) and does not delete any goal fact. Regression gives (g – effects(a)) ∪ precond(a) = { on(B,C) } ∪ { holding(A), clear(B) } = { on(B,C), holding(A), clear(B) }, which represents many possible goal states!
▪ stack(B,C) is also relevant: it achieves on(B,C) and does not delete any goal fact, giving { on(A,B), holding(B), clear(C) }.
▪ Regressing { on(B,C), holding(A), clear(B) } further through pickup(A) gives { on(B,C), clear(B), clear(A), ontable(A), handempty }; regressing it through unstack(A,B) gives { on(B,C), on(A,B), clear(A), handempty }, which is stronger than the original goal!

Backward and Forward Search: Expressivity
How about expressivity? Suppose we have disjunctive preconditions:

    (:action travel
      :parameters (?from ?to - location)
      :precondition (and (at ?from) (or (have-car) (have-bike)))
      :effect (and (at ?to) (not (at ?from))))

How do we apply such actions backwards? Either we get more complicated, disjunctive goals to achieve, such as (and (at pos1) (or (have-car) (have-bike))), or we get additional branching: one branch with (at pos1) (have-car) and one with (at pos1) (have-bike), both leading towards (at pos2). Similarly for existential preconditions ("exists ?b such that on(?b,A)"): one branch per possible value. Some extensions are less straightforward in backward search (but still possible!).
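To make the regression formula concrete, here is a small Python sketch (my illustration, not course code). Goals are represented as a pair of sets of ground atoms (positive and negative literals), actions as (name, precondition, add effects, delete effects) with positive preconditions, and the example reproduces the stack(A,B) step from the regression example above.

    def relevant(action, goal_pos, goal_neg):
        """a is relevant for g iff it contributes to g and does not destroy any part of g."""
        _, pre, add, delete = action
        contributes = bool(goal_pos & add) or bool(goal_neg & delete)
        destroys = bool(goal_pos & delete) or bool(goal_neg & add)
        return contributes and not destroys

    def regress(action, goal_pos, goal_neg):
        """gamma^-1(g,a) = (g - effects(a)) U precond(a), for positive preconditions."""
        _, pre, add, delete = action
        return (goal_pos - add) | pre, goal_neg - delete

    stack_A_B = ("stack(A,B)",
                 {"holding(A)", "clear(B)"},              # precondition
                 {"on(A,B)", "clear(A)", "handempty"},    # positive effects
                 {"holding(A)", "clear(B)"})              # negative effects

    goal = ({"on(A,B)", "on(B,C)"}, set())
    if relevant(stack_A_B, *goal):
        print(regress(stack_A_B, *goal))
        # ({'on(B,C)', 'holding(A)', 'clear(B)'}, set()) -- matches the example above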
Backward and Forward Search: Unknowns
▪ Forward search: "I can reach this node from the initial state… but what comes next? Can I reach the goal? Efficiently?"
▪ Backward search: "I can reach the goal from this node… but what comes before? Can I reach it from s0? Efficiently?" (Example node: a subgoal such as { on(A,B), on(B,C) }.)

Backward and Forward Search: Pruning
▪ Forward search: reaching a node with the same state as an existing node allows pruning; if preconditions and goals are positive, reaching a node whose state is a subset of another node's facts also allows pruning (for example { clear(A), on(A,B), ontable(B) } versus { clear(A), on(A,B), ontable(B), handempty }).
▪ Backward search: reaching a node with the same or a stronger goal allows pruning.

Backward and Forward Search: Problems
▪ Forward search is problematic when there are many applicable actions: a high branching factor means we need guidance. Blind search knows whether an action is applicable, but not whether it will contribute to the goal.
▪ Backward search is problematic when there are many relevant actions: again, a high branching factor means we need guidance. Blind search knows whether an action contributes to the goal, but not whether its preconditions can be achieved.
Blind backward search is generally better than blind forward search, because relevance tends to provide better guidance than applicability. But this in itself is not enough to generate plans quickly!

Lifted Search (1)
Even with conjunctive preconditions, backward search can branch heavily. Consider a subgoal containing (holding A), (clear B), (clear C). To achieve (holding A) we can regress through pickup(A) or through unstack(A, ?x) for every possible ?x, giving subgoals containing (clear A) (ontable A) (handempty), (clear A) (on A B) (handempty), (clear A) (on A C) (handempty), (clear A) (on A D) (handempty), … A high branching factor, yet there is no reason to decide now which block to unstack A from. (The pickup and unstack operators are defined in the Blocks World operator reference above.)

Lifted Search (2)
The general idea in lifted search: keep some variables uninstantiated (not ground, hence "lifted"). Instead of one branch per block, use a single successor whose subgoal contains (clear A) (on A ?x) (handempty).
Next step: how do we check for actions achieving (on A ?x)? This requires unification – see the book, fig. 4.3. Lifting is applicable to other types of planning as well, and we will return to it later. But it is not enough to make unguided backward search efficient…

Where Do We Go from Here?
Forward and backward search are useless without guidance! The main options:
▪ Add general guidance mechanisms to the planner: typically heuristics, to avoid blind search and judge which actions seem promising. (Start here – for the labs!)
▪ Provide more specific information about each domain: control formulas, Hierarchical Task Networks.
▪ Use a different search space and search algorithm: Partial Order Causal Link planning, satisfiability-based planning, planning graphs.
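As a closing aside: the lifted-search slides above only point to unification in the book (fig. 4.3), so here is a minimal, simplified sketch of the kind of unification step they refer to, written for this write-up. Atoms are tuples whose first element is the predicate name, variables are strings starting with "?", and terms are flat (constants or variables), so no occurs check or substitution chaining is needed for this tiny example.

    def is_var(term):
        return isinstance(term, str) and term.startswith("?")

    def unify(a, b, subst=None):
        """Unify two atoms such as ('on', 'A', '?x') and ('on', '?top', '?below').
        Returns a substitution dict, or None if unification fails."""
        if subst is None:
            subst = {}
        if a[0] != b[0] or len(a) != len(b):         # different predicate or arity
            return None
        for s, t in zip(a[1:], b[1:]):
            s, t = subst.get(s, s), subst.get(t, t)  # apply current bindings (one level)
            if s == t:
                continue
            if is_var(s):
                subst[s] = t
            elif is_var(t):
                subst[t] = s
            else:                                    # two distinct constants
                return None
        return subst

    # Which actions can achieve the lifted goal atom (on A ?x)?
    # stack(?top, ?below) has the positive effect (on ?top ?below); unification
    # tells us how its parameters must be instantiated:
    print(unify(("on", "A", "?x"), ("on", "?top", "?below")))
    # {'?top': 'A', '?x': '?below'}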