Session 22

An Introduction to Planning


810:161

Artificial Intelligence




The Planning Problem

The planning problem is this:

Given an initial state description I, a goal state description G, and a set of operators O, produce a plan P such that executing P in a state described by I results in a state described by G.

Planning is, in many ways, a reasoning task like all others. An agent needs to find a solution to a problem, and it can do so by doing search or by drawing inferences. But in other ways it is a special kind of reasoning task -- it presents an agent with particular difficulties and particular possibilities. For these reasons, among others, many AI researchers consider planning an "application" of AI, and attempt to solve it using just the basic techniques we have studied thus far.

But such a view is somewhat simplistic. Even if we can implement a planning agent using search or rules or logic (and, of course, we can), we can also discuss the issues at a higher level -- at a level in which the vocabulary is one of planning, and not search or rules or logic.

What kinds of knowledge might a planner wish to have? What will the agent do with the knowledge? To use its knowledge in this way, will the agent want to represent it in a particular way? This connection between knowledge representation and knowledge use is the same one we saw when we studied specialized logical representations (Horn clause form, disjunctive normal form) used by particular kinds of inference engines (modus ponens, resolution). We should be discussing a task and methods for solving it.

The idea of such a knowledge level, or task level, at which to characterize intelligent architectures is relatively recent in AI terms: 1980 or so. But it should be a major component of any discussion of intelligent agents.

Back to planning...

Can we view this sort of problem as a logical inference problem? Sure, provided we extend the ontology of our logic to include sequences of situations and temporal axioms to reason over them. (Collectively, these are often called a situation calculus.)
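As a hedged illustration of the extended ontology (blocks-world flavored, not taken from the notes), an effect axiom in such a calculus relates the situation before an action to the situation produced by a Result function:

```latex
% Hypothetical effect axiom: stacking x on y in situation s yields a
% successor situation in which On(x, y) holds.
\forall x\, \forall y\, \forall s\;
  [\mathit{Clear}(x, s) \land \mathit{Clear}(y, s)
   \rightarrow \mathit{On}(x, y, \mathit{Result}(\mathit{stack}(x, y), s))]
```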

What are the advantages of such an approach?

But what prices do we pay?

Can we instead view planning as a more general state-space search problem? Sure: I is the initial state description, G is a goal state or a goal-test function, and O is the "move" generator.
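The search formulation can be sketched generically; the following is an assumed breadth-first version, with a toy stand-in domain rather than real planning operators.

```python
from collections import deque

def plan_by_search(initial, goal_test, moves):
    """Breadth-first search over state descriptions.
    `moves(state)` plays the role of O: it yields
    (operator name, successor state) pairs."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for name, succ in moves(state):
            if succ not in seen:
                seen.add(succ)
                frontier.append((succ, plan + [name]))
    return None

# Toy stand-in domain (hypothetical): states are integers, and the
# "move" generator offers an increment operator and a doubling operator.
plan = plan_by_search(1, lambda s: s == 6,
                      lambda s: [("inc", s + 1), ("dbl", s * 2)])
```

Here the planner is just uninformed search; the price noted below is that nothing in this loop exploits the structure of states or goals.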

What are the advantages of such an approach?

But again we pay a heavy price:

In general, planning presents some unique problems:

Our approach will be to blend these two approaches to (we hope) maximum benefit:

The three key ideas that underlie most planning research are:

From all of this, the goals of planning research become to provide two "specializations" of the state-space search model that facilitate efficient planning:


Goal-Stack Planning

The earliest type of planner did not follow the three key ideas of planning listed above. The program STRIPS represented states as logical sentences, but it did not allow steps to be added at arbitrary positions in the plan. (Indeed, STRIPS interleaved planning and execution, and so was rigidly ordered.) STRIPS did decompose problems, but it did not treat the subproblems completely independently, since the order in which subgoals were planned was the same as the order in which they were generated.

The STRIPS representation:


The Goal-Stack Planning Algorithm

We are currently considering STRIPS, the earliest type of AI planner. It did not follow all three of the key ideas of planning listed above, but it did represent states as logical sentences and decompose problems.

Input:

Output: success or failure

Local state:

The algorithm:

  1. world-state = I

  2. Push achieve(g1, ..., gn) onto problem-stack.

  3. Loop:

    1. If problem-stack is empty, return SUCCESS.

    2. N = pop(problem-stack)

    3. If N = achieve(g1, ..., gn) then

      1. If N is true in the world-state, then return to the top of the loop (Step 3).
      2. If N has previously been attempted, then return FAILURE.
      3. Mark N as attempted.
      4. Push N back on problem-stack.
      5. Order the gi.
      6. Push achieve(gi) on problem-stack in order.

    4. Else if N = achieve(g) then

      1. If N is true in the world-state, then return to the top of the loop (Step 3).
      2. Choose an operator o from O that possibly adds g. If none exists, return FAILURE.
      3. Choose a set of variable bindings that grounds o and makes o add g.
      4. Push apply(o) onto problem-stack.
      5. Push achieve(precondition(o)) onto the problem-stack.

    5. Else if N = apply(o) then

      1. world-state = execute(o, world-state)


Using STRIPS in the Blocks World

Operators from the slides

Demonstrate: Stack A on B from Initial State of A on table, B on table


Wrap Up


Eugene Wallingford ==== wallingf@cs.uni.edu ==== November 27, 2001