## Session 5

### Heuristic Search

#### Artificial Intelligence


### Recap: Search for Goals

One way for an agent to decide on an action to take is to "think ahead" about the effects of its actions on the world. In order to search for an action in this way, the agent must:

• Formulate the goal. (What is desired?) For elective goals, this is not trivial. And it is something that our programs don't often worry about, because the goals are specified by the programmer!

• Formulate the problem. (What needs to be done?) It is at this step that the agent defines what a state is, what the operators are, and what the goal test is.

• Formulate the solution. (What is the answer? How do I get there?) Search through (potentially, all possible) states for a goal state.

#### Problem Formulation

Formulating a problem for search requires:

• defining what states look like,
• defining what operations are possible (relevant) and how each affects the state of the world, and
• defining how to tell when the search is done -- either success or failure.

The result of problem formulation is a problem definition, which consists of:

• the state space
• What states will be considered?
• What actions will be considered?
• What is the initial state?

• the goal test -- How does the agent know when it is done?
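The pieces above can be bundled into a small data structure. Here is a minimal sketch in Python; the class and its names are illustrative, not anything from the reading:

```python
class Problem:
    """A problem definition: initial state, operators, and a goal test.

    Names here are illustrative. Operators are modeled as functions
    that map a state to the list of states they can produce.
    """

    def __init__(self, initial_state, operators, goal_test):
        self.initial_state = initial_state   # where the search begins
        self.operators = operators           # state -> list of successor states
        self.goal_test = goal_test           # state -> True when the search is done

    def successors(self, state):
        """Apply every operator to a state, collecting all results."""
        result = []
        for op in self.operators:
            result.extend(op(state))
        return result


# A toy instance: states are integers, operators add one or double,
# and the goal is to reach 10.
toy = Problem(1,
              [lambda s: [s + 1], lambda s: [2 * s]],
              lambda s: s == 10)
```

Notice that the formulation says nothing about *how* to search; that separation is what lets one generic algorithm handle many problems.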

#### Solution Formulation

Formulating a solution for search involves applying one of the search techniques we learned about last time. Those techniques all specialize a generic systematic search algorithm by telling the algorithm which states to favor whenever it has a choice among more than one.

Systematic search can favor...

• "old" states ............... breadth-first
• "new" states ............... depth-first

The other techniques that Ginsberg discusses in Chapter 3 are all variations of the same systematic search theme that we developed in our last session.

#### Closing Thoughts

Notice that the problem formulation is subjective (there are multiple formulations for any problem that satisfy the definitions of our terms), while search is objective (an algorithm can be provided that defines the exact order in which states will be explored).

In doing search, the key is to select a good strategy for biasing the generic algorithm.

### Using Knowledge of the Problem to Improve Search

BFS and DFS are uninformed search strategies. Uninformed search algorithms explore the states of a problem without knowing anything about the environment. These algorithms work in the same way for every problem, because they do not consider any features that are specific to the problem being solved.

Knowledge is power. Intelligent agents often use knowledge of a domain to make search very efficient -- or even to make search go away! Consider how you play tic-tac-toe, or how an expert chess player plays the opening phase of the game. In each case, knowledge of the problem can take a situation that requires search for a novice and replace it with reflex or table look-up.

How can we use knowledge to improve search?

• To change the order in which we consider the nodes.
• To throw away nodes from the list of nodes to consider.

(Of course, we can also use knowledge before doing the search to formulate the problem better or select the appropriate search algorithm...)

One type of simple extension to search is to use knowledge about path cost. Each operator "costs" something to apply. If my goal is to be in Champaign, Illinois, then there is a cost associated with traveling to Cedar Rapids, and a cost associated with traveling to Waverly.

Consider our simple map from last time... Knowledge of the cost of applying an operator is problem-specific, because the operators are problem-specific. So any search algorithm that uses these costs in its deliberation is called informed.

### Uniform Cost Search

In Uniform Cost Search (UCS), we use a strategy that sorts the list in ascending order on path cost. The path cost of a node n is usually called g(n).

This algorithm is a variant of BFS, because it tends to explore states closer to the initial state first. (Applying more operators usually means more total cost.) But UCS's definition of "close" is more practical, because it counts the real cost of a sequence of actions, rather than estimating the cost by the number of actions.

Generally, newly-expanded nodes will enter the queue near the back, with older nodes nearer the front. But a path of low real cost, even if it requires more steps, can leap-frog ahead of a path with fewer operators.
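Sorting on g(n) is usually implemented with a priority queue. A sketch, using a hypothetical weighted map whose cities and costs are made up for illustration:

```python
import heapq

# A made-up road map: each entry maps a city to (neighbor, cost) pairs.
ROADS = {
    "Waverly":      [("Cedar Rapids", 90), ("Des Moines", 130)],
    "Cedar Rapids": [("Champaign", 220)],
    "Des Moines":   [("Champaign", 310)],
    "Champaign":    [],
}

def uniform_cost_search(start, goal):
    """Explore states in ascending order of path cost g(n)."""
    frontier = [(0, start, [start])]          # (g, state, path), sorted on g
    best_g = {}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                           # already reached this state more cheaply
        best_g[state] = g
        for neighbor, cost in ROADS[state]:
            heapq.heappush(frontier, (g + cost, neighbor, path + [neighbor]))
    return None
```

On this map the two-step route through Cedar Rapids (90 + 220 = 310) beats the route through Des Moines (130 + 310 = 440), even though both have the same number of steps; with more steps of low cost, a longer path could still win.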

How does UCS stack up against our evaluation criteria from last time?

• completeness: guaranteed to find a solution if one exists
• time: generally expensive for the same reasons as BFS, but often an improvement
• space: generally expensive for the same reasons as BFS, but often an improvement
• optimality: only if "cost" of applying each operator > 0

The problem with UCS is that it is too much like BFS. We aren't using much domain knowledge, only sorting on information that is already available.

### Greedy Search

Where to look next? Well, consider the task of search while the algorithm is running: It turns out that, if my goal is to be in Champaign, Illinois, then there are two costs associated with traveling to Cedar Rapids, and two costs associated with traveling to Waverly -- the cost already expended to get there, and the cost still to come.

UCS uses g(n) to guide its search. g(n) measures the cost expended in getting from the initial state to the current state, the sunk cost of taking the path. (Remember, the agent doing search is thinking ahead at this point, so it hasn't actually expended the cost, just imagined what would happen if it did.)

What about looking at what seems to be a more important measure: what will it cost to get from the current state to the goal?

Greedy search orders the unexplored states in ascending order of their expected future path cost. Usually, we don't have a way to know the exact future cost. (In most cases, the knowledge needed to compute future cost would also compute the solution!) For this reason, greedy search uses a heuristic function, called h(n), to estimate future cost.
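Structurally, greedy search is UCS with h(n) in place of g(n). A sketch, assuming a hypothetical straight-line-distance table for a made-up map (all numbers are invented for illustration):

```python
import heapq

# Made-up map and heuristic values; h estimates the distance to Champaign.
NEIGHBORS = {
    "Waverly":      ["Cedar Rapids", "Des Moines"],
    "Cedar Rapids": ["Champaign"],
    "Des Moines":   ["Champaign"],
    "Champaign":    [],
}
H = {"Waverly": 280, "Cedar Rapids": 180, "Des Moines": 260, "Champaign": 0}

def greedy_search(start, goal):
    """Explore states in ascending order of estimated future cost h(n)."""
    frontier = [(H[start], start, [start])]   # (h, state, path), sorted on h
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for neighbor in NEIGHBORS[state]:
            heapq.heappush(frontier, (H[neighbor], neighbor, path + [neighbor]))
    return None
```

Note that the heuristic table H is passed-in knowledge about the problem; the algorithm itself is still general-purpose.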

Hill climbing, a local variant of greedy search that keeps only the current state, is usually a very effective search technique, but it falls victim to several kinds of problems: foothills (local maxima), plateaus, and ridges. How can we solve them? (Randomize direction, step size... or jump randomly!)
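A minimal sketch of hill climbing with restarts from several starting points, one of the randomization fixes. The 1-D objective below is made up: it has a foothill (local maximum) at x = -5 and the true peak at x = 7:

```python
def score(x):
    # Made-up objective: a foothill at x = -5 (height 10)
    # and the true peak at x = 7 (height 50).
    return max(-(x + 5) ** 2 + 10, -(x - 7) ** 2 + 50)

def hill_climb(start, step=1):
    """Move to the best neighbor until no neighbor improves."""
    current = start
    while True:
        best = max([current - step, current + step], key=score)
        if score(best) <= score(current):
            return current          # stuck on a peak -- possibly just a foothill
        current = best

def hill_climb_restarts(starts):
    """Climb from several starting points and keep the best summit found."""
    return max((hill_climb(s) for s in starts), key=score)
```

A climber started at x = -80 gets stuck on the foothill at -5, but restarting from other points finds the true peak at 7, which is why restarts help.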

Greedy search doesn't look like an uninformed method any more, since it uses information that is very problem-specific. But that information can be provided as an argument to the search algorithm, so the general-purpose algorithm still works.

Greedy approaches are often quite efficient, but they generally are not optimal:

• h(n) is an estimate, so it can be wrong.
• Greedy search considers only future cost, not past cost.

How does greedy search stack up against our evaluation criteria from last time?

• completeness: no, for the same reasons as DFS, but usually a big improvement
• time: quite efficient, for the same reasons as DFS
• space: quite efficient, for the same reasons as DFS
• optimality: no

### Wrap Up

• Reading -- Read Chapter 5 in Ginsberg. (I know, I know. :-)

• Homework -- None.

Eugene Wallingford ==== wallingf@cs.uni.edu ==== September 20, 2001