## Session 4

### Satisfying Goals through Uninformed Search

#### Artificial Intelligence


### Recap: Searching for Answers

Intelligent agents sometimes act by reflex, and sometimes they act by looking up the right answer. But these behaviors work only in limited environments -- environments that are simple, static, or both. How can an agent choose an action in a complex environment, trying to achieve a goal that it knows little about? One approach is to search for an answer. An agent can consider what effect each possible action would have on the world and which action -- or sequence of actions -- leads to the goal.

We might expect an intelligent agent to consider its options in a systematic way, in a way that makes some sense, either logically or based on its knowledge of the world. For now, let's consider agents that act logically, but with relatively little knowledge about the goals they are trying to achieve.

This algorithm describes a systematic way to search for an action, without yet committing to the system it will use:

• INPUT:
• the starting situation (the start state)
• a goal to achieve
• a search strategy

• OUTPUT:
• a sequence of actions (called operators)
that transforms the problem's start state into a goal state
OR
• an announcement that no such sequence can be found

• STEPS
1. Initialize the set of unconsidered states to contain the start state.
2. Repeat the following:
• If the set of unconsidered states is empty, then announce failure.
• Choose and remove a state from the set, based on the search strategy.
• If the chosen state is a goal state, then return the sequence of actions that leads to it.
• Otherwise, add to the set of unconsidered states all of the states that can be reached from the chosen state by doing one action.

In a nutshell, this algorithm says to keep considering possible states of the world until you find one that matches your goal. When you find one that does, do the actions that will take you to the desired state. How will you know what those actions are? Each time you look at a state and see that it isn't your goal, add all the states that you can get to from that state to the set of states to be considered.
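The algorithm above can be sketched in Python. This is a minimal sketch, not a definitive implementation; the names `choose`, `successors`, and `is_goal` are my own. The strategy is passed in as a function that picks and removes one entry from the set of unconsidered states (here, a list):

```python
def search(start, is_goal, successors, choose):
    """Systematic search with a pluggable strategy.

    Each frontier entry pairs a state with the sequence of actions
    (operators) that reached it from the start state.
    """
    frontier = [(start, [])]              # the set of unconsidered states
    while frontier:
        state, path = choose(frontier)    # the strategy picks and removes one
        if is_goal(state):
            return path                   # sequence of actions to the goal
        for action, next_state in successors(state):
            frontier.append((next_state, path + [action]))
    return None                           # no such sequence can be found
```

Different search strategies then amount to different `choose` functions, with the rest of the algorithm left untouched.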

### Examples of Systematic Search

Consider how a program might search a map to find a route to where it wants to be. Next week, when I go to my conference, I must travel from Cedar Falls to the Champaign, Illinois, area. I have decided to drive, so I have to negotiate this simplified map:

In order to apply our algorithm, we need to define this problem in the vocabulary of search:

What is the initial state?
Cedar Falls

What is the goal state?
Champaign

What is the set of operators?
The set of roads available to me: { Cedar Falls-Quad Cities, Quad Cities-Cedar Falls, Cedar Falls-Dubuque, Dubuque-Cedar Falls, Dubuque-Quad Cities, Quad Cities-Dubuque, Dubuque-Chicago, Chicago-Dubuque, Dubuque-Champaign, Champaign-Dubuque, Quad Cities-Champaign, Champaign-Quad Cities, Chicago-Champaign, Champaign-Chicago }
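As a sketch, the operator set above might be encoded as an adjacency list; the city names come from the operator set, but the data structure itself is my own choice. Every road is drivable in both directions, which is why each pair appeared twice in the operator list:

```python
# Hypothetical encoding of the simplified map as an adjacency list.
# Each road is bidirectional, so each connection is listed from both ends.
ROADS = {
    'Cedar Falls': ['Quad Cities', 'Dubuque'],
    'Quad Cities': ['Cedar Falls', 'Dubuque', 'Champaign'],
    'Dubuque':     ['Cedar Falls', 'Quad Cities', 'Chicago', 'Champaign'],
    'Chicago':     ['Dubuque', 'Champaign'],
    'Champaign':   ['Dubuque', 'Quad Cities', 'Chicago'],
}
```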

When we apply our search algorithm, we have to provide a "search strategy", which tells us how to choose a state from the set of unexplored states. This strategy acts as a bias toward a certain kind of state, and it is the primary way that we can control the general direction of the algorithm.

We can use this algorithm to simulate the sort of random search we talked about last time. We could give the algorithm a strategy with no bias on the order we consider the states: choose a state randomly from the set. How successful is this algorithm likely to be?
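In the sketch of the algorithm earlier, a no-bias strategy might look like this hypothetical helper, assuming the set of unconsidered states is stored as a list:

```python
import random

def choose_randomly(frontier):
    """A strategy with no bias: pick and remove any unconsidered state,
    chosen uniformly at random."""
    return frontier.pop(random.randrange(len(frontier)))
```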

A simple bias that we could give the algorithm is a bias toward "old" states, those states added to the set earliest. Using this approach, which path will the algorithm choose to Champaign? When do you imagine that this sort of bias is a good one to have?

To implement this in a program, we could represent the set of unexplored states as a list, sorted in ascending order on the number of steps it takes to reach each state.
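Rather than re-sorting the list, the "old states first" bias falls out naturally from a first-in, first-out queue. This is a sketch using our route-finding example; the function name and the path-of-cities representation are my own:

```python
from collections import deque

def bfs_route(roads, start, goal):
    """Breadth-first search: the frontier is a FIFO queue, so the states
    added earliest ("old" states) are considered first."""
    frontier = deque([[start]])        # each entry is a path of cities
    while frontier:
        path = frontier.popleft()      # take the oldest path
        if path[-1] == goal:
            return path
        for city in roads[path[-1]]:
            if city not in path:       # don't drive in circles
                frontier.append(path + [city])
    return None
```

On the map above, this finds a shortest route in terms of the number of roads driven.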

Another simple bias that we could give the algorithm is a bias toward "new" states, those states added to the set last. Using this approach, which path will the algorithm choose to Champaign? When do you imagine that this sort of bias is a good one to have?

To implement this in a program, we could represent the set of unexplored states as a list, sorted in descending order on the number of steps it takes to reach each state.
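Similarly, the "new states first" bias falls out of treating the list as a last-in, first-out stack. Again, a sketch with names of my own choosing:

```python
def dfs_route(roads, start, goal):
    """Depth-first search: the frontier is a LIFO stack, so the states
    added most recently ("new" states) are considered first."""
    frontier = [[start]]               # each entry is a path of cities
    while frontier:
        path = frontier.pop()          # take the newest path
        if path[-1] == goal:
            return path
        for city in roads[path[-1]]:
            if city not in path:       # avoid revisiting cities on this path
                frontier.append(path + [city])
    return None
```

Note that the only difference from the breadth-first version is `pop()` versus `popleft()`: the choice of data structure *is* the search strategy.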

### Evaluating Search Strategies

Now that we have at least two reasonable search strategies available to us, we have to be able to figure out what their strengths and weaknesses are. That way, when faced with a particular problem, we (or our program!!) will be able to make a good choice of search strategy to use.

Four important criteria that can be used to evaluate a search strategy are:

• completeness: does it guarantee to find a solution if one exists?
• time: how long does it take?
• space: how much memory is required?
• optimality: does it find the "best" solution, if there are many?

The first strategy we considered above, a bias toward old states, is usually called breadth-first search, because it will consider all states at one distance from the initial state before it considers any states farther away.

How does BFS stack up against our evaluation criteria?

• completeness: yes
• time: generally expensive
• space: generally expensive
• optimality: only if all operators "cost" the same

The expensiveness of BFS is a product of the problem's branching factor. If there aren't many operators available in a given state, then the cost of BFS can be low. But... for most problems in the world, an agent has many operators available at any time. And, even worse, finding a goal usually requires a long sequence of actions, which means that the agent must search deep into the "tree" of possibilities.
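The effect of the branching factor can be made concrete with a little arithmetic. With branching factor b and a solution at depth d, BFS may generate every state down to depth d, which is 1 + b + b² + ... + b^d states, on the order of b^d:

```python
def bfs_nodes(b, d):
    """Worst-case number of states generated by breadth-first search
    with branching factor b, searching down to depth d:
    1 + b + b**2 + ... + b**d."""
    return sum(b ** i for i in range(d + 1))
```

Even a modest branching factor blows up quickly: with b = 10, a six-step solution can require examining over a million states.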

A strategy with a bias toward new states is called depth-first, because it will consider all solutions along one path of states before it considers any along an alternative path.

How does depth-first search (DFS) measure up?

• completeness: only if the search space is finite
• time: modest
• space: modest
• optimality: no!

The low space cost of DFS results from needing to store only the path leading to the current state, plus the immediate siblings of each state on that path. The low time cost of DFS results from the fact that, in most problems, solutions require a long sequence of actions--and DFS considers long sequences of actions sooner than BFS does!

### Search in Context

Let's step back for a second from the details of search to remind ourselves of what we've done. In order to plan ahead to find a solution, an intelligent agent goes through three distinct phases:

• Formulate the goal. (What is desired?) For elective goals, this is not trivial. And it is something that our programs don't often worry about, because the goals are specified by the programmer!

• Formulate the problem. (What needs to be done?) If an agent plans to search for a path to its goal, then this is the step where it must define the problem in the vocabulary of search: how to represent a state, what the initial state is, what the operators are, and what the goal test is.

The result of "problem formulation" is a concrete definition of the problem:

• the state space:
• What states will be considered?
• What actions will be considered?
• What is the initial state?
• the goal test:
How does the agent know when it is done?
• (later) a path cost function:
How will we compare potential solutions?

• Formulate the solution. (What is the answer? How do I get there?) If an agent plans to search for a path to its goal, then this is the step where it applies its search algorithm to search through (potentially, all possible) states for a path to a goal state. At this step, it must choose a search bias.
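One hypothetical way to package the result of problem formulation is as a small data structure holding exactly the pieces the search algorithm needs, independent of any search strategy. The class and field names here are my own, using the route-finding problem from earlier:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Problem:
    """A problem formulation: what search needs to know, and nothing more."""
    initial_state: str
    operators: Dict[str, List[str]]    # state -> states reachable in one action
    goal_test: Callable[[str], bool]   # how the agent knows when it is done
    # (later) a path cost function; for now, every action costs the same
    step_cost: Callable[[str, str], float] = lambda a, b: 1.0

# The route-finding problem, formulated in these terms:
route_problem = Problem(
    initial_state='Cedar Falls',
    operators={'Cedar Falls': ['Quad Cities', 'Dubuque'],
               'Quad Cities': ['Cedar Falls', 'Dubuque', 'Champaign'],
               'Dubuque':     ['Cedar Falls', 'Quad Cities', 'Chicago', 'Champaign'],
               'Chicago':     ['Dubuque', 'Champaign'],
               'Champaign':   ['Dubuque', 'Quad Cities', 'Chicago']},
    goal_test=lambda s: s == 'Champaign',
)
```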

Why does problem formulation follow goal formulation? Why does problem solution follow problem formulation?

We will not do much with goal formulation at this point. Why? In general, the problem is too hard. In narrow domains, we understand enough about goal formulation to incorporate it into our discussion. As you may imagine, much of what we consider intelligent behavior lies in formulating the 'right' goals!

### Formulating the Problem

Doing problem formulation for simple problems can't really capture the complexity of formulating problems in complex environments. Artificial tasks are fun, and a good way to start, but:

• In some environments, the agent can determine which state it is in, and it knows exactly the effect of each action. These are the easy problems.

• In some environments, the agent either cannot tell which state it is in or does not know exactly the effect of each action. These are harder!! (Think back to when you were learning to write programs...)

• In some environments, the agent is ignorant both of its initial state and of the effects of its actions. (For example, consider an infant in almost any environment.) This is called the exploration problem. The agent must sense during execution and try to learn the effects of its actions and a way to characterize the state of the world. The reasoning component of the agent cannot think ahead with enough precision to map out one sequence of actions to solve the problem.

This leads to a sort of interleaving of deliberation and action that we often do when, say, solving a crossword puzzle. The puzzle doer needs to begin acting before finishing its reasoning, in order to use the information gained from acting in later reasoning.

### Formulating a Solution

The last step is where the agent actually searches for something. Search involves using an algorithm like the one we've been considering to explore the set of possible states, in search of a sequence of operations that leads from the initial state to a goal state.

Because search employs an algorithm, formulating a solution is objective: the algorithm defines the exact order in which states will be explored. This is different from problem formulation, which is subjective: there are multiple formulations for any problem that satisfy the definitions of our terms.

### Uninformed Search

Random, breadth-first, and depth-first search strategies are considered uninformed because they do not use any knowledge about the problem being solved. The algorithm we've been considering works as well for finding a route between two cities as it does for solving a crossword puzzle, as well for scheduling experiments on the Hubble Space Telescope as it does for proving that a logical statement is true.

This generality is both a source of strength and of weakness:

• An intelligent agent should be able to solve problems that it hasn't seen before, by using a general intelligence about how to solve problems. So uninformed search is an essential tool for an intelligent agent.

• But algorithms that work everywhere don't tend to work very well anywhere, which means that they will be sub-optimal in some important way for almost every problem. We've said that an intelligent agent is willing to settle for satisfactory answers in lieu of optimal ones, but it is more intelligent to do as well as you can than to settle for any old solution. And we can do better.

How can an agent use knowledge of the world to do a better job solving problems?

### Wrap Up

• Reading -- Chapter 4 in Ginsberg.

• Homework -- None for now. But you do have a deadline for your research paper coming up...

Eugene Wallingford ==== wallingf@cs.uni.edu ==== September 6, 2001