Session 29

Greedy Algorithms and Dynamic Programming


CS 3530
Design and Analysis of Algorithms


Let's Solve... Gold Collection

You are taken to a room with an n x m square-foot floor. Gold coins are spread throughout the room, with at most one coin per square foot. For example:

[figure: a floor with gold coins]

You want those coins! Serendipitously, you have a set of robots that can pick up a gold coin in any cell they visit. Unfortunately, the robots are limited: each one starts in the upper left corner of the room and can move only to the square on its right or the square below it.

Design an algorithm to determine the smallest number of robots you will need to retrieve all of the coins. Your algorithm should take as input the dimensions of the room and the coordinates of the coins.

For our example layout above, the inputs would be:

    7 8     # a 7x8 room
    2 6     #
    3 2     #
    3 8     #
    4 3     # a list of x y coordinates
    4 5     # for the ten gold coins,
    5 2     #   with 1 1 being
    5 5     #   the upper left corner
    6 8     #
    7 5     #
    7 6     #
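
( A minimal sketch in Python of reading this input, assuming the '#' annotations shown above really appear in the file and should be stripped as comments; the function name is mine: )

    def read_input(text):
        """Parse the input above: room size on the first line, then
        one coordinate pair per coin.  '#' starts a comment."""
        rows = [line.split('#')[0].split() for line in text.splitlines()]
        nums = [tuple(map(int, r)) for r in rows if r]
        (n, m), coins = nums[0], nums[1:]
        return n, m, coins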

Quick Question: What is the correct output for this example board?      (3).

It may be difficult to code your algorithm in full detail at this point. It's okay if you can only describe it at a high level. Just be sure that someone else in the class can understand your idea.

Hint: Try your algorithm on some simple cases first. What are some problematic layouts, given our robots' limits? How well does your algorithm handle them? Look for disconfirmatory evidence -- evidence that shows your idea doesn't work!



Gold Collection: Debrief

Quick Question: What sort of real-world problems does this puzzle model?

( A current example: fixing the Hubble Space Telescope. )

What did you discover? Does generalizing the problem to allow multiple coins per cell affect your solution?

How might we represent this problem?

Are there any regularities in the problem that we can use to find an optimal solution? What design techniques, such as dynamic programming, can we use to solve the problem?

One intuition: greed. We could try to maximize the number of coins collected by each robot. Let the first robot grab as many coins as possible, then let a second do the same, and so on, until all of the coins are taken.

Unfortunately, greed does not work here. An especially strong harvest by one robot can divide the remaining gold-containing cells into disjoint regions. Consider this example:

[figure: a problematic floor with gold coins]

A maximal first robot will garner seven coins, from row 3 and then column 8. The remaining four coins require two robots, for a total of three. However, two robots are enough in this case. (Do you see it?)

Can we salvage the greedy approach by handling this as a special case? Maybe, but are there other special cases to consider? Maybe greed doesn't pay here...

Let's try a new idea, based on the second example. Have each robot peel off the leftmost strip of coin-containing cells, like peeling the layers of an onion. ((Onions!))

In the second example, the optimal solution is to grab the first three coins from row 3, the two coins from row 5, and (optionally) the coin in row 7. Notice the decision point at (3,4). Both simplicity ("Don't change directions with gold ahead!") and greed ("Grab as many coins as you can!") would send the robot farther down row 3, but both result in a bad split of the remaining coins.

This new approach is optimal. How can we justify the claim? The underlying regularity involves disjoint cells: if neither of two coins lies below and to the right of the other, then no single right/down robot can collect both. Every set of pairwise disjoint coins thus forces one robot per coin, and one can check that the peeling strategy never uses more robots than the largest such set.
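
Here is a minimal Python sketch based on that regularity. The framing in terms of Dilworth's theorem is mine, not part of the exercise: after sorting the coins by row, a set of pairwise disjoint coins is exactly a strictly decreasing run of columns, so the answer is the length of the longest such run.

    from bisect import bisect_left

    def min_robots(coins):
        """Fewest right/down robots needed to collect every coin.

        Two coins can share a robot iff one is (weakly) below and to
        the right of the other.  The minimum number of robot paths
        covering all coins therefore equals the size of the largest
        set of pairwise disjoint coins (Dilworth's theorem), found
        here as the longest strictly decreasing subsequence of
        columns after sorting by row."""
        cols = [c for _, c in sorted(coins)]
        # Longest strictly decreasing subsequence of cols, computed
        # as a longest strictly increasing subsequence of negations.
        tails = []    # tails[k] = smallest tail of a run of length k+1
        for v in (-c for c in cols):
            i = bisect_left(tails, v)
            if i == len(tails):
                tails.append(v)
            else:
                tails[i] = v
        return len(tails)

    coins = [(2, 6), (3, 2), (3, 8), (4, 3), (4, 5),
             (5, 2), (5, 5), (6, 8), (7, 5), (7, 6)]
    print(min_robots(coins))    # -> 3, as in the quick question above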

Greed doesn't always pay. The problem is that, in general, there is no good way to know in advance whether it will. So we shouldn't settle for our first solution if we prefer better results than it provides.

[photo: Yogi Berra, a man of wisdom]

Don't let your intuitions lead you astray. Don't settle for the first idea you come up with. That doesn't mean that you cannot be creative... Make analogies. Operate as if they are correct. Tinker to improve. But work with lots of examples. And, most important, try to prove your theory wrong. Algorithm design works a lot like science.

I'm having deja vu all over again.



An Example Where Greed Pays

Let's consider another classic problem of computer science:

Given a weighted connected graph and one of the vertices in the graph, called the source, find the shortest path from the source to every other vertex.

For example, what is the shortest path from node a to every other node in this graph?

[figure: an undirected, weighted graph]

This is called the shortest path problem. Now, let's look at a classic algorithm that uses both a greedy approach and a dynamic programming approach to solve it.

Note. This problem can be generalized to the task of finding the shortest path between all pairs of nodes in the graph. For the general problem, dynamic programming also works nicely. See the Floyd-Warshall Algorithm. (Yes, that Floyd.)



Dijkstra's Algorithm

One of the classic algorithms of computer science is Dijkstra's Algorithm for finding the shortest path from any node in a graph to all others. It is famous for its place in time, but it is also important for the ideas it contains. It is greedy (using a priority queue), and it uses dynamic programming. The combination of the two is perhaps the most important insight in this algorithm.

Dijkstra's Algorithm works only for graphs in which the weights are non-negative. Fortunately, many real-world problems are modeled with non-negative weights, or can be transformed into equivalent problems using only non-negative weights.

This algorithm exploits a simple invariant: After the ith iteration, it has identified the i closest neighbors of the source.

Our example.   We determine the closest neighbor to a in our example graph by inspection. It is b, at a distance of 3. We have just jumped to a solution for iteration 1.

On the i+1-th iteration, the only candidates for the i+1-th closest neighbor are the nodes linked to the source itself or to one of the i closest neighbors.

Our example.   For iteration 2, we need to consider only c and d (which are neighbors to a) and e and f (which are neighbors to b).

The i+1-th closest neighbor is the node that we can reach in the shortest total distance from the source to an existing neighbor and then from the existing neighbor to the node. This is where dynamic programming can help us.

For each node in the neighborhood, we will keep track of two pieces of information: d, the shortest distance found so far from the source to the node, and the preceding node on the path that gives that distance. We sort the list on distance, breaking ties arbitrarily.

Our example.   We record this information for each node in the "neighborhood":
    node  distance   link
      b       3        a
    ---------------------
      d       4        a
      c       5        a
      e       6        b      (3 + 3)
      f       8        b      (3 + 5)

Given this information, we find the next nearest neighbor simply by choosing the node with the smallest d among the nodes in the neighborhood. This is where a priority queue comes in handy, perhaps in the form of a heap; it makes finding and removing an extreme value easy!

Once we find the i+1-th closest neighbor, we must update the data for all the remaining nodes, in case we now know that they are closer to the source than we were able to know before.

Our example.   We select d as the next nearest neighbor and update the information for the remaining nodes:
    node  distance   link
      b       3        a
      d       4        a
    ---------------------
      c       5        a      no change
      e       5        d      change -- (4 + 1)
      f       8        b

Dijkstra's algorithm continues doing this, selecting one new nearest neighbor per iteration, until all nodes have been processed.

Our example.   We need three more iterations to finish our graph. First we choose c and update the remaining nodes:
    node  distance   link
      b       3        a
      d       4        a
      c       5        a
    ---------------------
      e       5        d      no change
      f       8        b      no change

(We could just as well have chosen e here.) Then:

    node  distance   link
      b       3        a
      d       4        a
      c       5        a
      e       5        d
    ---------------------
      f       7        e      change -- (4 + 1 + 2)

And finally:

    node  distance   link
      b       3        a
      d       4        a
      c       5        a
      e       5        d
      f       7        e
    ---------------------

Quick Question: Why must weights be non-negative in order for Dijkstra's algorithm to work?

Note that this is a simpler form of dynamic programming than we used in our two previous examples. Here, we have a heap or priority queue of node data, not a two-dimensional table. The key idea is the same, though... We record solutions to solved subproblems and use them to solve the remaining subproblems.
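
To make the steps above concrete, here is a minimal Python sketch of the algorithm, with heapq as the priority queue. The graph contains only the edges that appear in the tables above; the original figure may include others.

    import heapq

    def dijkstra(graph, source):
        """Single-source shortest paths for non-negative weights.
        graph maps each node to a dict {neighbor: weight}.  Returns
        (dist, link): each node's distance from the source and its
        predecessor on a shortest path."""
        dist = {source: 0}
        link = {source: None}
        heap = [(0, source)]    # the priority queue of (distance, node)
        done = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u in done:       # a stale entry; u was already settled
                continue
            done.add(u)         # u is the next nearest neighbor
            for v, w in graph[u].items():
                if v not in done and d + w < dist.get(v, float('inf')):
                    dist[v] = d + w    # record the improved subproblem solution
                    link[v] = u
                    heapq.heappush(heap, (d + w, v))
        return dist, link

    graph = {
        'a': {'b': 3, 'c': 5, 'd': 4},
        'b': {'a': 3, 'e': 3, 'f': 5},
        'c': {'a': 5},
        'd': {'a': 4, 'e': 1},
        'e': {'b': 3, 'd': 1, 'f': 2},
        'f': {'b': 5, 'e': 2},
    }
    print(dijkstra(graph, 'a'))
    # dist: a 0, b 3, c 5, d 4, e 5, f 7 -- the final table above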



Adapting Dijkstra's Algorithm

Scenario 1. Suppose we need to find the shortest path between two given vertices. In this case, both the source and the sink are specified.

The most efficient way to solve this problem is to use Dijkstra's approach and preempt it. Start the algorithm at one of the two vertices. Stop as soon as the other is added to the tree of solutions.
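
In code, this is a small change to the dijkstra() sketch above: check each node as it comes off the priority queue, because its distance is final at that moment.

    import heapq

    def shortest_path_length(graph, source, target):
        """Preempted Dijkstra: stop as soon as target is settled."""
        dist = {source: 0}
        heap = [(0, source)]
        done = set()
        while heap:
            d, u = heapq.heappop(heap)
            if u == target:     # target's distance is now final
                return d
            if u in done:
                continue
            done.add(u)
            for v, w in graph[u].items():
                if v not in done and d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return float('inf')     # target is unreachable from source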

Scenario 2. We are given a graph where the vertices also bear non-negative weights. The cost of a path includes the cost of its vertices, too.

This is a great place to do a simple problem transformation: rebuild the graph so that all of the weight lives on the edges. (One standard way to do so is sketched after the quick question below.)

Now we can apply Dijkstra's Algorithm to the new graph, and return the answer.

Quick Question: Would simply adding the vertex's weight to each path as we update its neighbors' data work?
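
One standard way to carry out the transformation (an assumption on my part, since there is more than one) is to split each vertex in two and let the connecting edge carry the vertex's weight:

    def fold_vertex_weights(graph, vweight):
        """Split each vertex v into (v, 'in') and (v, 'out'), joined
        by an edge of weight vweight[v].  Each original edge u--v
        becomes directed edges (u,'out') -> (v,'in') and
        (v,'out') -> (u,'in').  A shortest path from (s,'in') to
        (t,'out') in the new graph pays for every vertex and edge on
        the original path exactly once."""
        g = {}
        for v in graph:
            g[(v, 'in')] = {(v, 'out'): vweight[v]}
            g[(v, 'out')] = {}
        for u, edges in graph.items():
            for v, w in edges.items():
                g[(u, 'out')][(v, 'in')] = w
        return g

The dijkstra() sketch above runs on the transformed graph unchanged, since it never assumes the edge relation is symmetric.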

Scenario 3. We need to find a spanning tree of a graph.

No problem! Apply Dijkstra's Algorithm to the graph, starting at any vertex. Its output is a spanning tree for the graph.

Alas, it is not necessarily a minimum spanning tree. (Why not?) Instead use Prim's Algorithm or Kruskal's Algorithm. Both are greedy and optimal.
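
For contrast, here is a minimal sketch of Prim's Algorithm in the same style as the dijkstra() sketch above. The shape is nearly identical; the difference is that the priority is the weight of the single edge crossing into the tree, not the total distance back to a source.

    import heapq

    def prim(graph, start):
        """Minimum spanning tree of a connected graph, as a list of
        edges.  Greedy: always add the lightest edge that crosses
        from the tree to a new vertex."""
        tree = []
        done = {start}
        heap = [(w, start, v) for v, w in graph[start].items()]
        heapq.heapify(heap)
        while heap:
            w, u, v = heapq.heappop(heap)
            if v in done:       # edge leads back into the tree; skip it
                continue
            done.add(v)
            tree.append((u, v))
            for x, wx in graph[v].items():
                if x not in done:
                    heapq.heappush(heap, (wx, v, x))
        return tree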



Fun Applications

... how do they compute the shortest path from Kevin Bacon to an arbitrary actor? See the Oracle of Bacon.

In math and computing, we have our own versions of the Bacon number. Check out the Erdös Number Project and the Ward Number.

... shortest paths in scale-free networks. The power law. A source of some great undergraduate research projects.



Wrap Up



Eugene Wallingford ..... wallingf@cs.uni.edu ..... April 28, 2014