You are given a long roadway on which there are n parking stations, where n may be quite large. The distance between adjacent stations is always 1. At m ≤ n distinct stations, we place one wagon each. We must move all m wagons to a single station, k. We would like to minimize the moving cost.
For example, if we have twenty stations with wagons at Stations 7 and 17:
01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20
                  W                             W
Then an optimal value for k is 12, with a total moving distance of 10. In this case, any k between 7 and 17 will work as well. When m > 2, we rarely have as many options.
Design an algorithm for identifying a station k that requires the minimum total moving distance for all m wagons. Your algorithm should take as input n, the length of the road; m, the number of wagons; and a list of m station numbers.
Be sure to support your claim with evidence. You might offer some examples, several examples of different types, an informal argument, or -- gasp! -- a proof.
Here are a few examples to start you off.
20 2    2 10
20 4    2 5 10 14
20 6    2 4 6 12 15 17
What did you discover?
A common intuition. This is similar to the concept of "center of gravity": compute the average of the wagons' station numbers and move all the wagons there.
This works for many examples, including the ones above. But does it work for all cases? How will we know?
We could try to construct a counterexample. Consider instead:
20 5    2 5 10 14 15
We have added only one wagon. Where is the center of gravity now? 9.2. Is that the ideal station? No!
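We can convince ourselves with a quick brute-force check of every station. Here is a sketch in Python; the helper name total_cost is mine, not part of the problem statement:

```python
def total_cost(stations, k):
    """Total distance to move every wagon to station k."""
    return sum(abs(s - k) for s in stations)

stations = [2, 5, 10, 14, 15]

mean = sum(stations) / len(stations)    # 9.2
best_k = min(range(1, 21), key=lambda k: total_cost(stations, k))

print(total_cost(stations, 9))          # cost at the rounded mean: 23
print(best_k, total_cost(stations, best_k))   # station 10 is better: 22
```

The rounded mean, Station 9, costs 23 moves, but Station 10 costs only 22.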
One way we might try to debug our algorithm is to choose the position of the wagon closest to the mean. Which examples does this work for? Which counterexample breaks this approach?
Well, maybe we just need to always round up. Let's try another example with a "skew":
20 4    1 2 3 20
Uh-oh. The average is 6.5. With k = 7, the total moving distance is 28. But in this case, k = 6 is better, with a moving distance of 26. Maybe we need to round down in this case...
But no. k = 5 is better still (total moves = 24). k = 4 gives a total of only 22, and k = 3 gives 20. The slide stops here, because k = 2 also gives 20.
If you want an even starker example, try
100 4    1 2 3 100
Here, the average really misleads us, and the cost of the moves slides until we reach... k = 3.
What is the common idea? We need a different sense of "average": the median, not the mean.
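In code, the median algorithm is short. A sketch in Python, assuming the stations come in as a list (the names best_station and total_cost are mine):

```python
def best_station(stations):
    """Return a station k that minimizes the total moving distance.

    Sorting and taking a middle element gives a median. When m is even,
    any station between the two middle wagons is equally good; this
    sketch returns the upper of the two.
    """
    ordered = sorted(stations)
    return ordered[len(ordered) // 2]

def total_cost(stations, k):
    """Total distance to move every wagon to station k."""
    return sum(abs(s - k) for s in stations)

print(best_station([2, 5, 10, 14, 15]))   # 10
print(total_cost([1, 2, 3, 20], best_station([1, 2, 3, 20])))   # 20
```

Sorting makes this O(m log m); a selection algorithm can find the median in O(m) if it matters.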
Why did the center of gravity work when it did? What invariant does the median algorithm take advantage of?
Don't let your intuitions lead you astray. Don't settle for the first idea you come up with. That doesn't mean that you cannot be creative... Make analogies. Operate as if they are correct. Tinker to improve. But work with lots of examples. And, most important, try to prove your theory wrong. Algorithm design works a lot like science.
Last time, we looked at an example of dynamic programming, in which we work bottom-up to solve subproblems, remember their solutions, and eventually build a solution to the problem itself.
Storing solutions to subproblems is a useful technique in other settings, too. Even a computational nightmare like the standard recursive version of Fibonacci numbers can be tamed by caching the results of the subproblems, as this cached Fibonacci function written in Scheme shows. There are better ways yet to solve Fibonacci, but for problems where we need to use a top-down approach, adding a "memory function" to a program can be an effective way to improve performance.
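The Scheme version is not reproduced here, but the same idea looks like this in Python; the dictionary plays the role of the memory function:

```python
def make_memoized_fib():
    """Return a Fibonacci function that caches results of subproblems."""
    memo = {0: 0, 1: 1}

    def fib(n):
        if n not in memo:
            # Solve each subproblem once; reuse its answer ever after.
            memo[n] = fib(n - 1) + fib(n - 2)
        return memo[n]

    return fib

fib = make_memoized_fib()
print(fib(30))   # 832040, computed with ~30 additions instead of ~10^6 calls
```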
The World Series problem allowed us to see the power of dynamic programming to compute a result iteratively from very simple rules, filling a table of partial results that leads to a complete answer.
The technique was straightforward. First, write a recurrence relation for the values in the table, then use the relation to fill the rows (or columns) of a table, one cell at a time.
Think a bit about the efficiency of the dynamic programming approach to that problem. How much time does it require? How much space? Are there situations in which one or the other may be unacceptable?
We can use dynamic programming to solve many problems more efficiently for which brute-force algorithms take exponential time. Let's consider one now, a classic of computer science.
Recall the Knapsack Problem, which you read about back in Session 11. We are given n items that we would like to pack in a container of capacity W. Each of the items has a value, vi, and a weight, wi. Our goal is to load the container with the most valuable set of items possible. That is, we'd like to select a subset of the items whose total weight is ≤ W and whose total value is greater than any other subset that fits in the container.
For example, suppose we have a knapsack with a capacity W = 8. We'd like to maximize the value in the knapsack by selecting items from this collection of n = 4 items:
    item   value   weight
      1      6       3
      2      5       2
      3     10       5
      4      8       3
A brute-force approach would examine all non-empty subsets of the n items:
    input:  n, the number of items
            W, the capacity of the container
            v[0..n-1], item values
            w[0..n-1], item weights
    output: m, a subset of the n items with the maximum value
            and total weight ≤ W

    m := empty set
    for ( s : non-empty subsets of the n items )
        if weight(s) ≤ W and value(s) > value(m)
            m := s
    return m
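A direct translation of the pseudocode into Python, using itertools to enumerate the subsets (feasible only for small n):

```python
from itertools import combinations

def brute_force_knapsack(values, weights, W):
    """Try every non-empty subset of items; keep the best that fits."""
    n = len(values)
    best_value, best_subset = 0, ()
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            weight = sum(weights[i] for i in subset)
            value = sum(values[i] for i in subset)
            if weight <= W and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# The example above: values and weights for items 1..4, capacity 8.
# Indices are 0-based, so (0, 1, 3) means items 1, 2, and 4.
print(brute_force_knapsack([6, 5, 10, 8], [3, 2, 5, 3], 8))
# → (19, (0, 1, 3))
```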
This algorithm is, of course, exponential in n. Can we do better with dynamic programming? Yes, if we are willing to make some reasonable assumptions. For today, let's assume that W and all of the wi are positive integers.
With this assumption, we can use an approach similar to the one we used for the World Series problem: Define a recurrence relation that relates subproblems, figure out how to use it to compute values iteratively, and then use that idea to design our algorithm.
To define a recurrence relation, we need to think in terms of subproblems. Consider V[i,j], the value of the maximal subset of the first i items in a container with capacity j. How can we decompose this solution into solutions to smaller problems?
A simple solution is to consider the ith item itself. Either the subset contains the ith item, or it does not.
If the subset does not contain the ith item, then V[i,j] = V[i-1,j]: the best we can do with the first i-1 items and the same capacity. If it does, then V[i,j] = vi + V[i-1,j-wi]: the item's value plus the best we can do with the first i-1 items and the capacity that remains.
This means that the optimal solution to V[i,j] is the larger of these two values! The only detail left is: what if the ith item doesn't fit in the knapsack, that is, wi > j? In that case, too, V[i,j] is the same as V[i-1,j].
We can now write the recursive part of the recurrence relation as:
    V[i,j] = j-wi ≥ 0 → max( V[i-1,j], vi + V[i-1,j-wi] )
             j-wi < 0 → V[i-1,j]
What is the base case for the relation?
    V[i,0] = 0 for i ≥ 0
    V[0,j] = 0 for j ≥ 0
We can picture the two-dimensional table of subproblems based on items i and capacities j, where * is our goal, as:
                            CAPACITIES [0..W]

                 0  ...     j-wi     ...      j     ...  W
              --------------------------------------------
            0 |  0  ...      0       ...      0     ...  0
              |
    ITEMS i-1 |  0  ... V[i-1,j-wi]  ...  V[i-1,j]  ...
    [0..n]  i |  0  ...                    V[i,j]   ...
              |
            n |  0  ...                                  *
              --------------------------------------------
As we did for the World Series problem, we can use our recurrence to fill in the cells of the table moving forward from [0,0], row by row, column by column. For each V[i,j], we consult the values of V[i-1,j] and V[i-1,j-wi] in order to compute the value to put in the cell.
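Filling the table translates directly into a pair of nested loops. A Python sketch of the recurrence, with items 1-indexed to match the table:

```python
def knapsack_table(values, weights, W):
    """Build the table V[0..n][0..W] from the recurrence relation."""
    n = len(values)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # base cases: row 0, column 0
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]
        for j in range(1, W + 1):
            if j - wi >= 0:
                V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
            else:
                V[i][j] = V[i - 1][j]           # item i does not fit
    return V

V = knapsack_table([6, 5, 10, 8], [3, 2, 5, 3], 8)
print(V[4][8])   # 19
```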
... work through example from above ....
The result for our sample data above is:
                  j     0   1   2   3   4   5   6   7   8
      v   w   i       ------------------------------------
              0   |     0   0   0   0   0   0   0   0   0   |
      6   3   1   |     0   0   0   6   6   6   6   6   6   |
      5   2   2   |     0   0   5   6   6  11  11  11  11   |
     10   5   3   |     0   0   5   6   6  11  11  15  16   |
      8   3   4   |     0   0   5   8   8  13  14  15  19   |
                      ------------------------------------
The optimal subset of the four items, V[4,8], has a value of 19 units. We can determine which items make up that subset by reasoning backward from V[4,8]:

    V[4,8] = 19 ≠ V[3,8] = 16, so item 4 is in the subset. Move to V[3, 8-3] = V[3,5].
    V[3,5] = 11 = V[2,5], so item 3 is not in the subset. Move to V[2,5].
    V[2,5] = 11 ≠ V[1,5] = 6, so item 2 is in the subset. Move to V[1, 5-2] = V[1,3].
    V[1,3] = 6 ≠ V[0,3] = 0, so item 1 is in the subset.

The maximal subset consists of items 1, 2, and 4.
Notice: We fill the table forward, from smallest to largest problem. Once we have a full table, we reason backward from the desired problem to determine the elements of the optimal subset.
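The backward reasoning can be mechanized, too. This sketch builds the table and then walks from V[n][W] back toward row 0; whenever V[i][j] differs from V[i-1][j], item i must be in the subset:

```python
def optimal_subset(values, weights, W):
    """Recover the items in an optimal knapsack solution."""
    n = len(values)
    # Fill the table forward, as before.
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]
        for j in range(1, W + 1):
            V[i][j] = V[i - 1][j]
            if j >= wi:
                V[i][j] = max(V[i][j], vi + V[i - 1][j - wi])

    # Reason backward from V[n][W] to find the items used.
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:      # item i was part of the optimum
            items.append(i)
            j -= weights[i - 1]         # spend its weight
    return sorted(items)

print(optimal_subset([6, 5, 10, 8], [3, 2, 5, 3], 8))   # [1, 2, 4]
```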
How efficient is this approach? The table-building algorithm is θ(nW), and the algorithm for determining the maximal subset is O(n+W).
Quick exercise. Why is the second algorithm in O, not θ?
Quick exercise. Might we annotate the values in the table so that they tell us the maximal subset that gives that value? What trade-off does this require? Would it be computationally feasible for large problems?
If you are looking for a chance to practice your programming skills, implement these algorithms in your favorite language. They are straightforward, and their run-time behavior is illuminating.