## Session 13

### Let's Play a Game!

Our game today is Inversion Swap, a simple two-player game played on a list of n arbitrary integers. The players take turns swapping the two numbers of an inversion: any pair of numbers x and y where x < y but x occurs to the right of y in the list. The first player who cannot move loses the game.

For example, consider this starting position:

```
-1 3 9 4
```

This list contains only one inversion: [9 .. 4]. So Player 1 has only one legal move, resulting in this new game position:

```
-1 3 4 9
```

The second player has no available moves and so loses.

Now consider this starting position:

```
-1 3 9 2
```

Suppose that Player 1 chooses to swap 2 and 9. The new position is:

```
-1 3 2 9
```

Player 2 sees the [3 .. 2] inversion and makes the swap:

```
-1 2 3 9
```

Now it is the first player who has no available moves and loses.
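We can check positions like these mechanically. Here is a small Java sketch (the class and method names are mine, not from today's zip file) that counts the inversions in a board; a player facing a board with zero inversions has no move and loses:

```java
public class InversionSwap {
    // Count pairs (i, j) with i < j and v[i] > v[j] -- the inversions.
    static int countInversions(int[] v) {
        int count = 0;
        for (int i = 0; i < v.length; i++)
            for (int j = i + 1; j < v.length; j++)
                if (v[i] > v[j]) count++;
        return count;
    }

    // The player to move loses exactly when no inversion remains.
    static boolean currentPlayerLoses(int[] v) {
        return countInversions(v) == 0;
    }
}
```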

Your first exercise is to play this game twice against one of your classmates. Take turns going first.

Try to find a strategy that works for you, and record it. Don't share it with your opponents!

... lots of fun ensues ...

Now, play two games against another person, but this time, rather than alternating who goes first, take turns choosing whether to go first or second.

... more fun ensues ...

You should have some strategy for playing by now, though it may not work all the time or you may not implement it correctly every time. Now play two more games against the same opponent, again taking turns choosing to go first or not.

... even more fun ensues ...

Today's .zip file contains a simple Java program to generate game boards and a file containing the boards we used today, plus a few more!

### Debriefing Inversion Swap

What is your strategy? How well did this strategy work in your games? Is it simple to use? How did going from lists of size 6 to lists of size 15 affect its ease of use?

You may have noticed that some moves change the number of inversions in the list by exactly 1. Other moves change the number of inversions by a larger number, thus limiting your opponent's moves more drastically.

We can recognize an invariant in the game. If the numbers in the list are unique, then every move changes the number of inversions in the list by an odd number! This means that each move changes the 'parity' of the list, from even to odd or from odd to even.

In these cases, the first player always wins if the board contains an odd number of inversions -- even if he makes a "suboptimal" move that cuts the number of inversions by only 1 when a 3-change move exists.

Consider this opening position:

```
1 9 3 4 2
```

This list contains five inversions: 9..3, 9..4, 9..2, 3..2, and 4..2. The first player can swap 9..2 and win immediately! If he makes the 4..2 move, though, he leaves 4 inversions for his opponent. Any move by the second player takes the list to an odd number of inversions, leaving the first player still in the driver's seat.
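We can also test the parity invariant empirically. This Java sketch (again, the names are my own) swaps the two numbers of an inversion and reports how the inversion count changed; for lists of unique values, the change is always odd:

```java
public class ParityCheck {
    // Count pairs (i, j) with i < j and v[i] > v[j] -- the inversions.
    static int countInversions(int[] v) {
        int count = 0;
        for (int i = 0; i < v.length; i++)
            for (int j = i + 1; j < v.length; j++)
                if (v[i] > v[j]) count++;
        return count;
    }

    // Swap positions i and j, assumed to form an inversion (i < j,
    // v[i] > v[j]), and return the change in the inversion count.
    static int changeAfterSwap(int[] v, int i, int j) {
        int before = countInversions(v);
        int tmp = v[i]; v[i] = v[j]; v[j] = tmp;
        return countInversions(v) - before;
    }
}
```

For the opening position above, swapping 9 and 2 removes five inversions at once: an odd change, as the invariant predicts.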

This invariant holds only if all values are unique. Consider line 5 in the file of boards we used today. Notice the consecutive 65s followed by the 56... Swapping 56 with the first 65 maintains the parity of inversions by eliminating exactly 2!

As you have probably realized by now, inversions are central to sorting. Sort algorithms have to do two kinds of basic operation: comparisons and swaps. The comparisons identify inversions. Naive approaches for finding all inversions can be quite expensive, O(n²). The swaps eliminate one or more inversions. More efficient algorithms make better and thus fewer swaps.

Quick Exercise: Can you design an algorithm to find the maximum inversion (in terms of position, not of values)?

### Sorting by Divide and Conquer

The typical divide-and-conquer algorithm:

• divides the problem into 2 or more subproblems of the same form and roughly the same size,

• conquers the sub-problems either by applying the same process to them or by solving small instances directly, and then

• combines the solutions to the subproblems to create a solution to the original.

Often, the 'divide' step is trivial, or nearly so. Combining the answers to the sub-problems often requires more work.

The prototypical divide-and-conquer sorting algorithm is mergesort. The divide step is trivial: split the array v[1..n] into v[1..n/2] and v[n/2+1..n]. Each subarray is solved in the same way. The combine step is to merge the now-sorted halves of the array.
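Here is a minimal Java sketch of mergesort, using 0-based arrays rather than the v[1..n] notation above:

```java
import java.util.Arrays;

public class MergeSort {
    static void sort(int[] v) {
        if (v.length < 2) return;
        // Divide: split the array in half (the trivial step).
        int mid = v.length / 2;
        int[] left = Arrays.copyOfRange(v, 0, mid);
        int[] right = Arrays.copyOfRange(v, mid, v.length);
        // Conquer: sort each half in the same way.
        sort(left);
        sort(right);
        // Combine: merge the now-sorted halves back into v.
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            v[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length)  v[k++] = left[i++];
        while (j < right.length) v[k++] = right[j++];
    }
}
```

Note where the work lives: the split is one line, while the merge loop does all the comparing.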

Less commonly, the 'divide' step requires some real work. A great example of this approach is quicksort. Here, the divide step requires us to partition the array such that all values in v[1..k-1] are less than or equal to v[k], and all values in v[k+1..n] are greater than v[k]. This is a non-trivial process that requires many comparisons and swaps.

The trade-off is that solving the subproblems is trivial, and we have no need to combine the results. This is because the partition works "in place", which ensures that the solved subproblems are already in correct relation to one another. This means that the combination cost happens before making the recursive calls!
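Here is a matching Java sketch of quicksort. The partition below is the Lomuto scheme with the last element as pivot -- one common choice, not the only one:

```java
public class QuickSort {
    static void sort(int[] v, int lo, int hi) {
        if (lo >= hi) return;
        // Divide: partition does the real work, in place.
        int k = partition(v, lo, hi);
        // Conquer: sort each side. The sides are already in the
        // right relation to each other, so there is no combine step.
        sort(v, lo, k - 1);
        sort(v, k + 1, hi);
    }

    // Rearrange v[lo..hi] so that v[lo..k-1] <= v[k] < v[k+1..hi],
    // and return the pivot's final position k.
    static int partition(int[] v, int lo, int hi) {
        int pivot = v[hi];
        int k = lo;
        for (int i = lo; i < hi; i++)
            if (v[i] <= pivot) { swap(v, i, k); k++; }
        swap(v, k, hi);
        return k;
    }

    static void swap(int[] v, int i, int j) {
        int tmp = v[i]; v[i] = v[j]; v[j] = tmp;
    }
}
```

To sort a whole array, call sort(v, 0, v.length - 1).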

### Analyzing the Divide-and-Conquer Sorts

How can we determine the complexity of these algorithms? We can use the same tools we studied earlier to do the job: summations for iterative algorithms, and recurrence relations for recursive ones. But the nature of divide-and-conquer algorithms gives us a few short-cuts.

If we break a problem of size n into a problems of size n/b, then the cost of divide-and-conquer is:

```
T(n) = aT(n/b) + f(n)
```

where a ≥ 1, b > 1, and f(n) is a function that computes the cost of dividing the problem and combining the partial solutions.

This is a general recurrence relation for all divide-and-conquer algorithms. We can do a little complexity-class arithmetic to draw some general conclusions about the algorithm's complexity quickly if we don't need exact counts or coefficients:

```
If f(n) ∈ θ(n^d) for some d ≥ 0, then

    if a < b^d, then T(n) ∈ θ(n^d)
    if a = b^d, then T(n) ∈ θ(n^d log n)
    if a > b^d, then T(n) ∈ θ(n^(log_b a))
```

For example, suppose that we have a binary tree of values and that we would like to count the nodes in the tree, or to sum their values. The recurrence relation for the straightforward divide-and-conquer algorithm is

```
A(n) = 2A(n/2) + 1
```

f(n) = 1 ∈ θ(n^0). So, a = 2, b = 2, and d = 0. Because 2 > 2^0 = 1,

```
A(n) ∈ θ(n^(log_b a))
     = θ(n^(log_2 2))
     = θ(n)
```
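The node-counting algorithm behind that recurrence is tiny. A Java sketch, with a minimal node class of my own invention:

```java
public class TreeCount {
    static class Node {
        int value;
        Node left, right;
        Node(int value, Node left, Node right) {
            this.value = value; this.left = left; this.right = right;
        }
    }

    // Divide into the two subtrees, conquer each recursively, and
    // combine with a single addition: A(n) = 2A(n/2) + 1.
    static int count(Node t) {
        if (t == null) return 0;
        return 1 + count(t.left) + count(t.right);
    }
}
```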

Let's take a quick look at how mergesort and quicksort fare under such analysis.

• Mergesort. Even in the worst case, the merge step is θ(n), so d = 1. This means mergesort is θ(n log n).

• Quicksort. This is tougher, because the best, worst, and average input cases are so different for the partition step.

• The best case splits each array in half, so f(n) = n. This makes quicksort θ(n log n) exactly.

• In the worst case, the input is already sorted, so the partition splits each array into portions of size 1 and size n-1. The partition step still costs θ(n), but the recurrence becomes T(n) = T(n-1) + θ(n), which makes quicksort θ(n²).

• If we assume that every split is equally likely, then the average case still does θ(n) work in each partition, but the expected recursion depth remains logarithmic. The result is that quicksort is θ(n log n), too. (With a constant of approximately 1.38!)

So: mergesort is fast almost all the time. Unfortunately, because it doesn't work in place, it requires O(n) space. That's a major drawback for large datasets stored in indexable collections. On the other hand, it is extremely easy to implement mergesort and get very good time performance.

And: quicksort is really fast in the best case, fast on average, but horrible in the worst case. It doesn't require any extra space, though, and so is often the choice for working with arrays.

Quick Exercise: I say that mergesort "doesn't work in place", but it could. Sketch an algorithm that does so, and see what you think.

Quick Exercise: How do mergesort and quicksort fare when the data is in a linked list rather than an indexable collection such as an array?

Mergesort now requires no extra space, and quicksort can't be done in place. This is one reason that mergesort is so popular for sorting files and other data on disk.

### Elections, Part 3

It turns out that there are often many ways to divide a problem into subtasks. Our two sorts today show us two different ways. Mergesort divides by the positions of the elements in the list. This is straightforward and easy to implement. Quicksort divides by the values of the elements, which requires some extra work. The pay-off comes in the combination step.

Recall the Elections puzzle from last time. We saw two brute-force algorithms that work fine on small data sets, and then saw a divide-and-conquer approach with much better performance.

In that original top-down effort, we divided the list of candidates by their positions in the list, creating groups of candidates. This approach enabled us to make one pass through the inputs using sqrt(n) range counters, and then perhaps a second pass through the inputs using sqrt(n) candidate counters.

Is there some other way to divide the candidates into groups? If we borrow the partitioning idea from quicksort, we might consider how to divide our list of candidates based on their values rather than their positions. How so?

The answer we hope to generate, if it exists, is the number of the winning candidate. We could decompose each candidate number into its separate bits.

For instance, Candidate 6 can be represented as "110", and Candidate 2 as "010". The groups of candidates can be defined by the bits they share. A vote for Candidate 6 would contribute to the groups for bits 2 and 1, while a vote for Candidate 2 would contribute only to the group for bit 1.

The invariant is similar to before. If a majority candidate exists, then it will cause each bit in the tally of bit counters to have a majority value, too! The majority will be for 0 if that bit is 0 in the candidate's representation, or 1 if the corresponding bit in the majority candidate is 1. We can still have a "false positive" if several candidates contribute to majorities for the bits.

So:

1. Create an array of bit counters, one for each bit position in the largest candidate number. Initialize each element in the array to 0.

2. Make one pass through the list of votes. For each vote, find its bit at each position. For each bit, if the bit is 1, then increment the corresponding bit counter.

3. When done, if every bit counter is different from n/2, then construct the potential "majority candidate" -- the number whose bit is 1 wherever the corresponding counter exceeds n/2 -- and do Step 4. Otherwise, return 0.

4. Make a second pass through the input, counting the votes for the potential majority candidate. If the candidate ever reaches a majority, then output the candidate. Otherwise, return 0.
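The four steps above can be sketched in Java as follows. The method and variable names are mine; I assume the votes arrive as an array of positive candidate numbers, with 0 signaling "no majority" as in Steps 3 and 4:

```java
public class MajorityByBits {
    // Return the majority candidate number, or 0 if none exists.
    static int majority(int[] votes) {
        // Step 1: one counter per bit position in the largest candidate.
        int max = 0;
        for (int v : votes) max = Math.max(max, v);
        int bits = 32 - Integer.numberOfLeadingZeros(max);
        int[] counter = new int[bits];

        // Step 2: one pass to tally the 1 bits of every vote.
        for (int v : votes)
            for (int b = 0; b < bits; b++)
                if ((v >> b & 1) == 1) counter[b]++;

        // Step 3: construct the potential majority candidate from the
        // bits that have a majority of 1s. (Verification in Step 4
        // catches false positives, including ties at exactly n/2.)
        int candidate = 0;
        for (int b = 0; b < bits; b++)
            if (counter[b] * 2 > votes.length) candidate |= 1 << b;

        // Step 4: a second pass with a single counter to verify.
        int count = 0;
        for (int v : votes)
            if (v == candidate) count++;
        return (count * 2 > votes.length) ? candidate : 0;
    }
}
```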

This algorithm requires only log2(longest candidate number) counters on the first pass and 1 counter on the second! That is a big improvement over the sqrt(n) counters needed on each pass of our previous algorithm.

This is advanced algorithms alchemy. Don't expect that you will be able to do it at home for a while. But it is within your grasp! The more different ideas you know about algorithms, the more able you will be to find clever, efficient, correct algorithms.

### Wrap Up

• Reading -- Here are a few items that might help you understand divide-and-conquer algorithms and their use in sorting.

• Homework -- Homework 3 is available now and due in one week.

Eugene Wallingford ..... wallingf@cs.uni.edu ..... February 27, 2014