Session 14

Opening Exercise: Mystery Algorithm

Consider this algorithm:

```
ALGORITHM: mystery(n)
INPUT    : integer n
GLOBAL   : array A[1..n]

if n = 1
    write A
else
    for i ← 1 to n do
        mystery(n-1)
        if n is odd
            swap A[1] and A[n]
        else
            swap A[i] and A[n]
```

For now, assume that A = [1 2 3 ... n].

• Trace the algorithm for n = 2 and n = 3.
• What is the time efficiency of mystery?
• What does mystery compute?

Hint: create an "instance" of mystery for n = 2, 3, ..., as needed, with no if statements. Trace calls to mystery(n-1) for each value of i.

Mystery = HeapPermute

This algorithm was invented by a guy named Heap (B. R. Heap), unlike HeapSort, which was invented by a guy named Williams (J. W. J. Williams). :-)

For n = 2:

```
12
21
```

For n = 3:

```
123
213
312
132
231
321
```
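These traces can be reproduced with a direct Python transcription of the pseudocode (a sketch: the global array becomes an explicit parameter, `out` collects snapshots instead of writing them, and the name `mystery` is kept from above):

```python
def mystery(A, n, out):
    """Direct transcription of the pseudocode; A plays the role of the global array."""
    if n == 1:
        out.append(A[:])                 # "write A" -- record a snapshot
    else:
        for i in range(1, n + 1):        # for i <- 1 to n
            mystery(A, n - 1, out)
            if n % 2 == 1:               # n is odd: swap A[1] and A[n]
                A[0], A[n - 1] = A[n - 1], A[0]
            else:                        # n is even: swap A[i] and A[n]
                A[i - 1], A[n - 1] = A[n - 1], A[i - 1]

out = []
mystery([1, 2, 3], 3, out)
print(out)    # the six permutations, in the order traced above
```

Running it for n = 4 is a quick way to convince yourself that all n! permutations appear exactly once.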

It computes the n! permutations of A, systematically.

• Like the Johnson-Trotter algorithm we learned last time, it uses decrease-by-one to generate the permutations one at a time without stacking solutions to the sub-problems.

• Unlike Johnson-Trotter, it does stack recursive calls.

• Unlike Johnson-Trotter, it does not generate its output in minimal-change order.

The swaps are the basic operation. The number of swaps done for an n-element array, S(n), is quite high:

```
S(1) = 0

        n
S(n) =  Σ  ( S(n-1) + 1 )  =  n S(n-1) + n        for n > 1
       i=1
```

We can solve this by our old backward substitution approach:

```
S(n) = n S(n-1) + n
     = n( (n-1) S(n-2) + (n-1) ) + n          = n(n-1) S(n-2) + n(n-1) + n
     = n(n-1)( (n-2) S(n-3) + (n-2) ) + ...   = n(n-1)(n-2) S(n-3) + n(n-1)(n-2) + n(n-1) + n
       ...
                                        i-1
     = n(n-1)...(n-i+1) S(n-i)  +        Σ  n(n-1)...(n-k)
                                        k=0
       ...
                                        n-2
     = n! S(1)  +                        Σ  n(n-1)...(n-k)
                                        k=0
```

Now, S(1) = 0, so that left term goes to zero (whew!), but that still leaves the sum. Each term n(n-1)...(n-k) equals n!/(n-k-1)!, so the sum is n!·(1/1! + 1/2! + ... + 1/(n-1)!). This function is θ(n!). Because the series 1/1! + 1/2! + ... converges to e - 1, the value is approximately n!(e - 1) - 1.
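As a sanity check, we can count swaps empirically and compare against the recurrence and the approximation (a sketch; `count_swaps` and `S` are our own helper names, and `count_swaps` is the mystery algorithm instrumented to count instead of write):

```python
import math

def count_swaps(A, n, counter):
    """HeapPermute, instrumented: count swaps instead of writing A."""
    if n > 1:
        for i in range(1, n + 1):
            count_swaps(A, n - 1, counter)
            if n % 2 == 1:
                A[0], A[n - 1] = A[n - 1], A[0]
            else:
                A[i - 1], A[n - 1] = A[n - 1], A[i - 1]
            counter[0] += 1          # one swap per loop iteration

def S(n):
    """The recurrence: S(n) = n*S(n-1) + n, with S(1) = 0."""
    return 0 if n == 1 else n * S(n - 1) + n

for n in range(2, 8):
    counter = [0]
    count_swaps(list(range(1, n + 1)), n, counter)
    approx = math.factorial(n) * (math.e - 1) - 1
    print(n, counter[0], S(n), round(approx, 2))
```

The measured count, the recurrence, and n!(e - 1) - 1 agree closely even for small n.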

Review: Decrease and Conquer

Last week, we considered decrease-and-conquer algorithms, which, like divide-and-conquer algorithms, solve a smaller problem of the same sort in order to generate an answer for the original problem. In decrease-and-conquer, we usually create only one smaller problem to solve, by carving off one or two elements or some small fraction of the input. Decrease-by-one algorithms solve a problem of size n by first solving a problem of size n-1. We first studied insertion sort, a decrease-by-one sorting algorithm, and then a couple of decrease-by-one algorithms for generating permutations. Permutations are an example of a combinatorial object: a structure that enumerates the arrangements or selections possible from some set of elements. We need to generate permutations in order to solve the assignment problem. Today we'll consider subsets, a combinatorial object required for tasks like the knapsack problem, and then some variations of decrease-and-conquer in which each step decreases n by a variable amount.

Refresher Exercise: The Johnson-Trotter Algorithm

Recall the Johnson-Trotter algorithm for generating minimal-change permutations without explicitly solving the subproblems:

```
ALGORITHM: johnson-trotter(n)
INPUT    : integer n

initialize A[1..n] ← 1..n with all arrows pointing left
while there exists a mobile element
    k ← the largest mobile integer
    swap k and the element it points to
    reverse the direction of all elements larger than k
```
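Here is one way to render the algorithm in Python, useful for checking your trace (a sketch; we represent each arrow as a direction value, -1 for left and +1 for right, paired with the element):

```python
def johnson_trotter(n):
    """Generate the permutations of 1..n in minimal-change order."""
    LEFT = -1
    perm = [[v, LEFT] for v in range(1, n + 1)]   # [value, arrow] pairs
    result = [[v for v, _ in perm]]

    def largest_mobile():
        # an element is mobile if its arrow points at a smaller neighbor;
        # return the index of the largest such element, or None
        best = None
        for i, (v, d) in enumerate(perm):
            j = i + d
            if 0 <= j < n and perm[j][0] < v and (best is None or v > perm[best][0]):
                best = i
        return best

    while (i := largest_mobile()) is not None:
        k, d = perm[i]
        j = i + d
        perm[i], perm[j] = perm[j], perm[i]        # swap k toward its arrow
        for p in perm:                             # reverse arrows of larger elements
            if p[0] > k:
                p[1] = -p[1]
        result.append([v for v, _ in perm])
    return result

print(johnson_trotter(3))
```

For n = 3 this produces 123, 132, 312, 321, 231, 213: each permutation differs from its predecessor by a single swap of adjacent elements.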

Trace the algorithm for n = 3 (or 4).

Generating Subsets

As with permutations, there is a simple top-down decomposition:

... which gives a straightforward algorithm:

```
ALGORITHM: subsets(n)
INPUT    : integer n

if n = 0
    return { {} }
partial ← subsets(n-1)
return partial ∪ { SUB ∪ {n} | SUB ∈ partial }
```
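The decomposition translates directly into Python (a sketch; n = 0 serves as the base case, returning a list containing only the empty set):

```python
def subsets(n):
    """All subsets of {1, ..., n}, built by decrease-by-one."""
    if n == 0:
        return [set()]                            # only the empty set
    partial = subsets(n - 1)                      # subsets of {1, ..., n-1}
    return partial + [s | {n} for s in partial]   # each one, without and with n

print(subsets(2))    # [set(), {1}, {2}, {1, 2}]
```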

Quick Exercise: What is this algorithm's efficiency?

Because subsets(n) has 2^n members, we really can't avoid time efficiency of θ(2^n). But this algorithm also uses O(2^n) space, as it computes all sets explicitly and builds the final solution in memory.

As with permutations, we can do better in regard to space. We can use a variation of decrease-by-one: generate one member of the result, then generate the remaining members implicitly, that is, without storing all the sub-results anywhere. This draws on the same idea of "morphing" elements one at a time. The key to this approach is to systematically cover the whole set of answers.

The trick is to recognize that any bit string of length n corresponds to one member of subsets(n). This transforms the subset generation problem into a counting problem! We seed the algorithm with the "trivial" string 0^n (n zero bits), which corresponds to the empty set:

```
value ← 0^n
forever
    print value
    value ← value + 1        (modular arithmetic -- bounded # of bits)
    if value = 0^n
        then return
```
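In Python, the counting loop is just a loop over the integers 0 to 2^n - 1, reading each number's bits as the characteristic vector of a subset (a sketch; we map the leftmost bit to element 1, matching the tables below):

```python
def subsets_by_counting(n):
    """Yield subsets of {1..n}; each integer's bits form the characteristic vector."""
    for value in range(2 ** n):
        # bit (n-1-i) of value tells whether element i+1 is in the subset
        yield {i + 1 for i in range(n) if (value >> (n - 1 - i)) & 1}

for s in subsets_by_counting(2):
    print(s)    # the empty set, {2}, {1}, {1, 2}
```

Only one counter's worth of state is live at a time, so the space cost drops from O(2^n) to O(n).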

Consider the simple examples of n=2 and n=3:

```
00    {}
01    {b}
10    {a}
11    {a, b}

000   {}
001   {c}
010   {b}
011   {b, c}
100   {a}
101   {a, c}
110   {a, b}
111   {a, b, c}
```

Notice the unnatural order of the generated series. Similar to the idea of minimal change for permutations, we usually want to see a set that involves element j only after all sets drawn from 1..j-1 have been generated. This is sometimes called squashed order.

Quick Exercise: How can we generate the subsets in squashed order?

Hint: We don't have to interpret 010 as "no 1, yes 2, no 3"...

Quick Exercise: How can we generate the subsets in minimal change order?

Hint: Can we use some toggle value, as we did in the minimal-change algorithm, or a set of toggle-like arrows, as we did in the Johnson-Trotter algorithm?

Wrap Up

• Reading -- Review Sections 4.4 and 5.2-5.3.

• Homework -- Homework 4 is due this session.

• Quiz 2 -- Quiz 2 is one week from today.

Eugene Wallingford ==== wallingf@cs.uni.edu ==== March 1, 2005