Session 15

Decrease and Conquer for Permutations


CS 3530
Design and Analysis of Algorithms


Name This Algorithm!

Consider this algorithm:

    ALGORITHM: mystery(A, n)
    INPUT    : integer n
             : array A[1..m], where m ≥ n

    if n = 1
       write A
    else
       for i ← 1 to n do
           mystery(A, n-1)
           if n is odd
              swap A[1] and A[n]
           else
              swap A[i] and A[n]

A loop and recursion? What's up with that?

Assume that A = [p i l g r i m] as you answer these questions: What does the algorithm write when n = 2? When n = 3?



Mystery Solved

For n = 2:

    p i
    i p

For n = 3:

    p i l
    i p l
    l p i
    p l i
    i l p
    l i p

It is now no mystery that mystery computes the n! permutations of A. This is Heap's algorithm for generating permutations. It was invented by a guy named Heap -- unlike HeapSort, which was invented by a guy named Williams! Then there is the heap data structure, and "the heap" in dynamic memory allocation. We could confuse ourselves.
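Here is a quick Python sketch of the same algorithm, just so we can play with it. The function name, the 0-based indexing, and the visit callback are my own choices, not part of the pseudocode above.

    # A minimal Python rendering of the mystery/HeapPermute pseudocode.
    def heap_permute(A, n, visit=print):
        if n == 1:
            visit(list(A))                      # report the current permutation
        else:
            for i in range(n):
                heap_permute(A, n - 1, visit)
                if n % 2 == 1:                  # n odd: swap first and last
                    A[0], A[n-1] = A[n-1], A[0]
                else:                           # n even: swap i-th and last
                    A[i], A[n-1] = A[n-1], A[i]

    heap_permute(list("pil"), 3)                # prints the six rows shown above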

The swap is the basic operation. The number of swaps done for an n-element array, S(n), is quite high:

    S(1) = 0

             n
    S(n) =   Σ ( S(n-1) + 1 )
            i=1

         = nS(n-1) + n             for n > 1

We can solve this relation using our old backward substitution approach:

    S(n) = n S(n-1) + n
         = n( (n-1)S(n-2) + (n-1) ) + n   =  n(n-1)S(n-2) + n(n-1) + n
         = n(n-1)( (n-2)S(n-3) + (n-2) ) + n(n-1) + n
                                          =  n(n-1)(n-2)S(n-3) + n(n-1)(n-2) + n(n-1) + n
         ...
              n!                  i     n!
         =  ------  S(n-i)  +     Σ   ------
            (n-i)!               k=1  (n-k)!
         ...                                   let i = n - 1
                                 n-1    n!
         =  n! S(1)         +     Σ   ------
                                 k=1  (n-k)!

Now, S(1) = 0, so that left term goes to zero (whew!), but that still leaves the sum of n!/(n-k)! for k = 1 to n-1. Re-indexing with j = n-k, the sum is n!·(1/1! + 1/2! + ... + 1/(n-1)!), which is Θ(n!). Its value is approximately n!(e - 1) - 1.
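We can sanity-check both the recurrence and the approximation with a few lines of Python; the helper names here are mine.

    # S(n) = n*S(n-1) + n with S(1) = 0, compared to n!(e - 1) - 1.
    from math import e, factorial

    def S(n):
        return 0 if n == 1 else n * S(n - 1) + n

    for n in range(2, 9):
        print(n, S(n), round(factorial(n) * (e - 1) - 1, 2))

For example, S(5) = 205 while 5!(e - 1) - 1 ≈ 205.19, and the ratio S(n)/n! settles near e - 1 ≈ 1.72.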

We should not be surprised. The algorithm produces n! items, and that requires at least n! steps. For problems such as this, the constants become really important, because they are the only way we can improve our approach.



More on Decrease and Conquer

HeapPermute, as this algorithm is called, is an example of a decrease-and-conquer algorithm. On each pass through the loop, it peels off the last element, permutes the rest of the array, and then swaps a new element into the last position. It's more complicated than the typical decrease-and-conquer algorithm, because it is trying to solve a really hard problem as efficiently as possible. (More on that soon.)

Recall that the decrease-and-conquer approach follows from the same motivation as divide-and-conquer: by solving a smaller problem of the same sort, we can sometimes more easily solve the original problem.

In decrease-and-conquer, we usually only create one smaller problem to solve, by carving off one or two or some small percentage of the input. We often incorporate these "peeled off" input values into the solution to the sub-problem in order to reach a solution to the original problem.

The simplest case of decrease-and-conquer is decrease-by-one. The typical decrease-by-one algorithm for a problem of size n:

  1. divides the problem into two parts: a sub-problem of size n-1 and an individual element,
  2. solves the sub-problem of size n-1, with either a recursive call or at least an iterative decreasing process, and then
  3. if necessary, adds the individual element into the sub-problem's solution.

Last time, we considered the prototypical decrease-by-one sorting algorithm, insertion sort, and found that this very simple algorithm performs better than brute-force selection sort.
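As a reminder, here is a short Python sketch of insertion sort written to mirror the decrease-by-one template above; the recursive formulation and the names are my choices.

    # Decrease-by-one: sort A[0:n-1] recursively, then insert A[n-1].
    def insertion_sort(A, n=None):
        if n is None:
            n = len(A)
        if n <= 1:
            return
        insertion_sort(A, n - 1)           # solve the sub-problem of size n-1
        key, i = A[n-1], n - 2
        while i >= 0 and A[i] > key:       # insert the peeled-off element
            A[i+1] = A[i]
            i -= 1
        A[i+1] = key

    data = [5, 2, 4, 6, 1, 3]
    insertion_sort(data)
    print(data)                            # [1, 2, 3, 4, 5, 6]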

Sorting is a difficult problem, but as we have just seen, there are more difficult ones.



Combinatorial Objects

In the last few weeks, we have considered some interesting algorithms for problems such as board coloring, the knapsack problem, the assignment problem, and several others. Many of the naive or brute-force algorithms that we proposed wanted to do exhaustive search, with the possibility of backtracking when we ran into dead ends. These algorithms required permutations of their inputs, or combinations, or all possible subsets. Such values are known collectively as combinatorial objects, and they play an essential role in many important problems.

For such problems, we need to know how to generate the required combinatorial objects exhaustively and in a systematic fashion. Of course, we want to do so as efficiently as possible, because our algorithm needs to act on each of the objects, and there are many of them. Decrease-and-conquer approaches work well for these goals, especially decrease-by-one.

You will notice that decrease-by-one does not make these problems inexpensive; it simply makes them doable in a systematic and understandable way. Even so, some of these algorithms can be really inefficient, so we are always looking for ways to squeeze better performance out of our ideas -- and out of our data representations.



Generating Permutations

Almost all work on generating permutations assumes that we are permuting the integers 1..n. When we are not, we can use the integers as indices into a sequence of non-integer values.
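For example, a permutation of 1..3 can select letters from a word. This little snippet is only an illustration, with values of my own choosing.

    # Use a permutation of 1..n as indices into another sequence.
    word = "pil"                          # the items we actually want to permute
    perm = [3, 1, 2]                      # a permutation of 1..3
    print([word[i - 1] for i in perm])    # ['l', 'p', 'i']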

There is a very simple bottom-up decomposition of permute(n): insert n at every possible position in each member of permute(n-1).

                 1

         21              12

    321 231 213     312 132 123
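A short Python sketch of this bottom-up construction follows; the function name and the string representation are my choices.

    # Build permute(n) by inserting n at every position of each member
    # of permute(n-1).
    def permute(n):
        if n == 1:
            return ["1"]
        result = []
        for p in permute(n - 1):
            for pos in range(len(p) + 1):                 # every insertion point
                result.append(p[:pos] + str(n) + p[pos:])
        return result

    print(permute(3))   # ['321', '231', '213', '312', '132', '123']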

This works, but the output is less than ideal. Notice the change between 213 and 312: the positions that change are two spots apart. On larger sequences, the gap can, of course, be even larger. Algorithms that use the permutations can sometimes benefit when each permutation differs from its successor only in adjacent positions.

An algorithm that ensures each new permutation is created by exchanging only two neighboring elements is called a minimal-change algorithm. We can add a small detail to our approach that makes it satisfy the minimal-change requirement:

    INPUT: n, an integer

    p   ← permute(n-1)
    end ← left
    for each item in p
        start at end and insert n in all possible positions
        toggle end [left ←→ right]

If we do this in the example above, we would insert the 3s into 21 starting on the left, and into 12 starting on the right:

                 1

         21              12

    321 231 213     123 132 312

Notice: every change in the bottom row swaps two adjacent elements.
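Here is a Python sketch of that alternating-direction insertion, as a variation on the permute function above; again, the names are mine.

    # Minimal-change variant: alternate the direction in which n is
    # inserted into successive members of permute(n-1).
    def permute_minimal_change(n):
        if n == 1:
            return ["1"]
        result = []
        left_to_right = True
        for p in permute_minimal_change(n - 1):
            positions = range(len(p) + 1)
            if not left_to_right:
                positions = reversed(positions)
            for pos in positions:
                result.append(p[:pos] + str(n) + p[pos:])
            left_to_right = not left_to_right             # toggle the end
        return result

    print(permute_minimal_change(3))   # ['321', '231', '213', '123', '132', '312']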

Quick Exercise: Now it's your turn. Generate permute(4) using the minimal-change algorithm to extend the third row.

    321  231  213  123  132  312
    ... fill in blank ...

How efficient is the minimal-change approach? Time-wise, we can't do much better. But think about space. The algorithm has to generate and store all the permutations for n-1, n-2, ..., down to 1. That is expensive.

How about this idea, which may not look like a decrease-by-one algorithm but which is very much in the same spirit?

It creates an implicit sequence of permutations. The new idea is to morph one permutation repeatedly until all possibilities have been generated. We can "seed" the process with the trivial permutation [1..n].

The Johnson-Trotter algorithm embodies this idea. It uses two new definitions:

  1. every element in the array carries a direction, an arrow pointing either left or right, and
  2. an element is mobile if its arrow points to an adjacent element that is smaller than it.

For example, consider this permutation of [1..4], with each element's direction shown below it:

     3 2 4 1
     → ← → ←

3 and 4 are mobile; 1 and 2 are not.

Here is the algorithm:

    ALGORITHM: johnson-trotter(n)
    INPUT    : integer n

    initialize A = [1 2 3 ... n]
    initialize D = [← ← ← ... ←]

    write A
    while there exists a mobile element in A
        k ← the largest mobile integer in A
        swap k and the element its arrow points to
        reverse the direction of all elements in A larger than k
        write A
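For reference, here is a direct Python sketch of this pseudocode. Directions are stored as -1 (←) and +1 (→), and all names are my own choices.

    # Johnson-Trotter: generate permutations of 1..n by repeatedly moving
    # the largest mobile element one step in its direction.
    def johnson_trotter(n, visit=print):
        A = list(range(1, n + 1))
        D = [-1] * n                        # all arrows start pointing left

        def mobile(i):
            j = i + D[i]                    # index the arrow points to
            return 0 <= j < n and A[j] < A[i]

        visit(list(A))                      # write the initial permutation
        while True:
            candidates = [i for i in range(n) if mobile(i)]
            if not candidates:
                break
            i = max(candidates, key=lambda m: A[m])   # largest mobile integer
            j = i + D[i]
            A[i], A[j] = A[j], A[i]         # swap k with its neighbor
            D[i], D[j] = D[j], D[i]         # the arrow travels with its element
            for m in range(n):              # flip arrows on elements larger than k
                if A[m] > A[j]:
                    D[m] = -D[m]
            visit(list(A))

    johnson_trotter(3)    # 123, 132, 312, 321, 231, 213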

Consider the cases of n = 2 and n = 3... [ worked in class ].

Quick Exercise: Generate permute(4). Note that, unlike our first minimal-change algorithm, this one does not start with the values of permute(3) in hand. The algorithm generates the permutations implicitly.

Quick Exercise: What is the complexity of the Johnson-Trotter algorithm, in both time and space?



HeapPermute vs Johnson-Trotter

Both HeapPermute and Johnson-Trotter compute the n! permutations of n elements systematically. How do they compare?



Notes

A = [p i l g r i m]

Billy Pilgrim is one of my favorite characters ever, from Slaughterhouse-Five, one of my favorite books ever. So it goes.



Wrap Up



Eugene Wallingford ..... wallingf@cs.uni.edu ..... March 7, 2014