## Session 11

### A Warm-Up Exercise

As we have seen, map is a higher-order function that is awfully handy for solving problems with lists. However, it doesn't work quite so easily on nested lists such as the s-lists we saw last time. So let's make our own map!

 Hello, recursive kitty.

Write a function (map-nlist f nlst), where f is a function that takes a single number argument and nlst is an n-list. N-lists are just like s-lists, but with numbers:

```
<n-list>     ::= ()
              |  (<number-exp> . <n-list>)

<number-exp> ::= <number>
              |  <n-list>
```

map-nlist returns a list with the same structure as nlst, but where each number n has been replaced with (f n). For example:

```
> (map-nlist even? '(1 4 9 16 25 36 49 64))
'(#f #t #f #t #f #t #f #t)

> (map-nlist add1 '(1 (4 (9 (16 25)) 36 49) 64))
'(2 (5 (10 (17 26)) 37 50) 65)
```

### Using Mutual Recursion to Implement map-nlist

The definition of n-list is mutually inductive, so let's use mutual recursion. The code will look quite a bit like our mutually-recursive subst function from last time.

[ We build it from scratch... ]

The result is something like this:

```
(define map-nlist
  (lambda (f nlst)
    (if (null? nlst)
        '()
        (cons (map-numexp f (first nlst))
              (map-nlist f (rest nlst))))))

(define map-numexp
  (lambda (f numexp)
    (if (number? numexp)
        (f numexp)
        (map-nlist f numexp))))
```

This is quite nice. map-nlist says exactly what it does: combine the result of mapping f over the numbers in the car with the result of mapping f over the numbers in the cdr. There is no extra detail. map-numexp applies the function, because it is the only code that ever sees a number. If it sees an n-list, it lets map-nlist do the job.

With small steps and practice, this sort of thinking can become as natural to you as writing for loops and defining functions in some other style.

### Recap

In the last two sessions, we have studied how to write recursive programs based on inductively-defined data. Our basic technique is structural recursion, which asks us to mimic the structure of the data we are processing in our function. We then learned two techniques for writing recursive programs when our basic technique needs a little help:

• interface procedures. When our structurally-recursive function requires an argument that the function specification does not provide, we move the recursion into a helper function. The specified function passes its original arguments to the helper, along with an initial value for the new argument.

• mutual recursion. When our inductive data definition includes two data structures that are defined in terms of one another, we write two functions that are defined in terms of one another.

We have also encountered the idea of program derivation. Mutual recursion creates two functions that call each other. Sometimes, the cost of the extra function calls is high enough that we would like to improve our code, while remaining as faithful as possible to the inductive data definition. Program derivation helps us eliminate the extra function calls without making a mess of our code.

Program derivation is a fancy name for a simple idea. In Racket, expressions are evaluated by repeatedly substituting values. Suppose we have a simple function:

```
(define 2n-plus-1
  (lambda (n)
    (add1 (* 2 n))))
```

Whenever we use the name 2n-plus-1, Racket evaluates it and gets the lambda expression that it names. To evaluate a call to the function, Racket does what you expect: it evaluates the arguments and substitutes them for the formal parameters in the function body. Thus we go from a function application such as:

```
(2n-plus-1 15)
```

to:

```
((lambda (n)
   (add1 (* 2 n)))
 15)
```

If we stopped here, we would still be making the function call. But we can apply the next step in the substitution model to create this expression:

```
(add1 (* 2 15))
```
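Carrying the substitution through to the end, the whole evaluation looks like this:

```scheme
(define 2n-plus-1
  (lambda (n)
    (add1 (* 2 n))))

;; The substitution model, step by step:
;;   (2n-plus-1 15)
;;   => ((lambda (n) (add1 (* 2 n))) 15)
;;   => (add1 (* 2 15))
;;   => (add1 30)
;;   => 31
(2n-plus-1 15)   ; → 31
```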

[ We work through the program derivation section from Session 10: subst. The same, but different. Inlining. ]

### Warm-Up, Part 2

Use program derivation to convert map-nlist into a single function.

```
(define map-nlist
  (lambda (f nlst)
    (if (null? nlst)
        '()
        (cons (map-numexp f (first nlst))
              (map-nlist f (rest nlst))))))

(define map-numexp
  (lambda (f numexp)
    (if (number? numexp)
        (f numexp)
        (map-nlist f numexp))))
```

.
.
.
.
.

First, we convert to:

```
(define map-nlist
  (lambda (f nlst)
    (if (null? nlst)
        '()
        (cons ((lambda (f numexp)
                 (if (number? numexp)
                     (f numexp)
                     (map-nlist f numexp)))
               f (first nlst))
              (map-nlist f (rest nlst))))))
```

... and then to:

```
(define map-nlist
  (lambda (f nlst)
    (if (null? nlst)
        '()
        (cons (if (number? (first nlst))
                  (f (first nlst))
                  (map-nlist f (first nlst)))
              (map-nlist f (rest nlst))))))
```
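As a quick sanity check, the derived one-function version gives the same answers as the original pair on our warm-up examples:

```scheme
;; the derived, single-function map-nlist from above
(define map-nlist
  (lambda (f nlst)
    (if (null? nlst)
        '()
        (cons (if (number? (first nlst))
                  (f (first nlst))
                  (map-nlist f (first nlst)))
              (map-nlist f (rest nlst))))))

(map-nlist even? '(1 4 9 16 25 36 49 64))
;; → '(#f #t #f #t #f #t #f #t)

(map-nlist add1 '(1 (4 (9 (16 25)) 36 49) 64))
;; → '(2 (5 (10 (17 26)) 37 50) 65)
```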

This eliminates the back-and-forth function calls between map-nlist and map-numexp. The primary cost is an apparent loss of readability: the resulting function is more complex than the original two. Sometimes, the trade-off is worth it, and as you become a better Racket programmer you will find the one-function solution a bit easier to grok immediately.

We will use program derivation only when we really need it, or when the resulting code is still small and easy to understand.

### Tail Recursion

 Hello, tail recursive kitty.

Take a look at this version of our old friend, factorial, which we saw in passing back in Session 2:

```
(define factorial-aps
  (lambda (n answer)
    (if (zero? n)
        answer
        (factorial-aps (- n 1) (* n answer)))))
```

You may wonder why the function is written this way. It passes a partial answer along with every recursive call and returns the partial answer when it finishes.

In a very real sense, this function is iterative. It counts down from n to 0, accumulating partial solutions along the way. Consider the sequence of calls made for n = 10:

```
(factorial-aps 10       1)
(factorial-aps  9      10)
(factorial-aps  8      90)
(factorial-aps  7     720)
(factorial-aps  6    5040)
(factorial-aps  5   30240)
(factorial-aps  4  151200)
(factorial-aps  3  604800)
(factorial-aps  2 1814400)
(factorial-aps  1 3628800)
(factorial-aps  0 3628800)
```

This function is also imperative. Its only purpose on each recursive call is to assign new values to n and the accumulator variable. In functional style, though, we pass the new values for these "variables" as arguments on a recursive call.

That sounds a lot like the for loop we would write in an imperative language. On each pass through the loop, we update our running product and decrement our counter.

At run time, factorial-aps can be just like a loop! Consider the state of the calling function at the moment it makes its recursive call. The value to be returned by the calling function is the same value that will be returned by the called function! The caller does not need to remember any pending operations or even the values of its formal parameters. There is no work left to be done.

In programming languages, the last expression a function evaluates in order to produce its value is called a tail call. We call it that because it is the "tail" of the computation.

In the case of factorial-aps, the tail call is a call to factorial-aps itself. In programming languages, we call this function tail recursive.

When a function is tail-recursive, the compiler can take advantage of the fact that the value returned by the calling function is the same as the value returned by the called function to generate more efficient code. How?

It can implement the recursive call "in place", reusing the same stack frame. First, it stores the values passed in the tail call into the same slots that hold the formal parameters of the calling function. Second, it replaces the function call with a goto statement, transferring control back to the top of the calling function.

Illustrate.
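One way to picture the optimization: in Racket we can write the "loop" explicitly with a named let, where each tail call simply rebinds the parameters and jumps back to the top. This is a sketch of the idea, not what a compiler literally emits:

```scheme
;; factorial-aps with its tail call rendered as an explicit loop.
;; Each iteration rebinds n and answer in place -- the same effect
;; as the compiler's goto with a reused stack frame.
(define factorial-loop
  (lambda (n)
    (let loop ((n n) (answer 1))
      (if (zero? n)
          answer
          (loop (- n 1) (* n answer))))))   ; "goto" the top with new values

(factorial-loop 10)   ; → 3628800
```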

By definition, a Racket implementation must do this. The Scheme language definition specifies that every Scheme implementation must optimize tail calls into equivalent gotos. Racket, a descendant of Scheme, is faithful to this handling of tail calls.

Not all languages do this. The presence of side effects and other complex forms in a language can cause exceptions to the handy return behavior we see in tail recursion. Compilers for such languages usually opt to be conservative.

For example, Java is not properly tail recursive, and making it so would complicate the virtual machine a bit. So the stewards of Java have not made this a requirement for compilers. Likewise for Python. Still, many programmers think it might be worth the effort. Some Java compilers do optimize tail recursion under certain circumstances, as does gcc. Tail recursion remains a hot topic in programming languages.

With their lack of side effects, functional programming languages are a natural place to eliminate tail recursive calls. In addition to Racket, languages such as Haskell make good use of tail recursion elimination. That leads to some interesting new design patterns as well.

In functional programming, we use recursion for all sorts of repetitive behavior. We often use tail recursion because, as we have seen, the run-time behavior of non-tail recursive functions can be so bad. In other cases, we use tail recursion because structuring our code in this way enables other design patterns that we desire.

### Accumulator Variables

The second argument to our factorial function above is called an accumulator variable. How do we create one when writing a recursive function?

Suppose we started with the standard recursive implementation of factorial:

```
(define factorial
  (lambda (n)
    (if (zero? n)
        1
        (* n (factorial (sub1 n))))))
```

What happens on each recursive call?

• factorial must wait for the result of (factorial (sub1 n)) before it can apply the * function to n and the result.

• To wait, it must remember the value of n and the pending call to *. As you may have learned in prior courses, each call to factorial requires its own stack frame to remember the state of its computation.

But to compute (factorial (sub1 n)), factorial must wait for the result of (factorial (- n 2)), which must wait for the result of (factorial (- n 3)), which must wait for the result of ... and so on. This approach makes a lot of use of the system stack: It computes all of the (factorial (- n k)) values, from n-1 down to 0, before it multiplies anything by n!
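We can watch the pending work pile up by unfolding a small call by hand. Each line below corresponds to a stack frame waiting on the call beneath it:

```scheme
;; the standard recursive factorial
(define factorial
  (lambda (n)
    (if (zero? n)
        1
        (* n (factorial (sub1 n))))))

;; (factorial 5)
;; = (* 5 (factorial 4))
;; = (* 5 (* 4 (factorial 3)))
;; = (* 5 (* 4 (* 3 (factorial 2))))
;; = (* 5 (* 4 (* 3 (* 2 (factorial 1)))))
;; = (* 5 (* 4 (* 3 (* 2 (* 1 (factorial 0))))))
;; = (* 5 (* 4 (* 3 (* 2 (* 1 1)))))
;; = 120
(factorial 5)   ; → 120
```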

This process is expensive in its use of space. It is the reason most of us learned early to be wary of recursion for fear of causing a stack overflow.

If only we could write a procedure that evaluates the (* n ...) part of the computation right away. Then we could eliminate the need to save up all those pending computations.

We can do that, by reorganizing the way we compute the answer. That's how I created factorial-aps:

```
(define factorial-aps
  (lambda (n answer)
    (if (zero? n)
        answer
        (factorial-aps (sub1 n) (* n answer)))))
```

This function evaluates the (* n ...) portion of its work first and then it passes that result as an argument on the recursive call that computes (factorial (sub1 n)). Instead of computing...

```
n * ((n-1) * ((n-2) * ... (2 * (1 * 1))))
```

from the "bottom up", as the original function does, factorial-aps computes ...

```
((((n * (n-1)) * (n-2)) * ... ) * 2) * 1
```

from the "top down". Multiplication is associative, so the answer is the same, and so we are still happy.

As we saw in a cool demo of Racket's behavior during Session 2, this function offers phenomenal performance, because it makes a vast improvement in the amount of space used by the function. That is the performance improvement we saw earlier in the session.

The formal parameter answer is known as an accumulator variable. It accumulates the intermediate results of the computation in much the same way that a local variable accumulates a running total in the loop of a procedural program.

Notice that using an accumulator variable often requires us to create an interface procedure. We have to pass the accumulator as an extra argument on each recursive call. The interface procedure passes the initial value of the accumulator on the first call. This value is usually the identity of the operation being used. With multiplication, that is 1:

```
(define factorial
  (lambda (n)
    (factorial-aps n 1)))
```
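The same pattern works for any operation that has an identity element. For example, here is a sum over a flat list of numbers, with 0 as the initial accumulator. This is a sketch, using names of my own choosing:

```scheme
;; sum-aps: accumulator-passing sum over a flat list of numbers
(define sum-aps
  (lambda (lst total)
    (if (null? lst)
        total
        (sum-aps (rest lst) (+ (first lst) total)))))

;; the interface procedure supplies the identity of +, which is 0
(define sum
  (lambda (lst)
    (sum-aps lst 0)))

(sum '(1 2 3 4 5))   ; → 15
```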

By the way, I use the suffix -aps in the name of my helper function to indicate that it is written in Accumulator Passing Style. That is the name for the style of programming in which we use accumulator variables to track our intermediate solutions.

Theoretical Digression (optional). A natural extension of this idea is to make the accumulator variable a function that can be applied to the initial value to compute the desired answer. This defers all of the actual computation until later, which can be handy in a variety of contexts, such as recognizing and handling error conditions.

When the accumulator is a function, we often refer to it as a continuation, because it is the continuation of the computation yet to be done. This may seem strange, but keep in mind that we can pass this function to any function at any time. Passing continuations around -- so-called continuation passing style -- makes it possible to implement all sorts of exotic control structures, such as exceptions, threads, backtracking, and the like. How? Because the called function gets to decide when -- and even if! -- to call the continuation.
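To make the digression concrete, here is factorial written in continuation-passing style (a sketch; the name factorial-cps is my own). The accumulator k is a function that represents the rest of the computation:

```scheme
(define factorial-cps
  (lambda (n k)
    (if (zero? n)
        (k 1)                              ; no work left: run the continuation
        (factorial-cps (sub1 n)
                       (lambda (result)    ; extend the continuation with (* n ...)
                         (k (* n result)))))))

;; pass the identity function as the initial continuation
(factorial-cps 10 (lambda (x) x))   ; → 3628800
```

Notice that factorial-cps is tail recursive: all of the pending multiplications live in the continuation rather than on the stack.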

Scheme is a minimalist language, in that it tends to provide only a necessary core of operations out of which all other operations can be built. Racket provides a lot more primitive procedures than Scheme, but still it is minimal compared to many other languages that have so many different constructs. This minimalism accounts for its lack of loops, for instance, which can be simulated recursively. Scheme provides support for accessing the "current continuation" of any computation [see the middle of the language definition of control features], because with that we can implement most of the control structures we desire!

Using an accumulator variable to implement factorial has the feel of writing a loop. That using an accumulator variable gives us this feeling is not a coincidence; as we saw above, the two are closely related. In general, accumulator-passing style resembles the imperative, sequential programming you are used to doing in Python, Ada, and Java. Here, we are just doing it through the order of function applications!

While using an accumulator variable can help us create tail-recursive functions, this is only one use of the technique. The true effect of an accumulator variable is that it gives the programmer greater control over the order of execution. Notice that we used the accumulator in our factorial function to do multiplications before function calls. When we use an accumulator variable, we control the order of execution not by writing statements in sequence and rearranging the sequence, but by making function calls and rearranging the order in which we nest arguments.

### Using an Interface Procedure to Implement positions-of

You had an opportunity to practice using interface procedures on Homework 4. positions-of required us to create a helper function that kept track of the position number of each item in the list.

Let's look at one...

Notice: positions-of-at is tail recursive!
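One possible shape for such a solution (a sketch; the exact specification on Homework 4 may differ). The helper tracks the current position with an extra argument, and an accumulator keeps it tail recursive:

```scheme
;; A possible positions-of: return the positions of every occurrence
;; of s in a flat list. positions-of-at carries the current position
;; and an accumulator of the positions found so far.
(define positions-of-at
  (lambda (s lst position acc)
    (cond ((null? lst) (reverse acc))
          ((equal? s (first lst))
           (positions-of-at s (rest lst) (add1 position) (cons position acc)))
          (else
           (positions-of-at s (rest lst) (add1 position) acc)))))

;; the interface procedure supplies the starting position and empty accumulator
(define positions-of
  (lambda (s lst)
    (positions-of-at s lst 0 '())))

(positions-of 'a '(a b a c a))   ; → '(0 2 4)
```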

### Wrap Up

• Reading -- Read Chapters 6-7 of The Little Schemer. Feel free to finish it if you find that you can't put it down!

• Homework 4 is due today. Homework 5 is available soon and due one week from today.

• As you can infer from the sessions page, Exam 2 is two weeks from today, on Tuesday, February 27. In the meantime, we will practice our recursive programming patterns, apply them to programming languages, and perhaps reconsider functional programming a bit.

Eugene Wallingford ..... wallingf@cs.uni.edu ..... February 13, 2018