We would like to build a round wooden table top with a diameter of 5 feet and a 1-foot diameter hole cut out of the middle, to allow for a pole to hold a sun umbrella. What is the area of the tabletop?
Write a Joy program that assumes two values on the stack: the radius of the inner circle, rI, on top, and the radius of the outer circle, rO, just beneath it. Both are in feet. Your program should compute the area of the ring they form, in square feet. For example:
> 2.5 0.5 your program here
18.8496
Remember these useful Joy stack operators: swap and dup. Oh, and pi is not a primitive, but you can assume it is defined.
We need to square each radius and multiply it by pi before subtracting, and then subtract _in this order_: inner area from outer area. So I swapped first, squared the outer radius and multiplied it by pi, and then swapped back so that I could do the same thing to the inner radius. To square a number, we can dup and multiply:
> 2.5 0.5 swap dup * pi * swap dup * pi * - .
18.8496
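If you want to trace the stack yourself, here is a minimal sketch of a Joy-style evaluator in Python, with just enough vocabulary to run the program above. The `run` function and its word set are invented for illustration; this is not how Joy is implemented.

```python
import math

def run(program, stack=None):
    """Evaluate a whitespace-separated program on a list-as-stack."""
    stack = list(stack or [])
    for word in program.split():
        if word == "swap":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif word == "dup":
            stack.append(stack[-1])
        elif word == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif word == "-":
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
        elif word == "pi":
            stack.append(math.pi)
        else:
            stack.append(float(word))  # literals push themselves
    return stack

result = run("2.5 0.5 swap dup * pi * swap dup * pi * -")
print(round(result[0], 4))  # 18.8496
```

Each word transforms the stack and nothing else, which is why reading a Joy program is mostly a matter of simulating the stack in your head.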
There is some duplication there... If this were a homework problem, as it sometimes has been on Homework 2, I would probably say:
You may want to write a function to compute the area of a circle and use it to compute the ring's area. So:
> DEFINE square == dup *.
> DEFINE circle-area == square pi *.
> 2.5 0.5 swap circle-area swap circle-area - .
18.8496
Joy programmers often find themselves wanting to apply an operation to the item just beneath the top of the stack. Swapping twice is okay when the operation is short, but as subprograms grow the swaps end up separated in space. This makes programs harder to read and harder to modify.
The language scratches this itch with a higher-order operator: dip. It takes a quoted program as its argument, caches whatever is on top of the stack, executes the program, and pushes the cached value back on the stack.
With dip, we can write this:
> 2.5 0.5 [circle-area] dip circle-area - .
18.8496
That is much better! I can even imagine another higher-order operator that takes two arguments: a function to apply to each item (such as circle-area) and a function to combine the results (such as -). It could call dip for us!
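The semantics of dip, and the imagined two-argument combinator, can be sketched in Python with a list as the stack. The name both_then is invented here purely for illustration; it is not a Joy primitive.

```python
import math

def dip(stack, quoted):
    top = stack.pop()    # cache whatever is on top of the stack
    quoted(stack)        # execute the quoted program underneath it
    stack.append(top)    # push the cached value back on

def circle_area(stack):
    r = stack.pop()
    stack.append(math.pi * r * r)

def subtract(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a - b)

def both_then(stack, each, combine):
    """Hypothetical combinator: apply 'each' to the top two items,
    then 'combine' the two results."""
    dip(stack, each)     # apply 'each' just beneath the top...
    each(stack)          # ...then to the top itself
    combine(stack)       # and combine the two results

stack = [2.5, 0.5]       # rO beneath, rI on top
both_then(stack, circle_area, subtract)
print(round(stack[0], 4))  # 18.8496
```

Notice how both_then calls dip for us, exactly as imagined: the caching and restoring of the top of the stack disappears into the combinator.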
Of course, if this were a homework problem, you would be asked to write a function named, say, ring-area ...
> DEFINE ring-area == [circle-area] dip circle-area - .
> 2.5 0.5 ring-area.
18.8496
... and put it in a file named homework02.joy to submit.
> "homework02.joy" include.
> 2.5 0.5 ring-area.
18.8496
> 20 10 ring-area.
942.478
> 4 3 ring-area.
21.9911
Other than the strange syntax and style (remember when Racket's syntax and style felt much stranger?), this is not too far from what you did for Homework 2. It's not hard to imagine students writing such code, and growing larger programs...
The future of programming may not look like this, but there are reasons to believe it could. Concatenative programming is about function composition, not function application. Everything on the stack -- even the number 10 -- is a function, and all functions compose to create new functions. This is programming at a higher level. It moves us up a level of abstraction from Racket-style functional programming, in the same way that functional programming moved you up a level from the imperative style you knew before.
If you want to learn more, let me know. There is a lot of room here to explore.
After Homework 11, Boom has numbers, arithmetic expressions, local variables, and do-blocks with assignment statements. That's a pretty nice little language.
In Session 27, I implemented function definitions and function calls.
So I merged them. And what's a language without if statements? So I added those, too.
(number m = 4 in (number n = 7 in (if (m == n) then ; try other conditions 100 else -100)))
With approximately one more homework assignment's worth of work, we have a language that we really can use!
> (define fun-exp '(number f = (function (n sum) (if (n == 0) then sum else (call f((n - 1) (sum + n))))) in (call f(10 0))))
> (eval-exp fun-exp)
eval-exp: undefined variable f
What's wrong? The region of f. The function value is created outside of f's region, which is only the body of the number expression, so the recursive call inside the function refers to an undefined variable. Can we instead define the function within the body of the local variable expression, where f is in scope?
> (define fun-exp '(number f = 0 in (do (f := (function (n sum) (if (n == 0) then sum else (call f((n - 1) (sum + n)))))) (call f(10 0)))))
> (eval-exp fun-exp)
55
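The trick in the Boom code above -- bind f to a placeholder, then assign the function to it -- works because the function body looks f up when it is called, not when it is created. Here is a rough Python analogue, using a module-level name; this is an illustration of the idea, not the Boom implementation.

```python
# Bind the name first, assign the recursive function later.
# By the time the body actually calls f, the assignment has happened.
f = 0                       # placeholder, like (number f = 0 in ...)

def _f_impl(n, sum):
    return sum if n == 0 else f(n - 1, sum + n)

f = _f_impl                 # the assignment, like (f := (function ...))
print(f(10, 0))  # 55
```

The same late-lookup behavior is what makes the assignment-based encoding of recursion work in Boom.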
This is why we couldn't define letrec as a syntactic abstraction back in Session 17: it is an abstraction of a let that uses an assignment statement -- mutable state. And we hadn't studied that yet.
Now, though, we can define local recursive functions as syntactic sugar in Boom!
Look at some Racket code, then run some Boom code:
> (define fun-exp '(recfun f = (function (n sum) (if (n == 0) then sum else (call f((n - 1) (sum + n))))) in (call f(10 0))))
> (preprocess fun-exp)
'(number f = 0 in (do (f := (function (n sum) (if (n == 0) then sum else (call f ((n - 1) (sum + n)))))) (call f (10 0))))
> (eval-exp fun-exp)    ; THE EVALUATOR DOESN'T KNOW
55                      ; ABOUT recfun EXPS!
How about a homework problem?
> (define hw04-example '(recfun digit-sum = (function (n) (if ((n / 10) == 0) then n else ((n % 10) + (call digit-sum((n / 10)))))) in (call digit-sum(1984))))
> (eval-exp hw04-example)
22
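As a quick cross-check of the recursion, here is the same function in Python, assuming Boom's / and % behave as integer quotient and remainder:

```python
def digit_sum(n):
    """Sum the decimal digits of n, mirroring the Boom recfun above."""
    if n // 10 == 0:
        return n
    return (n % 10) + digit_sum(n // 10)

print(digit_sum(1984))
```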
That may not look all that impressive to you, but think of this: Fifteen weeks ago, you did not know Racket or functional programming. Now, you have implemented a programming language capable of solving homework problems and more. It sounds impressive to me.
Answer three quick questions for me:
For the last one, you might answer with "I still don't understand [X]" or "I have no idea why [X] is part of the course."
Please answer seriously and honestly. This will help me improve the course.
The duality of program and data means that anyone can create a language and write an interpreter for it.
This is not a new idea. It is one of the oldest ideas in computer science. People began to write compilers and language interpreters in the mid-1950s, in assembly language. Soon after that, John McCarthy realized something that gave the idea its full power: we can write a language interpreter in the language being interpreted.
Actually, McCarthy did more: he defined the features of a new language, Lisp, in terms of the language features themselves. This is the idea of the meta-circular interpreter, consisting of two procedures: eval, which evaluates an expression in an environment, and apply, which applies a function to its arguments.
These functions evaluate a program in a mutually recursive fashion.
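That mutual recursion can be sketched in Python. This is only an illustration of the eval/apply shape for a tiny Lisp-like core, not McCarthy's definition; every name here is invented.

```python
import operator

PRIMS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def m_eval(exp, env):
    if isinstance(exp, (int, float)):
        return exp                       # numbers are self-evaluating
    if isinstance(exp, str):
        return env[exp]                  # variable lookup
    if exp[0] == 'lambda':               # (lambda (params) body)
        return ('closure', exp[1], exp[2], env)
    # application: evaluate operator and operands, then apply
    fn = m_eval(exp[0], env)
    args = [m_eval(a, env) for a in exp[1:]]
    return m_apply(fn, args)

def m_apply(fn, args):
    if callable(fn):
        return fn(*args)                 # primitive procedure
    _, params, body, env = fn            # closure: eval its body...
    return m_eval(body, {**env, **dict(zip(params, args))})  # ...in an extended env

# ((lambda (x) (* x x)) 7)
print(m_eval([['lambda', ['x'], ['*', 'x', 'x']], 7], dict(PRIMS)))  # 49
```

eval defers to apply for every function application, and apply defers right back to eval for the function's body: the whole interpreter is that round trip.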
This, too, is one of the most beautiful ideas in computing, as well as the mechanism and inspiration for modern-day interpreters and compilers.
Though McCarthy created Lisp, he did not implement the first Lisp interpreter. McCarthy developed Lisp as a theoretical exercise: an attempt to create a programming alternative to the Turing Machine, using Alonzo Church's lambda calculus. Steve Russell, one of McCarthy's graduate students, suggested that he could implement the theory in an IBM 704 machine language program. McCarthy laughed and told him, "You're confusing theory with practice...". Russell did it anyway.
(Thanks to Russell and the IBM 704, we also have the functions named car and cdr -- as well as one of the first video games ever created!)
McCarthy and Russell soon discovered that Lisp was more powerful than the language that they had planned to build as part of their theoretical exercise, and the history of computing was forever changed.
The syntax and semantics of Lisp programs are so sparse and so uniform that McCarthy's universal Lisp interpreter consisted of about one page of Lisp code. Here it is, on Page 13 of the Lisp 1.5 Programmer's Manual, published in 1962. (The permanent online home of the manual is at softwarepreservation.org.)
But this is a program. Why settle for a JPG image from a 50-year-old technical report? We are programmers, the dreamers of dreams! So I implemented this universal Lisp interpreter using Racket. It is the file universal-lisp-interp.rkt, in today's zip file.
Let's study it for a few minutes...
It is remarkable how much can be built out of so little. Alan Kay, the creator of modern object-oriented programming, often compares McCarthy's universal Lisp interpreter to Maxwell's equations in physics -- a small, simple set of equations that capture a huge amount of understanding and enable a new way of thinking. I sometimes think of the components of this program as the basic particles out of which all computation is built, akin to the atomic theory of matter. Out of these few primitives, all programs can be built.
I know this probably excites me more than you. But we are still so close to our history. John McCarthy died in October 2011, during a previous offering of this course. Steve Russell is still alive. Unlike other sciences, computer science is still young, and many of its creators are still with us. What a gift.
We see the original DNA of McCarthy's ideas and Russell's code in the tools we use today. This interpreter is at the base of Racket, Scheme, Common Lisp, Clojure, Dylan, and many other languages -- but it is also fundamentally the core of every language you use. Don't miss the opportunity to appreciate big ideas, or where computer science comes from.
But this isn't just archeology; the same ideas drive language design and implementation today. That means they also drive the programming you do today. Consider this white paper that made the rounds last year. A new syntactic abstraction in Java may be coming your way soon...
In the end, the duality of program and data, and the idea of language that bridges the gap between the two, make all programming possible. Even something as ambitious as SuperCollider, a "programming language for real time audio synthesis and algorithmic composition". It's just a language, with interpreters that process programs written in it. Created by people just like you and me.
If you want to play with Forth or Joy, or experiment with different ways to pass parameters, or invent a new language that will change the world, you can do this, too.
This is, in a very real way, a rather long answer to a common question from students: Why Racket?
Digression 1. If you'd like to read more about the history and importance of McCarthy's Lisp, check out Paul Graham's essay, The Roots of Lisp.
Digression 2. I first learned about McCarthy not from Lisp but from my first love, AI. McCarthy coined the term "Artificial Intelligence" when organizing (along with Minsky, Rochester, and Shannon) the 1956 Dartmouth conference that gave birth to the field. I studied McCarthy's work in AI using the language he had created. To me, he was a giant of AI long before I recognized that he was a giant of programming languages, too. Like many pioneers of our field, he laid the groundwork in many subdisciplines. They had no choice; they had to build their work out of ideas using only the rawest materials. McCarthy is even credited with the first public descriptions of time-sharing systems and what we now call cloud computing. (For McCarthy's 1970-era predictions about home computers and the cloud, see his The Home Information Terminal, reprinted in 2000.)
My thoughts about the most important ideas we study in this course change from semester to semester. Here is a list of five for 2018: