April 30, 2012 4:00 PM

Processing Old Languages and Thinking of New

I'm beginning to look at the compilers my students produced this semester. I teach a relatively traditional compiler course; we want students to experience as many different important ideas as possible within the time constraints of a semester. As you might expect, my students' programs read in a text file and produce a text file. These files contain a high-level program and an assembly language program, respectively.

I love seeing all the buzz floating around non-textual languages and new kinds of programming environments such as Bret Victor's reactive documents and Light Table. Languages and environments like these make my traditional compiler course seem positively archaic. I still think the traditional course adds a lot of value to students' experience. Before you can think outside of the box, you have to start with a box.

These new programming ideas really are outside the confines of how we think about programs. Jonathan Edwards reminds us how tightly related languages and tools are:

As long as we are programming in descendants of assembly language, we will continue to program in descendants of text editors.

Edwards has been exploring this pool of ideas for a few years now. I first mentioned his work in this blog back in 2004. As he has learned, the challenge we face in trying to re-think how we program is complicated by the network of ideas in which we work. It isn't just syntax or language or IDE or support tools that we have to change. To change one in a fundamental way requires changing them all.

On top of that, once researchers create something new, we will have to find a way to migrate there. That means education, for students and for the many practitioners already in the field. Here's hoping that the small steps people are taking with Tangle and Light Table can help us bridge the gap.


Posted by Eugene Wallingford | Permalink | Categories: Software Development

April 24, 2012 4:55 PM

Recursive Discussions about Recursion

The SIGCSE listserv has erupted today with its seemingly annual discussion of teaching recursion. I wrote about one of the previous discussions a couple of years ago. This year's conversation has actually included a couple of nice ideas, so it was worth following along.

Along the way, one prof commented on an approach he has seen used to introduce students to recursion, often in a data structures course. First you cover factorial, gcd, and the Fibonacci sequence. Then you cover the Towers of Hanoi and binary search. Unfortunately, such an approach is all too common. The poster's wistful analysis:

Of the five problems, only one (binary search) is a problem students might actually want to solve. Only two (Fibonacci and Hanoi) are substantially clearer in recursive than iterative form, and both of them take exponential time. In other words, recursion is a confusing way to solve problems you don't care about, extremely slowly.

Which, frankly, I think is the lesson [some CS profs] want to convey.

And this on a day when I talked with my compiler students about how a compiler can transform many recursive programs into iterative ones, and even eliminate the cost of a non-recursive function call when it is in a tail position.
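To make that idea concrete, here is a minimal sketch in Python (not the language we use in class) of the kind of rewriting a compiler can do: a tail-recursive function and the loop it can become. The function names are mine, for illustration only.

    # a tail-recursive sum of 1..n, written with an accumulator
    def sum_to(n, acc=0):
        if n == 0:
            return acc
        return sum_to(n - 1, acc + n)   # a tail call: nothing is left to do after it returns

    # the loop a compiler can produce by reusing the same frame for each "call"
    def sum_to_iter(n):
        acc = 0
        while n != 0:
            n, acc = n - 1, acc + n
        return acc

CPython itself does not perform this rewriting, but Scheme implementations are required to, and many C compilers will do it when optimizing.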

The quoted passage contains my favorite line of the week thus far: In other words, recursion is a confusing way to solve problems you don't care about, extremely slowly. If that's not the message you want to convey to your students, then please don't introduce them to recursion in this way. If that is the message you want to send to your students, then I am sad for you, but mostly for your students.

I sometimes wonder about the experiences that some computer scientists bring to the classroom. It only takes a little language processing to grok the value of recursion. And a data structures course is a perfectly good place for students to see and do a little language processing. Syntax, abstract or not, is a great computer science example of trees. And students get to learn a little more CS to boot!
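Here is the kind of small language-processing exercise I have in mind, sketched in Python with trees as nested tuples. It is a toy of my own, not an assignment from any particular course. The recursive version practically writes itself; an iterative version needs an explicit stack.

    # an expression tree: a number, or a tuple (operator, left-subtree, right-subtree)
    def evaluate(expr):
        if isinstance(expr, (int, float)):        # a leaf: just a literal value
            return expr
        op, left, right = expr                    # an interior node: operator plus two subtrees
        if op == "+":
            return evaluate(left) + evaluate(right)
        if op == "*":
            return evaluate(left) * evaluate(right)
        raise ValueError("unknown operator: " + op)

    print(evaluate(("*", ("+", 1, 2), ("+", 3, 4))))   # (1 + 2) * (3 + 4) = 21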


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 21, 2012 3:57 PM

A Conflict Between Fashion and the Unfashionable

Passage of the day, courtesy of Dave Winer:

They have started incubators in every major city on the planet. Unfortunately it hasn't been stylish to learn how to program for a number of years, so there aren't that many programmers available to hire. And it takes years to get really good at this stuff.

Hey, they just need a programmer. Or fifty.

While we teach CS students to program, we need to cultivate an entrepreneurial spirit, too. What an opportunity awaits someone with ideas and the ability to carry them out.


Posted by Eugene Wallingford | Permalink | Categories: General

April 20, 2012 3:14 PM

Better Than Everyone, University Edition

Seth Godin recently mentioned something that Clay Shirky has said about the television industry: Forty years ago, you only had to be better than two other shows. Now you have to be better than everybody.

At the same time technology makes it easier for people to put their creations in front of potential viewers, it makes it harder for established players to retain control over market share. As Godin summarized, "... with a million choices, each show earns the attention it gets in every single moment".

I've mused here periodically about how these same technological changes will ultimately affect universities. It seems that many people agree that education, even higher ed, is "ripe for disruption". Startups such as Boundless are beginning to take their shot at what seems an obvious market, the intersection of education and the beleaguered publishing industry: textbooks.

Though on-line education has been growing now for years, I haven't written anything about it. For one thing, I don't know what I really think of it yet. As much as I think out loud when I blog, I usually at least have a well-formed thought or two. When it comes to on-line education, my brain is still mostly full of mush.

Not long ago, the threat of on-line education to the way traditional universities operate did not seem imminent. That is, I think, starting to change. When the primary on-line players were non-traditional alternatives such as the University of Phoenix, it seemed easy enough to sell the benefits of the brick-and-ivy campus-based education to people. But as these schools slowly build a track record -- and an alumni base -- they will become a common enough part of the popular landscape to be an acceptable alternative for many people. And as the cost of brick-and-ivy education rises, it becomes harder and harder to sell people on its value.

Of course, we now see a burgeoning in the number of on-line offerings from established universities. Big-name schools like MIT and Harvard have made full courses, and even suites of courses, available on-line. One of my more experienced colleagues began to get antsy when this process picked up speed a few years ago. Who wouldn't prefer MIT's artificial intelligence course over ours? These courses weren't yet available for credit, which left us with hope. We offer our course as part of a coherent program of study that leads to a credential that students and employers value. But in time...

... that would change. And it has. Udacity has spun itself off from Stanford and is setting its sights on a full on-line curriculum. A recent Computer World article talks about MITx, a similar program growing out of MIT. These programs are still being created and will likely offer a different sort of credential than the universities that gave birth to them, at least at the start. Is there still hope?

Less and less. As the article reports, other established universities are now offering full CS programs on-line. The University of Illinois at Springfield started in 2006 and now has more computer science students enrolled in its on-line undergrad and M.S. programs (171 and 146, respectively) than their on-campus counterparts (121 and 129). In June, Oregon State will begin offering a CS degree program on-line.

The natural reaction of many schools is to join the rush, putting more financial and faculty resources into the creation of on-line courses and programs, because "that's where the future lies".

I think, though, that Shirky's anecdote about the TV industry serves as an important cautionary tale. The caution has two prongs.

First, you have to adapt. When a disruptive technology comes along, you have to respond. You may think that you are good enough or dominant enough to survive the wave, but you probably aren't. Giants that retain their position atop a local maximum when a new technology redefines an industry quickly change from giants to dinosaurs.

Adapting isn't easy. Clayton Christensen and his colleagues have documented how difficult it is for a company that is very good at something and delivering value in its market to change course. Even with foresight and a vision, it is difficult to overcome inertia and external forces that push a company to stay on the same track.

Second, technology lowers barriers for producers and consumers alike. It's no longer enough to be the best teaching university in your state or neighborhood. Now you have to be better than everybody. If you are a computer science department, that seems an insurmountable task. Maybe you can be better than Illinois-Springfield (and maybe not!), but how can you be better than Stanford, MIT, and Harvard?

Before joining the rush to offer programs on-line, you might want to have an idea of what it is that you will be the best at, and for whom. With degrees from Illinois-Springfield, Oregon State, Udacity, Stanford, MIT, and Harvard only a few clicks away, you will have to earn the attention -- and tuition -- you receive from every single student.

But don't dally. It's lonely as the dominant player in a market that no longer exists.


Posted by Eugene Wallingford | Permalink | Categories: General

April 15, 2012 8:10 PM

Learning From the Wheel of Reincarnation

Last week I was pointed in the direction of a forty-five-year-old CACM paper. It had an innocuous title, On the Design of Display Processors, and was outside my areas of primary interest, so I might well have passed it by. But it was co-written by Ivan Sutherland, whose work Alan Kay has praised often, and it was recommended by someone on Kay's Fundamentals of New Computing (FONC) mailing list, where a lot of neat ideas are discussed. So I printed it out for my daily exercise bike ride. I'm glad I did.

Myer and Sutherland tell the story of needing a display system for a research computer. That was a bigger deal in 1967 than it is today, so they did some research of their own....

Finally we decided to design the processor ourselves, because only in this way, we thought, could we obtain a truly complete display processor. We approached the task by starting with a simple scheme and adding commands and features that we felt would enhance the power of the machine. Gradually the processor became more complex. We were not disturbed by this because computer graphics, after all, are complex. Finally the display processor came to resemble a full-fledged computer with some special graphics features. And then a strange thing happened. We felt compelled to add to the processor a second, subsidiary processor, which, itself, began to grow in complexity.

It was then that we discovered a disturbing truth. Designing a display processor can become a never-ending cyclical process. In fact, we found the process so frustrating that we have come to call it the "wheel of reincarnation." We spent a long time trapped on that wheel before we finally broke free. In the remainder of this paper we describe our experiences. We have written it in the hope that it may speed others on toward "Nirvana."

A mantra from the paper characterizes the authors' time on the wheel: "For just a little more money...". I'll bet that sounds familiar to a lot of researchers, not to mention all of us who buy computing equipment for labs and end users.

I was really happy to read this paper. It's an experience report in which the authors share honestly the mistakes they made. But they paid attention, recognized a pattern, and learned from it. Even better, they wrote what they learned, in hopes of teaching the rest of us.

The wheel of reincarnation is not limited to display design or hardware design. It occurs any place where we encounter complexity. We try to tame it, first by specializing and then by generalizing. The design of programming languages is just as prone to this dizzying cycle.

(In software design, we have a related phenomenon, captured in Greenspun's Tenth Rule.)

In language design, we almost have to look for a fixed point at which we stabilize the pendulum between general and specialized. What we most often need as users is the ability to grow systems gracefully over time. This speaks to the value of a good macro system and good compiler design.

Reading this paper reminded me of a couple of lessons I've learned over the years:

  1. I should read and watch everything I can get my hands on from Sutherland, Robert Floyd, and other computer scientists of their generation. They were solving hard problems long ago, in the face of resource limitations few of us can imagine today.
  2. As someone tweeted recently (@fogus, I think): reading these old papers makes me think that I will never have an original idea in my life. But then they also teach me a lot and prepare me to have more and sometimes better ideas.
  3. I need to do my best to hang around with smart, curious people. It's old advice, I know, but it requires action and, in some environments, constant vigilance. Simply eavesdropping on the FONC mailing list raises the level of my brain's activity by a few degrees.

These papers also remind us of a valuable role that academics can play in the software and computing worlds, which are also heavily influenced by industry practitioners. We need to keep papers like these alive, so that the smart and curious people in our classes and in industry will read them. We never know when two ideas will crash into each other and lead to something new and better.


Posted by Eugene Wallingford | Permalink | Categories: Computing

April 11, 2012 4:06 PM

What Penn and Teller Have in Common With a Compilers Course

Early this morning (and I mean early!), Alfred Thompson posted What Do Magic and Computer Science Have in Common?, relaying that Alex Suter of Industrial Light & Magic will give the closing keynote at this summer's Computer Science and Information Technology conference. That sounds pretty cool. The title of his entry conjured up other thoughts for me, though, especially in light of something I said in class yesterday.

Recently, I used a superhero reference in a blog entry. That is how many people feel when they use a program to accomplish something meaningful -- like a superhero. I feel that way, too, sometimes. However, like many other people, I am more prone to magical imagery. To someone who has not learned to code, a program is like an incantation, capable of making the computer do something mystical.

There is a long tradition of magical imagery in computer science. The Hacker's Dictionary tells us that a wizard is someone who knows how a complex piece of software or hardware works (that is, who groks it). A wizard can do things that hackers and mere mortals cannot. The entry for "wizard" has links to other magical jargon, such as heavy wizardry, incantation, and magic itself.

Structure and Interpretation of Computer Programs

I tell my Programming Languages students that this is a course in which the "high priests" of computer science reveal their secrets, that after the course students will understand the magic embodied in the interpreters and compilers that process their programs. I should probably refer to wizards, rather than high priests, given that so many of the course's ideas are covered masterfully in SICP.

Lots of CS courses reveal magic, or propose it. A course in compilers finishes the job of Programming Languages, driving program translation all the way down to hardware. Artificial Intelligence describes our best ideas for how to make a computer do human-like magic: reasoning, recognizing patterns, learning, and playing Jeopardy!.

In my compilers class yesterday, I was showing my students a technique for generating three-address code from an abstract syntax tree, based in large part on ideas found in the Dragon book (surely not a coincidence). I wrote on the board this template for a grammar rule denoting addition:

    E → E1 + E2

    E.place := makeNewTemporaryIdentifier()
    E.code  := [ E1.code ]
               [ E2.code ]
               emitCode( E.place " := " E1.place " + " E2.place )

When I finished, I stepped back, looked it over, and realized again just how un-magical that seems. Indeed, when written in black on white, it looks pretty pedestrian.
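For the curious, here is how that template might look as running code, sketched in Python over a tiny AST of nested tuples. It is an illustration of my own, not the code my students write; the helper names simply mirror the ones on the board.

    import itertools

    temp_ids = itertools.count(1)

    def make_new_temporary_identifier():
        return "t" + str(next(temp_ids))

    # generate three-address code for an expression AST: either an identifier (a string)
    # or a tuple (operator, left, right); returns (place, list of instructions)
    def generate(node):
        if isinstance(node, str):                   # a leaf: a program variable
            return node, []
        op, left, right = node
        left_place, left_code = generate(left)
        right_place, right_code = generate(right)
        place = make_new_temporary_identifier()
        code = left_code + right_code + [f"{place} := {left_place} {op} {right_place}"]
        return place, code

    place, code = generate(("+", ("+", "a", "b"), "c"))
    print("\n".join(code))       # t1 := a + b
                                 # t2 := t1 + c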

the magician duo of Penn and Teller

That made me think of another connection between magic and computer science, one that applies to practitioners and outsiders alike. Taking an AI course or a compilers course is like having Penn and Teller explain to you how they made a person disappear or a ball levitate in thin air. For some people, that kills any joy they might have in watching the act. They don't want to know. They want to be amazed. And, knowing that something is implemented -- often in a way that doesn't seem especially artful, performed with an obvious misdirection -- prevents them from being amazed.

That can happen in CS, too. My friends and I came in to our AI course wide-eyed and wanting to be amazed -- and to build amazing things. We studied search and logical resolution, Bayes' Theorem and decision tree induction. And it all looked so... pedestrian. Many of my friends lost their sense of wonder then and there. Without the magic of AI, they were just as interested in operating systems or databases. More interested, really, because AI had let them down. It all looked like parlor tricks.

But there is a second kind of person in the world. Lots of people love to watch Penn and Teller explain a trick. They want to know how it works. They want to watch again, knowing how it works, to see if they can notice the deception. If they don't, they are amazed again. If they do, though, they still feel wonder -- at the skill of the magicians, at the design of the illusion, and even at the way their mind wants to trick them at the moment of execution.

I am of this second kind, for magic and especially for computer science. Yes, I know that compilers and AI programs are, at their cores, implemented using techniques that don't always look all that impressive in the light of day. Sometimes, those techniques look pretty boring indeed.

Yet I am still amazed when a C compiler takes in my humble instructions and creates machine code that compresses a musical file or fills out my tax return. I am amazed when Watson crunches through gazillions of bytes of data in a second or two and beats two most worthy human opponents to the buzzer. I like to hear my friends and colleagues de-mystify their current projects with nitty-gritty details. I still feel wonder -- at their skill, at the cleverness of their designs, and even at the moment the program runs and makes something out of what seems like nothing.

That's one thing magic and computer science have in common.

~~~~

IMAGE 1: a photo of the cover of Structure and Interpretation of Computer Programs. Source: the book's website.

IMAGE 2: a publicity still of magicians Penn and Teller. Source: All-About-Magicians.com.


Posted by Eugene Wallingford | Permalink | Categories: Computing

April 09, 2012 2:53 PM

Should I Change My Major?

Recruiting brochures for academic departments often list the kinds of jobs that students get when they graduate. Brochures for CS departments tend to list jobs such as "computer programmer", "system administrator", "software engineer", and "systems analyst". More ambitious lists include "CS professor" and "entrepreneur". I've been promoting entrepreneurship as a path for our CS grads for a few years now.

This morning, I was browsing the tables at one of my college's preview day sessions and came across my new all-time favorite job title for graduates. If you major in philosophy at my university, it turns out that one of the possible future job opportunities awaiting you is...

Bishop or Pope

Learning to program gives you superhuman strength, but I'm not sure a CS major can give you a direct line to God. I say, "Go for it."


Posted by Eugene Wallingford | Permalink | Categories: General

April 07, 2012 7:08 PM

Teaching the Perfect Class, Again

Some days, I walk out of the classroom thinking, "Well, that didn't go the way I wanted." I'm aware of everything I did wrong, and my mind begins swirling with ideas for next time -- both the next class session and the next time I teach the course. I won't make the same mistakes again, or so I think.

Other days are just the opposite. The stars aligned, and the session seemed to have gone perfectly. My questions provoked discussion. My examples caused every head to nod in appreciation. My jokes brought laughs, not eye rolls.

Ironically, those days are harder to follow. There is a temptation to try to repeat perfection, but that rarely works. Whenever I try, my timing seems off. When my questions, examples, and jokes don't elicit the same responses as the first time, I am confused. I end up trying too hard.

Teachers aren't the only people who face this problem. In this article about the science of music meeting the mind, Yo-Yo Ma describes why there are no immutable laws for expressive performance:

"Every day I'm a slightly different person," Mr. Ma said. "The instrument, which is sensitive to weather and humidity changes, will act differently. There's nothing worse than playing a really a great concert and the next day saying, 'I'm going to do exactly the same thing.' It always falls flat."

Most of the time, it is easier to fix broken things than it is to improve on good ones. Brokenness gives us cues about what to do next. Wholeness doesn't. Trying to repeat perfection in a world that always changes usually leaves us dissatisfied.

So: Treat each performance, each class session, as a chance to create, not maintain. Use ideas that have worked in the past, of course, but use them to create something new, not to try to re-create something that no longer exists.

Fortunately for me, I have far more imperfect days than perfect ones.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 06, 2012 4:29 PM

A Reflection on Alan Turing, Representation, and Universal Machines

Douglas Hofstadter speaking at UNI

The day after Douglas Hofstadter spoke here on assertions, proofs, and Gödel's theorem, he gave a second public lecture hosted by the philosophy department. Ahead of time, we knew only that Hofstadter would reflect on Turing during his centennial. I went in expecting more on the Turing test, or perhaps a popular talk on Turing's proof that the Halting Problem is undecidable. Instead, he riffed on Chapter 17 from I Am a Strange Loop.

In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.

Turing, he said, is another peak in the landscape occupied by Tarski and Gödel, whose work he had discussed the night before. (As a computer scientist, I wanted to add to this set contemporaries such as Alonzo Church and Claude Shannon.) Hofstadter mentioned Turing's seminal paper about the Entscheidungsproblem but wanted to focus instead on the model of computation for which he is known, usually referred to by the name "Turing machine". In particular, he asked us to consider a key distinction that Turing made when talking about his model: that between dedicated and universal machines.

A dedicated machine performs one task. Human history is replete with dedicated machines, whether simple, like the wheel, or complex, such as a typewriter. We can use these tools with different ends in mind, but the basic work is fixed in their substance and structure.

The 21st-century cell phone is, in contrast, a universal machine. It can take pictures, record audio, and -- yes -- even be used as a phone. But it can also do other things for us, if we but go to the app store and download another program.

Hofstadter shared a few of his early personal experiences with programs enabling line printers to perform tasks for which they had not been specifically designed. He recalled seeing a two-dimensional graph plotted by "printing" mostly blank lines that contained a single *. Text had been turned into graphics. Taking the idea further, someone used the computer to print a large number of cards which, when given to members of the crowd at a football game, could be used to create a massive two-dimensional message visible from afar. Even further, someone used a very specific layout of the characters available on the line printer to produce a print-out that appeared from the other side of the room to be a black-and-white photograph of Raquel Welch. Text had been turned into image.

People saw each of these displays as images by virtue of our eyes and mind interpreting a specific configuration of characters in a certain way. We can take that idea down a level into the computer itself. Consider this transformation of bits:

0000 0000 0110 1011 → 0110 1011 0000 0000

A computer engineer might see this as a "left shift" of 8 bits. A computer programmer might see it as multiplying the number on the left by 256. A graphic designer might see us moving color from one pixel to another. A typesetter may see one letter being changed into another. What one sees depends on how one interprets what the data represent and what the process means.
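A quick sketch in Python (my own, just for illustration) makes the same point: one operation, several readings.

    x = 0b0000000001101011                 # the bits on the left, read as an unsigned integer: 107

    shifted = (x << 8) & 0xFFFF            # the engineer's "left shift of 8 bits"
    print(shifted == x * 256)              # True -- the programmer's "multiply by 256"
    print(x.to_bytes(2, "big"), shifted.to_bytes(2, "big"))
    # b'\x00k' and b'k\x00' -- the same pattern seen as two bytes, the letter 'k' moving from one slot to the other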

Alan Turing was the first to express clearly the idea that a machine can do them all.

"Aren't those really binary numbers?", someone asked. "Isn't that real, and everything else interpretation?" Hofstadter said that this is a tempting perspective, but we need to keep in mind that they aren't numbers at all. They are, in most computers, pulses of electricity, or the states of electronic components, that we interpret as 0s and 1s.

After we have settled on interpreting those pulses or states as 0s and 1s, we then interpret configurations of 0s and 1s to mean something else, such as decimal numbers, colors, or characters. This second level of interpretation exposes the flaw in popular claims that computers can "only" process 0s and 1s. Computers can deal with numbers, colors, or characters -- anything that can be represented in any way -- when we interpret not only what the data mean but also what the process means.

(In the course of talking about representations, he threw in a cool numeric example: given an integer N, factor it as 2^a * 3^b * 5^c * 7^d ... and use [a.b.c.d. ...] to stand for N. I see a programming assignment or two lying in wait.)
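If you would like to try that assignment yourself, here is one possible shape for it, sketched in Python; the function names and the trial-division helper are mine.

    def primes():
        # generate primes by trial division -- slow, but fine for small examples
        candidate, found = 2, []
        while True:
            if all(candidate % p != 0 for p in found):
                found.append(candidate)
                yield candidate
            candidate += 1

    def encode(n):
        # the exponents [a, b, c, ...] such that n = 2^a * 3^b * 5^c * ...
        exponents = []
        for p in primes():
            if n == 1:
                return exponents
            count = 0
            while n % p == 0:
                n //= p
                count += 1
            exponents.append(count)

    def decode(exponents):
        n = 1
        for p, e in zip(primes(), exponents):
            n *= p ** e
        return n

    print(encode(360))         # [3, 2, 1], because 360 = 2^3 * 3^2 * 5^1
    print(decode([3, 2, 1]))   # 360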

The dual ideas of representation and interpretation take us into a new dimension. The Principia Mathematica describes a set of axioms and formal rules for reasoning about numeric structures. Gödel saw that it could be viewed at a higher level, as a system in its own right -- as a structure of integers. Thus the Principia can talk about itself. It is, in a sense, universal.

This is the launching point for Turing's greatest insight. In I Am a Strange Loop, Hofstadter writes:

Inspired by Gödel's mapping of PM into itself, Alan Turing realized that the critical threshold for this kind of computational universality comes exactly at the point where a machine is flexible enough to read and correctly interpret a set of data that describes its own structure. At this crucial juncture, a machine can, in principle, explicitly watch how it does any particular task, step by step. Turing realized that a machine that has this critical level of flexibility can imitate any other machine, no matter how complex the latter is. In other words, there is nothing more flexible than a universal machine. Universality is as far as you can go!
Alan Turing

Thus was Turing the first person to recognize the idea of a universal machine, circa 1935-1936: that a Turing machine can be given, as input, data that encodes its own instructions. This is the beginning of perhaps the biggest of the Big Ideas of computer science: the duality of data and program.

We should all be glad he didn't patent this idea.

Turing didn't stop there, of course, as I wrote in my recent entry on the Turing test. He recognized that humans are remarkably capable and efficient representational machines.

Hofstadter illustrates this with the idea of "hub", a three-letter word that embodies an enormous amount of experience and knowledge, chunked in numerous ways and accreted slowly over time. The concept is assembled in our minds out of our experiences. It is a representation. Bound up in that representation is an understanding of ourselves as actors in certain kinds of interactions, such as booking a flight on an airplane.

It is this facility with representations that distinguishes us humans from dogs and other animals. They don't seem capable of seeing themselves or others as representations. Human beings, though, naturally take other people's representations into their own. This results in a range of familiarities and verisimilitude. We "absorb" some people so well that we feel we know them intimately. This is what we mean when we say that someone is "in our soul". We use the word 'soul' not in a religious sense; we are referring to our essence.

Viewed this way, we are all distributed beings. We are "out there", in other people, as well as "in here", in ourselves. We've all had dreams of the sort Hofstadter used as an example, a dream in which his deceased father appeared, seemingly as real as he ever had been while alive. I myself recently dreamt that I was running, and the experience of myself was as real as anything I feel when I'm awake. Because we are universal machines, we are able to process the representations we hold of ourselves and of others and create sensations that feel just like the ones we have when we interact in the world.

It is this sense that we are self-representation machines that gives rise to the title of his book, "I am a strange loop". In Hofstadter's view, our identity is a representation of self that we construct, like any other representation.

This idea underlies the importance of the Turing test. It takes more than "just syntax" to pass the test. Indeed, syntax is itself more than "just" syntax! We quickly recurse into the dimension of representation, of models, and a need for self-reference that makes our syntactic rules more than "just" rules.

Indeed, as self-representation machines, we are able to have a sense of our own smallness within the larger system. This can be scary, but also good. It makes life seem precious, so we feel a need to contribute to the world, to matter somehow.

Whenever I teach our AI course, I encounter students who are, for religious or philosophical reasons, deeply averse to the idea of an intelligent machine, or even of scientific explanations of who we are. When I think about identity in terms of self-representation, I can't help but feel that, at an important level, it does not matter. God or not, I am in awe of who we are and how we got to here.

So, we owe Alan Turing a great debt. Building on the work of philosophers, mathematicians, and logicians, Turing gave us the essential insight of the universal machine, on which modern computing is built. He also gave us a new vocabulary with which to think about our identity and how we understand the world. I hope you can appreciate why celebrating his centennial is worthwhile.

~~~~

IMAGE 1: a photo of Douglas Hofstadter speaking at UNI, March 7, 2012. Source: Kevin C. O'Kane.

IMAGE 2: the Alan Turing centenary celebration. Source: 2012 The Alan Turing Year.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

April 04, 2012 4:39 PM

Computational Search Answers an Important Question

Update: Well, this is embarrassing. Apparently, Mat and I were the victims of a prank by the folks at ChessBase. You'd think that, after more than twenty-five years on the internet, I would be more circumspect at this time of year. Rather than delete the post, I will leave it here for the sake of posterity. If nothing else, my students can get a chuckle from their professor getting caught red-faced.

I stand behind my discussion of solving games, my recommendation of Rybka, and my praise for My 60 Memorable Games (my favorite chess book of all time). I also still marvel at the chess mind of Bobby Fischer.

~~~~

Thanks to reader Mat Roberts for pointing me to this interview with programmer Vasik Rajlich, which describes a recent computational result of his: one of the most famous openings in chess, the King's Gambit, is a forced draw.

Games are, of course, a fertile testbed for computing research, including AI and parallel computation. Many researchers make it one of their goals to "solve" a game, that is, to show that, with best play by both players, the game has a particular outcome. Games with long histories and large communities of players naturally attract a lot of interest, and solving one of them is usually considered a valuable achievement.
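To see what "solving" means in miniature, here is a Python sketch of my own that exhaustively solves a toy game: single-pile Nim, where a player takes 1, 2, or 3 objects and whoever takes the last one wins. Real solvers add endgame databases, search enhancements, and years of engineering, none of which appears here.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def outcome(pile):
        # +1 if the player to move wins with best play, -1 if that player loses
        if pile == 0:
            return -1          # no move left: the previous player took the last object and won
        # try every legal move; if any leaves the opponent in a losing position, we win
        return max(-outcome(pile - take) for take in (1, 2, 3) if take <= pile)

    # the losing piles for the player to move are exactly the multiples of 4
    print([pile for pile in range(1, 13) if outcome(pile) == -1])   # [4, 8, 12]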

For us in CS, interest grows with the complexity of the game. Solving Connect Four was cool, but solving Othello on a full-sized board would be cooler. Almost five years ago, I blogged about what I still consider the most impressive result in this domain: the solving of checkers by Jonathan Schaeffer and his team at the University of Alberta.

the King's Gambit

The chess result is more limited. Rajlich, an International Master of chess and the programmer of the chess engine Rybka, has shown results only for games that begin 1.e4 e5 2.f4 exf4. If White plays 3.Nf3 -- the most common next move -- then Black can win with 3... d6. 3.Bc4 also loses. Only one move for White can force a draw, the uncommon 3.Be2. Keep in mind that these results all assume best play by both players from there on out. White can win, lose, or draw in all variations if either player plays a sub-optimal move.

I say "only" when describing this result because it leaves a lot of chess unsolved, all games starting with some other sequence of moves. Yet the accomplishment is still quite impressive! The King's Gambit is one of the oldest and most storied opening sequences in all of chess, and it remains popular to this day among players at every level of skill.

Besides, consider the computational resources that Rajlich had to use to solve even the King's Gambit:

... a cluster of computers, currently around 300 cores [created by Lukas Cimiotti, hooked up to] a massively parallel cluster of IBM POWER 7 Servers provided by David Slate, senior manager of IBM's Semantic Analysis and Integration department -- 2,880 cores at 4.25 GHz, 16 terabytes of RAM, very similar to the hardware used by IBM's Watson in winning the TV show "Jeopardy". The IBM servers ran a port of the latest version of Rybka, and computation was split across the two clusters, with the Cimiotti cluster distributing the search to the IBM hardware.

Oh, and this setup had to run for over four months to solve the opening. I call that impressive. If you want something less computationally intensive yet still able to beat you, me, and everybody we know at chess, you can buy Rybka, a chess engine available commercially. (An older version is available for free!)

What effect will this result have on human play? Not much, practically speaking. Our brains aren't big enough or fast enough to compute all the possible paths, so human players will continue to play the opening, create new ideas, and explore the action in real time over the board. Maybe players with the Black pieces will be more likely to play one of the known winning moves now, but results will remain uneven between White and Black. The opening leads to complicated positions.

the cover of Bobby Fischer's 'My 60 Memorable Games'

If, like some people, you worry that results such as this one somehow diminish us as human beings, take a look again at the computational resources that were required to solve this sliver of one game, the merest sliver of human life, and then consider: this is not the first time that someone has claimed the King's Gambit to be busted. In 1961, an eighteen-year-old U.S. chess champion named Bobby Fischer published an article claiming that 1.e4 e5 2.f4 exf4 3.Nf3 was a forced loss. His prescription? 3... d6. Now we know for sure. Like so many advances in AI, this one leaves me marveling at the power of the human mind.

Well, at least Bobby Fischer's mind.

~~~~

IMAGE 1: The King's Gambit. Source: Wikimedia Commons.

IMAGE 2: a photograph of the cover of my copy of My 60 Memorable Games by Bobby Fischer. Bobby analyzes a King's Gambit or two in this classic collection of games.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

April 03, 2012 4:00 PM

Intermediate Representations and Life Beyond the Compiler

In the simplest cases, a compiler can generate target code directly from the abstract syntax tree:

generating target code directly from the abstract syntax tree

In many cases, though, there are good reasons why we don't want to generate code for the target machine immediately. One is modularity. A big part of code generation for a particular target machine is machine-dependent. If we write a monolithic code generator, then we will have to reimplement the machine-independent parts every time we want to target a new machine.

Even if we stick with one back-end architecture, modularity helps us. Not all of the machine-dependent elements of code generation depend in the same way on the machine. If we write a monolithic code generator, then any small change in the target machine -- perhaps even a small upgrade in the processor -- can cause changes throughout our program. If instead we write a modular code generator, with modules that reflect particular shear layers in the generation process, à la How Buildings Learn, then we may be able to contain changes in target machine specification to an easily identified subset of modules.

So, more generally we think of code generation in two parts:

  • one or more machine-independent transformations from an abstract syntax tree to intermediate representations of the program, followed by

  • one or more machine-dependent transformations from the final intermediate representation to machine code.
generating target code from the abstract syntax tree by way of an intermediate representation

Intermediate representations between the abstract syntax tree and assembly code have other advantages, too. In particular, they enable us to optimize code in machine-independent ways, without having to manipulate a complex target language.
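As a small illustration (a Python sketch of my own, not from any particular compiler), here is a constant-folding pass over a list of three-address instructions. Because it works on the IR, it needs to know nothing about the eventual target machine.

    # a three-address instruction: (destination, operator, argument1, argument2)
    # arguments are either integer literals or the names of earlier destinations

    def fold_constants(instructions):
        # replace operations whose arguments are already-known constants with their values
        known = {}                                  # destination name -> constant value
        folded = []
        for dest, op, arg1, arg2 in instructions:
            val1 = known.get(arg1, arg1)            # propagate constants we have discovered
            val2 = known.get(arg2, arg2)
            if op in ("+", "*") and isinstance(val1, int) and isinstance(val2, int):
                result = val1 + val2 if op == "+" else val1 * val2
                known[dest] = result
                folded.append((dest, ":=", result, None))
            else:
                folded.append((dest, op, val1, val2))
        return folded

    code = [("t1", "+", 2, 3), ("t2", "*", "t1", "x"), ("t3", "+", "t1", 4)]
    for instruction in fold_constants(code):
        print(instruction)
    # ('t1', ':=', 5, None)
    # ('t2', '*', 5, 'x')
    # ('t3', ':=', 9, None)

The same pass would serve unchanged whichever machine the back end eventually targets, which is the point.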

In practice, an intermediate representation sometimes outlives the compiler for which it was created. Chris Clark described an example of this phenomenon in Build a Tree--Save a Parse:

Sometimes the intermediate language (IL) takes on a life of its own. Several systems that have multiple parsers, sophisticated optimizers, and multiple code generators have been developed and marketed commercially. Each of these systems has its own common virtual assembly language used by the various parsers and code generators. These intermediate languages all began connecting just one parser to one code generator.

P-code is an example IL that took on a life of its own. It was invented by Niklaus Wirth as the IL for the ETH Pascal compiler. Many variants of that compiler arose [Nel79], including the UCSD Pascal compiler that was used at Stanford to define an optimizer [Cho83]. Chow's compiler evolved into the MIPS compiler suite, which was the basis for one of the DEC C compilers -- acc. That compiler did not parse the same language nor use any code from the ETH compiler, but the IL survived.

Good language design usually pays off, sometimes in unexpected ways.

(If you like creating languages and writing language processors, Clark's paper is worth a read!)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns