A recent entry mentioned that one advantage of short source code for beginners is a smaller space for errors. If a student writes only three lines of code, then any error in the program is probably on one of those three lines. That's better than looking for errors in a 100-line program, at least when the programmer is learning.
This assertion may seem like an oversimplification. What if the student writes a bunch of three-line procedures that call one another? Couldn't an error arise out of the interaction of multiple procedures, and thus lie far from the point at which it is discovered? Sure, but that is usually a problem only if the student doesn't know that each three-line procedure works. If we develop the habit of testing each small piece of code well, or even reasoning formally about its behavior, then we can have confidence in the individual pieces, which focuses the search for an error on the short piece of code that calls them.
This is, of course, one of the motivations behind the agile practices of taking small steps and creating tests for each piece of code as we go along. It is also why programming in a scripting language can help novices. The language provides powerful constructs, which allow the novice programmer to say a lot in a small amount of code. We can trust the language constructs to work correctly, and so focus our search for errors in the small bit of code.
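That habit of testing each small piece can itself be sketched in a few lines. Here is a hypothetical three-line procedure and the quick checks that let us trust it; the function and its tests are invented for illustration, not drawn from any particular course:

```python
# A hypothetical three-line procedure and a few quick checks of it.
# Once these assertions pass, an error in a caller probably isn't here.

def average(numbers):
    """Return the arithmetic mean of a non-empty list of numbers."""
    total = sum(numbers)
    return total / len(numbers)

# Small, direct tests give us confidence in the piece itself.
assert average([4]) == 4
assert average([1, 2, 3]) == 2
assert average([2.0, 4.0]) == 3.0
```

With the pieces vouched for, a bug that surfaces later is almost certainly in the code that combines them.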
Even still, it's not always as simple as it sounds. I am reminded of an article on a new course proposed by Matthias Felleisen, in which he argues for the use of a limited proof language. Even when we think that a 'real' language is small enough to limit the scope of errors students can make, we are usually surprised. Felleisen comments on the TeachScheme! experience:
... we used to give students the "simple language of first-order Lisp" and the code we saw was brutal. Students come up with the worst possible solution that you can imagine, even if you take this sentence into account in your predictions.
This led the TeachScheme! team to create a sequence of language levels that expose students to increasingly richer sets of ideas and primitives, culminating in the complete language. This idea has also been applied in the Java world, via DrJava. Another benefit of using limited teaching languages is that the interpreter or compiler can provide much more specific feedback to students at each level because it, too, can take advantage of the smaller space of possible errors.
Felleisen does not limit the idea of a limited language to the programming language itself. He writes of carefully introducing students to the vocabulary we use to talk about programming:
Freshmen are extremely limited in their vocabulary and "big words" (such as 'specification' and 'implementation') seem to intimidate them. We introduce them slowly and back off often.
When I read this a couple of weeks ago, it troubled me a bit. Not because I disagree with what Felleisen says, but because it seems to conflict with something else I believe and blogged about a couple of weeks ago: speak to students in real language, and help the students grow into the language. I have had good experience with children, including my own, when talking about the world in natural language. What makes the experience of our students different?
As I write this, I am less concerned that these conflict. First, Felleisen mentions one feature of the CS1 experience that distinguishes it from my kids' experience growing up: fear. Children don't spend a lot of their time afraid of the world; they are curious and want to know more. They are knowledge sponges. CS1 students come out of a school system that tends to inculcate fear and dampen curiosity, and they tend to think computer science is a little scary -- despite wanting to major in it.
Second, when I speak to children in my usual vocabulary, I take the time to explain what words mean. Sometimes they ask, and sometimes I notice a quizzical, curious look on their faces. Elaboration of ideas and words gives us more to talk about (a good thing) and connects to other parts of their knowledge (also good). And I'm sure that I don't speak to kids using only thirteen-letter words; that's not the nature of regular life, at least in my house. In computing jargon, words of excessive length are the norm.
So I don't think there's a contradiction in these two ideas. Felleisen is reminding us to speak to students as if they are learners, which they are, and to use language carefully, not simplistically.
Even if there is a contradiction, I don't mind. It would not be the only contradiction I bear. Strange as it may sound, I try to be true to both of these ideas in my teaching. I try not to talk down to my students, instead talking to them about real problems and real solutions and cool ideas. My goal is to help students reach up to the vocabulary and ideas as they need, offering scaffolding in language and tools when they are helpful.
Yesterday, I wrote a bit about scripting languages. It seems odd to have to talk about the value of scripting languages in 2008, as Ronald Loui does in his recent IEEE Computer article, but despite their omnipresence in industry, the academic world largely continues to prefer traditional systems languages. Some of us would like to see this change. First, let's consider the case of novice programmers.
Most scripting languages lack some of the features of systems languages that are considered important for learners, such as static typing. Yet these "safer" languages also get in the way of learning, as Loui writes, by imposing "enterprise-sized correctness" on the beginner.
Early programmers must learn to be creative and inventive, and they need programming tools that support exploration rather than production.
This kind of claim has been made for years by advocates of languages such as Scheme for CS1, but those languages were always dismissed by "practical" academics as toy languages or niche languages. Those people can't dismiss scripting languages so easily. You can call Python and Perl toy languages, but they are used widely in industry for significant tasks. The new ploy of these skeptics is to speak of the "scripting language du jour" and to dismiss them as fads that will disappear while real languages (read: C) remain.
Python and Ruby do seem like the best choices among the scripting languages with the widest and deepest reach. As Loui notes, few people dislike either, and most people respect both, to some level. Both have been designed carefully enough to be learned by beginners and to support a reasonable transition as students move to the next level of the curriculum. Having used both, I prefer Ruby, not only for its OO-ness but also for how free I feel when coding in it. But I certainly respect the attraction many people have to Python, especially for its better developed graphics support.
Some faculty ask whether scripting languages scale to enterprise-level software. My first reaction is: For teaching CS1, why should we care? Really? Students don't write enterprise-level software in CS1; they learn to program. Enabling creativity and supporting exploration are more important than the speed of the interpreter. If students are motivated, they will write code -- a lot of it. Practice makes perfect, not optimized loop unrolling and type hygiene.
My second reaction is that these languages scale quite nicely to real problems in industry. That is why they have been adopted so widely. If you need to process a large web access log, you really don't want to use Java, C, or Ada. You want Perl, Python, or Ruby. This level of scale gives us access to real problems in CS1, and for these tasks scripting languages do more than well enough. Add to that their simplicity and the ability to do a lot with a little code, and student learning is enhanced.
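The log-processing task is easy to make concrete. A minimal sketch in Python follows; the sample lines and their field layout are invented for illustration, since real access-log formats vary:

```python
# Count hits per client address in a web access log.
# The sample lines below are invented; real logs vary in format,
# but the shape of the script stays this small.
from collections import Counter

def top_clients(lines, n=3):
    """Return the n most frequent first fields (client addresses)."""
    hits = Counter(line.split()[0] for line in lines if line.strip())
    return hits.most_common(n)

sample = [
    '10.0.0.1 - - [01/Jul/2008] "GET / HTTP/1.1" 200',
    '10.0.0.2 - - [01/Jul/2008] "GET /a HTTP/1.1" 200',
    '10.0.0.1 - - [01/Jul/2008] "GET /b HTTP/1.1" 404',
]
print(top_clients(sample, 2))   # → [('10.0.0.1', 2), ('10.0.0.2', 1)]
```

A few lines do the whole job; the same script pointed at a multi-gigabyte log is still just a few lines.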
Loui writes, "Indeed, scripting languages are not the answer for long-lasting, CPU-intensive nested loops." But then, Java and C++ and Ada aren't the answer for all the code we write, either. Many of the daily tasks that programmers perform lie in the space better covered by scripting languages. After learning a simpler language that is useful for these daily tasks, students can move on to larger-scale problems and learn the role of a larger-scale language in solving them. That seems more natural to me than going in the other direction.
Now let's consider the case of academic programming languages research. A lot of interesting work is being done in industry on the design and implementation of scripting languages, but Loui laments that academic PL research still focuses on syntactic and semantic issues of more traditional languages.
Actually, I see a lot of academic work on DSLs -- domain-specific languages -- that is of value. One problem is that this research is often so theoretical that it is beyond the interest of programmers in the trenches. Then again, it's beyond the mathematical ability and interest of many CS academics, too. (I recently had to comfort a tech entrepreneur friend of mine who was distraught that he couldn't understand even the titles of some PL theory papers on the resume of a programmer he was thinking of hiring. I told him that the lambda calculus does that to people!)
Loui suggests that PL research might profitably move in a direction taken by linguistics and consider pragmatics rather than syntax and semantics. Instead of proving something more about type systems, perhaps a languages researcher might consider "the disruptive influence that Ruby on Rails might have on web programming". Studying how well "convention over configuration" works in practice might be of as much use as incrementally extending a compiler optimization technique. The effect of pragmatics research would further blur the line between programming languages and software engineering, a line we have seen crossed by some academics from the PLT Scheme community. This has turned out to be practical for PL academics who are interested in tools that support the programming process.
Loui's discussion of programming pragmatics reminds me of my time studying knowledge-based systems. Our work was pragmatic, in the sense that we sought to model the algorithms and data organization that expert problem solvers used, which we found to be tailored to specific problem types. Other researchers working on such task-specific architectures arrived at models consistent with ours. One particular group went beyond modeling cognitive structures to the sociology of problem solving, John McDermott's lab at Carnegie Mellon. I was impressed by McDermott's focus on understanding problem solvers in an almost anthropological way, but at the time I was too hopelessly in love with the algorithm and language side of things to incorporate this kind of observation into my own work. Now, I recognize it as the pragmatics side of knowledge-based systems.
(McDermott was well known in the expert systems community for his work on the pioneering programs R1 and XCON. I googled him to find out what he was up to these days but didn't find much, but through some publications, I infer that he must now be with the Center for High Assurance Computer Systems at the Naval Research Laboratory. I guess that accounts for the sparse web presence.)
Reading Loui's article was an enjoyable repast, though even he admits that much of the piece reflects old arguments from proponents of dynamic languages. It did have, I think, at least one fact off track. He asserts that Java displaced Scheme as the primary language used in CS1. If that is true, it is so only for a slender subset of more elite schools; perhaps Scheme merely made inroads during a brief interregnum between Pascal (a traditional procedural language that was small and simple enough to mostly stay out of the way of programmers and learners) and Java.
As with so many current papers, one of the best results of reading it is a reminder of a piece of classic literature, in this case Ousterhout's 1998 essay. I usually read this paper again each time I teach programming languages, and with my next offering of that course to begin in three weeks, the timing is perfect to read it again.
Colleague and reader Michael Berman pointed me to the July 2008 issue of IEEE Computer, which includes an article on the virtues of scripting languages, Ronald Loui's In Praise of Scripting: Real Programming Pragmatism. Loui's inspiration is an even more important article in praise of scripting, John Ousterhout's classic Scripting: Higher-Level Programming for the 21st Century. Both papers tell us that scripting deserves more respect in the hierarchy of programming and that scripting languages deserve more consideration in the programming language and CS education communities.
New programming languages come from many sources, but most are created to fill some niche. Sometimes the niche is theoretical, but more often the creators want to be able to do something more easily than they can with existing languages. Scripting languages in particular tend to originate in practice, to fill a niche in the trenches, and grow from there. Sometimes, they come to be used just like a so-called general-purpose programming language.
When programmers have a problem that they need to solve repeatedly, they want a language that gives them tools that are "ready at hand". For these programming tasks, power comes from the level of abstraction provided by built-in tools. Usually these tools are chosen to fill the needs of a specific niche, but they almost always include the ability to process text conveniently, quickly, and succinctly.
Succinctness is a special virtue of scripting languages. Loui mentions the virtue of short source code, and I'm surprised that more people don't talk about the value of small programs. Loui suggests one advantage that I rarely see discussed: languages that allow and even encourage short programs enable programmers to get done with a task before losing motivation or concentration. I don't know how important this advantage is for professional programmers; perhaps some of my readers who work in the real world can tell me what they think. I can say, though, that, when working with university students, and especially novice programmers, motivation and concentration are huge factors. I sometimes hear colleagues say that students who can't stay motivated and concentrate long enough to solve an assignment in C++, Ada, or Java probably should not be CS majors. This seems to ignore reality, both of human psychology and of past experience with students. Not to mention the fact that we teach non-majors, too.
Another advantage of succinctness Loui proposes relates to programmer error. System-level languages include features intended to help programmers make fewer errors, such as static typing, naming schemes, and verbosity. But they also require programmers to spend more time writing code and to write more code, and in that time programmers find other ways to err. This, too, is an interesting claim if applied to professional software development. One standard answer is that software development is not "just" programming and that such errors would disappear if we simply spent more time up-front in analysis, modeling, and design. Of course, these activities add even more time and more product to the lifecycle, and create more space for error. They also put farther in the future the developers' opportunity to get feedback from customers and users, which in many domains is the best way to eliminate the most important errors that can arise when making software.
Again, my experience is that students, especially CS1 students, find ways to make mistakes, regardless of how safe their language is.
One way to minimize errors and their effects is to shrink the universe of possible errors. Smaller programs -- less code -- are one way to do that. It's harder to make as many or even the same kind of errors in a small piece of code. It's also easier to find and fix errors in a small piece of code. There are exceptions to both of these assertions, but I think that they hold in most circumstances.
Students also have to be able to understand the problem they are trying to solve and the tools they are using to solve it. This places an upper bound on the abstraction level we can allow in the languages we give our novice students and the techniques we teach them. (This has long been an argument made by people who think we should not teach OO techniques in the first year, that they are too abstract for the minds of our typical first-year students.) All other things equal, concrete is good for beginning programmers -- and for learners of all kinds. The fact that scripting languages were designed for concrete tasks means that we are often able to make the connection for students between the language's abstractions and tasks they can appreciate, such as manipulating images, sound, and text.
My biases resonate with this claim in favor of scripting languages:
Students should learn to love their own possibilities before they learn to loathe other people's restrictions.
I've always applied this sentiment to languages such as Smalltalk and Scheme which, while not generally considered scripting languages, share many of the features that make scripting languages attractive.
In this regard, Java and Ada are the poster children in my department's early courses. Students in the C++ track don't suffer from this particular failing as much because they tend not to learn C++ anyway, but a more hygienic C. These students are more likely to lose motivation and concentration while drowning in an ocean of machine details.
When we consider the problem of teaching programming to beginners, this statement by Loui stands out as well:
Students who learn to script early are empowered throughout their college years, especially in the crucial Unix and Web environments.
Over the years, I have come to think that even more important than usefulness for summer jobs is the usefulness a language brings to students in their daily lives, and the mindset it fosters. I want CS students to customize their environments. I want them to automate the tasks they do every day when compiling programs and managing their files. I want them to automate their software testing.
When students learn a big, verbose, picky language, they come to think of writing a program as a major production, one that may well cause more pain in the short term than it relieves in the long term. Even if that is not true, the student looks at the near-term pain and may think, "No, thanks." When students learn a scripting language, they can see that writing a program should be as easy as having a good idea -- "I don't need to keep typing these same three commands over and over", or "A program can reorganize this data file for me." -- and writing it down. A program is an idea, made manifest in executable form. Programs can make our lives better. Of all people, computer scientists should be able to harness their power -- even CS students.
This post has grown to cover much more than I had originally planned, and taken more time to write. I'll stop here for now and pick up this thread of thought in my next entry.
I recently started reading The Art of Possibility, by Roz and Ben Zander, and it brought to mind a pattern I have seen many times in literature and in life. Early on, the Zanders explain that this book is "not about making incremental changes that lead to new ways of doing things based on old beliefs". It is "geared toward causing a total shift of posture [and] perceptions"; it is "about transforming your entire world".
That's big talk, but the Zanders are not alone in this message. When talking to companies about creating new products, reaching customers, and running a business, Guy Kawasaki uses the mantra Revolution, Then Evolution. Don't try to get better at what you are doing now, because you aren't always doing the right things. But also don't worry about trying to be perfect at doing something new, because you probably won't be. Transform your company or your product first, then work to get better.
This pattern works in part because people need to be inspired. The novelty of a transformation may be just what your customers or teammates need to rally their energies, when "just" trying to get better will make them weary.
It also works despite running contrary to our fixation these days with "evolving". Sometimes, you can't get there from here. You need a mutation, a change, a transformation. After the transformation, you may not be as good as you would like for a while, because you are learning how to see the world differently and how to react to new stimuli. That is when evolution becomes useful again, only now moving you toward a higher peak than was available in the old place.
I have seen examples of this pattern in the software world. Writing software patterns was a revolution for many companies and many practitioners. The act of making explicit knowledge that had been known only implicitly, or the act of sharing internal knowledge with others and growing a richer set of patterns, requires a new mindset for most of us. Then we find out we are not very good at it, so we work to get better, and soon we are operating in a world that we may not have been able even to imagine before.
Adopting agile development, especially a practice-laden approach such as XP, is for many developers a Revolution, Then Evolution experience. So are major lifestyle changes such as running.
Many of you will recognize an old computational problem that is related to this idea: hill climbing. Programs that do local search sometimes get stuck at a local maximum. A better solution exists somewhere else in the search space, but the search algorithm makes it impossible for the program to get out of the neighborhood of the local max. One heuristic for breaking out of this circumstance is occasionally to make a random jump somewhere else in the search space, and see where hill climbing leads. If it leads to a better max, stay there, else jump back to the starting point.
In AI and computer science more generally, it is usually easier to peek somewhere else, try for a while, and pop back if it doesn't work out. Most individuals are reluctant to make a major life change that may need to be undone later. We are, for the most part, beings living in serial time. But it can be done. (I sometimes envy the freer spirits in this world who seem built for this sort of experimentation.) It's even more difficult to cause a tentative radical transformation within an organization or team. Such a change disorients the people involved and strains their bonds, which means that you had better mean it when you decide to transform the team they belong to. This is a major obstacle to Revolution, Then Evolution, and one reason that within organizations it almost always requires a strong leader who has earned everyone's trust, or at least their respect.
As a writer of patterns, I struggle with how to express the context and problem for this pattern. The context seems to be "life", though there are certainly some assumptions lurking underneath. Perhaps this idea matters only when we are seeking a goal or have some metric for the quality of life. The problem seems to be that we are not getting better, despite an effort to get better. Sometimes, we are just bored and need a change.
Right now, the best I can say from my own experience is that Revolution, Then Evolution applies when it has been a while since I made long-term progress, when I keep finding myself revisiting the same terrain again and again without getting better. This is a sign that I have plateaued or found a local maximum. That is when it is time to look for a higher local max elsewhere -- to transform myself in some way, and then begin again the task of getting better by taking small steps.
... for months, in an operation one army officer likened to a "broken telephone," military intelligence had been able to convince Ms. Betancourt's captor, Gerardo Aguilar, a guerrilla known as "Cesar," that he was communicating with his top bosses in the guerrillas' seven-man secretariat. Army intelligence convinced top guerrilla leaders that they were talking to Cesar. In reality, both were talking to army intelligence.
As Bruce Schneier reports in Wired magazine, this strategy is well-known on the internet, both to would-be system crackers and to security experts. The risk of man-in-the-middle attacks is heightened on-line because the primary safeguard against them -- shared social context -- is so often lacking. Schneier describes some of the technical methods available for reducing the risk of such attacks, but his tone is subdued... Even when people have a protection mechanism available, as they do in SSL, they usually don't take advantage of it. Why? Using the mechanism requires work, and most of us are just too lazy.
Then again, the probability of being victimized by a man-in-the-middle attack may be small enough that many of us can rationalize that the cost is greater than the benefit. That is a convenient thought, until we are victimized!
The problem feature that makes man-in-the-middle attacks possible is unjustified trust. This is not a feature of particular technical systems, but of any social system that relies on mediated communication. One of the neat things about the Colombian hostage story is that it shows that some of the problems we study in computer science are relevant in a wider context, and that some of our technical solutions can be relevant, too. A little computer science can amplify the problem solving of almost anyone who deals with "systems", whatever their components.
This story shows a potential influence from computing on the wider world. Just so that you know the relationship runs both ways, I point you to Joshua Kerievsky's announcement of "Programming with the Stars", one of the events on the Developer Jam stage at the upcoming Agile 2008 conference. Programming with the Stars adapts the successful formula of Dancing with the Stars, a hit television show, to the world of programming. On the TV show, non-dancers of renown from other popular disciplines pair with professional dancers for weekly dance competitions. Programming with the Stars will work similarly, only with (pair) programming plugged in for dancing. Rather than competitions involving samba or tango, the competitions will be in categories such as test-driven development of new code and refactoring a code base.
As in the show, each pair will include an expert and a non-expert, and there will be a panel of three judges:
I've already mentioned Uncle Bob in this blog, even in a humorous vein, and I envision him playing the role of Simon Cowell from "American Idol". How Davies and Hill compare to Paula Abdul and Randy Jackson, I don't know. But I expect plenty of sarcasm, gushing praise, and hip lingo from the panel, dog.
Computer scientists and software developers can draw inspiration from pop culture and have a little fun along the way. Just don't forget that the ideas we play with are real and serious. Ask those rescued hostages.
I've been trying to take a break from the office and spend some time at home and with my family. Still, I enjoy finding time to read an occasional technical article and be inspired. While waiting for my daughter's play rehearsal to end last night, I read A Conversation with Christos Papadimitriou, a short interview in the August 2008 issue of Dr. Dobb's Journal. I first learned of Papadimitriou from a textbook of his that we used in one of my earliest graduate courses, Elements of the Theory of Computation. Since that time, he has done groundbreaking work in computational complexity and algorithms, with applications in game theory, economics, networks, and most recently bioinformatics. It seems that many of the best theoreticians have a knack for grounding their research in problems that matter.
The article includes several tidbits that might interest computer scientists and professional programmers of various sorts. Some are pretty far afield from my work. For instance, Papadimitriou and two of his students recently produced an important result related to the Nash Equilibrium in game theory (have you seen A Beautiful Mind?) Nash's theorem tells us that an equilibrium exists in every game, but it does not tell us how to find the equilibrium. Is it possible to produce a tractable algorithm for finding it? Papadimitriou and his students showed that Nash's theorem depends intrinsically on the fixed-point theorem that is the basis of Nash's proof, which means that, in practice, we cannot produce such an algorithm; finding a Nash equilibrium for any given problem is intractable.
The interview spent considerable time discussing Papadimitriou's recent work related to the Internet and the Web, which are ideas I will likely read more about. Papadimitriou sees the net as an unusual opportunity for computer scientists: a chance to study a computational artifact we didn't design. Unlike our hardware and software systems, it "emerged from an interaction of millions of entities on the basis of deliberately simple protocols". The result is a "mystery" that our designed artifacts can't offer. For a theoretical CS guy, the net and web serve as a research lab of unprecedented size.
It also offers a platform for research at the intersection of computing and other disciplines, such as communication, where my CS grad student Sergei Golitsinski is taking his research. The interviewer quoted net pioneer John Gilmore in the same arena: "The Net interprets censorship as damage and routes around it." This leads to open questions about how rumors spread, an area that Papadimitriou calls "information epidemiology". One of my former grad students, Nate Labelle, worked in this area for a particular part of the designed world, open-source software packages, and I'd love to have a student delve into the epidemiology of more generalized information.
I would also like to read Papadimitriou's novel, Turing. I recall when it came out and just haven't gotten around to asking my library to pick it up or borrow it. In the interview, Papadimitriou said,
I discovered this [novel] was inside me and had to come out, so I took time to write it. I couldn't resist it. ... If I had not done it, I would be a less happy man.
Powerful testimony, and the chance to read CS-themed fiction doesn't come along every day.
Last Sunday, I ran 9 miles. It was my longest long run since coming down with whatever ails me 10 weeks ago or so. It felt better than I expected. The result was 30+ miles for that week. My plan for the past week was to repeat that mileage and let my body adjust before trying to do more.
But then I felt rundown all week. I felt slow when I was on the road, and I felt listless at work and at home. I decided to skip my Friday run, in order to recover a bit, and maybe rest a little.
Friday night, my whole family drove three hours to Minneapolis. My wife and older daughter were to attend a dance camp for a few days, and my younger daughter and I tagged along for Friday evening and Saturday.
Saturday, the three of us who weren't dancing spent all day on our feet, mostly walking to, from, and at the Mall of America. (Yes, it's a very big mall.) At the end of the day, I was exhausted, and my younger daughter and I drove home.
This morning, I slept in, recovering a bit. I decided to give my planned 9-miler a go in the afternoon, but even before I started I was thinking of contingencies. I would take it slow, and if I didn't feel well, I could trim it back to 8, 7, or even 6 miles. The weather would be quite warm, and I usually run in the morning before heat is an issue, so I carried water for even this short "long run".
I felt good. I ran faster and more comfortably than last week. For Mile 7, I ran a shocking 8:18. (The time is shocking only in the context of the last ten weeks!) I slowed over the last 2.5 miles, yet my overall time was almost 3 minutes faster than last week's run over the same route.
Of course, soon I had to sleep, and sleep I did, hard and deep for an hour or more.
My body is getting back into the swing of running, but in certain ways I'm still where I was weeks ago. What's worse, I don't know how my body will react to any given run or day. I'm still figuring things out, and hoping to get well all the while.
After reading Lockhart, I read Matthias Felleisen's response to Lockhart, and from there I read Matthias's design for a second course to follow How to Design Programs. From an unlinked reference there, I finally found A Critique of Abelson and Sussman, also known as "Why Calculating Is Better Than Scheming" (and also available from the ACM Digital Library). I'm not sure why I'd never run into this old paper before; it appeared in a 1987 issue of the SIGPLAN Notices. In any case, I am glad I did now, because it offers some neat insights on teaching introductory programming. Some of you may recall its author, Philip Wadler, from his appearance in this blog as Lambda Man a couple of OOPSLAs ago.
In this paper, Wadler argues that Structure and Interpretation of Computer Programs, which I have lauded as one of the great CS books, could be improved as a vehicle for teaching introductory programming by using a language other than Scheme. In particular, he thinks that four language features are helpful, if not essential: pattern matching, a syntax closer to traditional mathematical notation, static typing, and lazy evaluation.
Read the paper for an excellent discussion of each, but I will summarize. Pattern matching pulls the syntax of a multi-way decision out of a single function body and creates a separate expression for each case. This is similar to writing separate functions for each case, and in some ways resembles function overloading in languages such as Java and C++. A syntax more like traditional math notation is handy when teaching students to derive expressions and to reason about values and correctness. Static typing requires code to state clearly the kinds of objects it manipulates, which eliminates a source of confusion for students. Finally, lazy evaluation allows programs to express meaningful ideas in a natural way without having the language force computations that are not strictly necessary. This can also be useful when doing derivation and proof, but it also opens the door to some cool applications, such as infinite streams.
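Wadler's lazy-evaluation examples are written in a lazy functional language, but Ruby's Enumerator::Lazy gives a rough flavor of the idea. This little sketch (mine, not Wadler's) describes the infinite stream of all Fibonacci numbers, yet computes only the values we demand:

```ruby
# A generator for *all* Fibonacci numbers. Because it is lazy,
# nothing past the values we actually ask for is ever computed.
fibs = Enumerator.new do |yielder|
  a, b = 0, 1
  loop do
    yielder << a
    a, b = b, a + b
  end
end.lazy

puts fibs.first(10).inspect                  # => [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
puts fibs.select(&:even?).first(4).inspect   # => [0, 2, 8, 34]
```

The stream is a value we can pass around, filter, and map, deferring all the work until someone finally asks for concrete numbers.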
We teach functional programming and use some of these concepts in a junior-/senior-level programming languages course, where many of Wadler's concerns are less of an issue. (They do come into play with a few students, though; Wadler might say we wouldn't have these problems if we taught our intro course differently!) But for freshmen, the smallest possibilities of confusion become major confusions. Wadler offers a convincing argument for his points, so much so that Felleisen, a Scheme guy throughout, has applied many of these suggestions in the TeachScheme! project. Rather than switching to a different language, the TeachScheme! team chose to simplify Scheme through a series of "teaching languages" that expose concepts and syntax just-in-time.
If you want evidence that Wadler is describing a very different way to teach introductory programming, consider this from the end of Section 4.1:
I would argue that the value of lazy evaluation outweighs the value of being able to teach assignment in the first course. Indeed, I believe there is great value in delaying the introduction of assignment until after the first course.
The assignment statement is like mom and apple pie in most university CS1 courses! The typical CS faculty could hardly conceive of an intro course without assignment. Abelson and Sussman recognized that assignment need not be introduced so early by waiting until the middle of SICP to use set!. But for most computer scientists and CS faculty, postponing assignment would require a Kuhn-like paradigm shift.
Advocates of OOP in CS1 encountered this problem when they tried to do real OOP in the first course. Consider the excellent Object-Oriented Programming in Pascal: A Graphical Approach, which waited until the middle of the first course to introduce if-statements. From the reaction of most faculty I know, you would have thought that Conner, Niguidula, and van Dam were asking people to throw away The Ten Commandments. Few universities adopted the text despite its being a wonderful and clear introduction to programming in an object-oriented style. As my last post noted, OOP causes us to think differently, and if the faculty can't make the jump in CS1 then students won't -- even if the students could.
(There is an interesting connection between the Conner, Niguidula, and van Dam approach and Wadler's ideas. The former postpones explicit decision structures in code by distributing them across objects with different behavior. The latter postpones explicit decision structures by distributing them across separate cases in the code, which look like overloaded function definitions. I wonder if CS faculty would be more open to waiting on if-statements through pattern matching than they were through the dynamic polymorphism of OOP?)
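A toy contrast makes the parallel concrete. These are illustrative sketches of my own, not code from either source, and they use Ruby 3's case/in as a stand-in for function definition by cases:

```ruby
# Two ways to disperse a decision without writing an explicit if-statement.

# OO style: the decision is distributed across objects, via dynamic polymorphism.
class Circle
  def initialize(r); @r = r; end
  def area; 3.14159 * @r * @r; end
end

class Square
  def initialize(s); @s = s; end
  def area; @s * @s; end
end

# Pattern-matching style: the decision is distributed across separate
# clauses, which read like overloaded definitions of one function.
def area(shape)
  case shape
  in [:circle, r] then 3.14159 * r * r
  in [:square, s] then s * s
  end
end

puts Square.new(3).area    # => 9
puts area([:square, 3])    # => 9
```

In both versions, adding a new kind of shape means adding a new case, not threading another branch through a monolithic conditional.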
Wadler indicates early on that his suggestions do not presuppose functional programming except perhaps for lazy evaluation. Yet his suggestions are not likely to have a wide effect on CS1 in the United States any time soon, because even if they were implemented in a course using an imperative language, most schools simply don't teach CS1 in a way compatible with these ideas. Still, we would be wise to take them to heart, as Felleisen did, and use them where possible to help us make our courses better.
Most papers presented at SIGCSE and the OOPSLA Educators' Symposium are about teaching methods, not computational methods. When the papers do contain new technical content, it's usually content that isn't really new, just new to the audience or to mainstream use in the classroom. The most prominent example of the latter that comes to mind immediately is the series of papers by Zung Nguyen and Stephen Wong at SIGCSE on design patterns for data structures. Those papers were valuable in principle because they showed that how one conceives of containers changes when one is working with objects. In practice, they sometimes missed their mark because they were so complex that many teachers in the audience said, "Cool! But I can't do that in class."
However, the OOPSLA Educators' Symposium this year received a submission with a cool object-oriented implementation of a common introductory programming topic. Unfortunately, it may not have made the cut for inclusion based on some technical concerns of the committee. Even so, I was so happy to see this paper and to play with the implementation a little on the side! It reminded me of one of the first efforts I saw in a mainstream CS book to show how we think differently about a problem we all know and love when working with objects. That was Tim Budd's implementation of the venerable eight queens problem in An Introduction to Object-Oriented Programming.
Rather than implement the typical procedural algorithm in an object-oriented language, Budd created a solution that allowed each queen to solve the problem for herself by doing some local computation and communicating with the queen to her right. I remember first studying his code to understand how it worked and then showing it to colleagues. Most of them just said, "Huh?" Changing how we think is hard, especially when we already have a perfectly satisfactory solution for the problem in mind. You have to want to get it, and then work until you do.
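The flavor of that design can be sketched in a few lines. This is my paraphrase of the cooperative idea, not Budd's code; the method names and details are invented. Each queen tries to find a row safe from every queen to her left, asking her neighbor to move along when she runs out of rows:

```ruby
# A sketch of the cooperative eight-queens idea: no global board,
# just queens doing local computation and talking to a neighbor.
class Queen
  def initialize(column, neighbor)
    @column, @neighbor = column, neighbor
    @row = 1
  end

  # Can this queen, or any queen to her left, attack the given square?
  def attacks?(test_row, test_column)
    dist = test_column - @column
    return true if test_row == @row || (test_row - @row).abs == dist
    @neighbor ? @neighbor.attacks?(test_row, test_column) : false
  end

  # Settle on a row that the queens to the left cannot attack.
  def find_solution
    while @neighbor&.attacks?(@row, @column)
      return false unless advance
    end
    true
  end

  # Try the next row; when out of rows, ask the neighbor to advance.
  def advance
    if @row < 8
      @row += 1
    else
      return false unless @neighbor&.advance
      @row = 1
    end
    find_solution
  end

  def positions
    (@neighbor ? @neighbor.positions : []) << [@column, @row]
  end
end

last = nil
(1..8).each { |col| (last = Queen.new(col, last)).find_solution }
puts last.positions.inspect   # one valid placement of the eight queens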
You can still find Budd's code from the "download area" link on the textbook's page, though you might find a more palatable version in the download area for the book's second edition. I just spent a few minutes creating a Ruby version, which you are welcome to use. It is slightly Ruby-ized but mostly follows Budd's solution for now. (Note to self: have fun this weekend refactoring that code!)
Another thing I liked about "An Introduction to Object-Oriented Programming" was its linguistic ecumenism. All examples were given in four languages: Object Pascal, C++, Objective C, and Smalltalk. The reader could learn OOP without tying it to a single language, and Budd could point out subtle differences in how the languages worked. I was already a Smalltalk programmer and used this book as a way to learn some Objective C, a skill which has been useful again this decade.
(Budd's second edition was a step forward in one respect, by adding Java to the roster of languages. But it was also the beginning of the end. Java soon became so popular that the next version of his book used Java only. It was still a good book for its time, but it lost some of its value when it became monolingual.)
[Update: Added a link to my interview at Confessions of a Science Librarian.]
Some months, I go through stretches when I write a lot. I started this month with a few substantive posts and a few light posts in the span of a week. Back in November 2007, I wrote twice as many posts as the typical month and more than any month since my first few months blogging. That month, I posted entries eleven days in a row, driven by a burst of thoughts from time spent at a workshop on science and computer science. This month, I had the fortune to read some good articles and the chance to skip real work, think, and write. Sometimes, the mood to write takes hold.
I have had an idea for a long time to write an entry that was motivated by reading George Orwell's essay Why I Write, but never seem to get to it. I'm not getting to it today, either. But it came to mind again for two reasons. First, I spent the morning giving a written interview to John Dupuis, who blogs at Confessions of a Science Librarian. John is a reader of my blog and asked me to share some of my ideas with his readers. I was honored to be asked, and so spent some time this morning reflecting on my blog, what and why I write. Second, today is the fourth anniversary of my first blog post.
Responding to John's questions is more writing than I do on most days. I don't have enough energy left to write a substantive post yet today, but I'm still in a reflective frame of mind about why I write.
Do I really need to blog? Someone has already said what I want to say. In that stretch of posts last November, I cited Norvig, Minsky, and Laurel, among others, talking about the same topics I was writing about. Some reasons I can think of are:
There are certainly other self-interested reasons to write. There is noble self-interest:
Share your knowledge. It's a way to achieve immortality.
-- the 14th Dalai Lama
And there is the short-term self-interest. I get to make connections in my own mind. Sometimes I am privileged to see my influence on former students, when they respond to something I've written. And then there is the lazy blog, where some reader knows or has something I don't and shares. At times, these two advantages come together, as when former student Ryan Dixon brought me a surprise gift last winter.
Year Five begins today, even if still without comments.
While reading this morning I came across a link to this essay. College students should read it, because it points out many of the common anti-patterns in the essays that we professors see -- even in papers written for computer science courses.
Of course, if you read this blog, you know that my writing is a poster child for linguistic diffidence, and pat expressions are part of my stock in trade. It's sad to know that these anti-patterns make up so much of my word count.
This web page also introduced me to Roberts's book Patterns in English. With that title, I must check it out. I needed a better reason to stop by the library than merely to return books I have finished. Now I have one.
Last week I wrote about an essay by Paul Lockhart from a few years ago that has been making the rounds this year. Lockhart lamented that math is so badly misrepresented in our schools that students grow up missing out on its beauty, while still not being able to perform the skills in whose name we have killed scholastic math. I've long claimed that we would produce more skilled students if we allowed them to approach these skills from the angle of engaging problems. For Lockhart, such problems come from the minds of students themselves and may have no connection to the "real world".
In computer science, I think letting students create their own problems is also quite valuable. It's one of the reasons that open-ended project courses and independent undergraduate research so often lead to an amazing level of learning. When a group of students wants to train a checkers-playing program to learn from scratch, they'll figure out ways to do it. Along the way, they learn a ton -- some of it material I would have selected for them, some beyond what I would have guessed.
The problems CS students create for themselves often do come straight out of their real world, and that's okay, too. Many of us in CS love abstract problems such as the why of Y, but most of us -- even the academics who make a living in the abstractions -- came to computing from concrete problems. I think I was this way, starting when I learned Basic in high school and wanted to make things, like crosstables for chess tournaments and ratings for the players in our club. From there, it wasn't that far a journey into Gödel, Escher, Bach and grad school! Along the way, I had professors and friends who introduced me to a world much larger than the one in which I wrote programs to print pages on which to record chess games.
This is one reason that I tout Owen Astrachan's problem-based learning project for CS. Owen is interested in problems that come from the real world, outside the minds of the crazy but harmless computer scientists he and I know, love, and are. These are the problems that matter to other people, which is good for the long-term prospects of our discipline and great for hooking the minds of kids on the beauty and power of computing. For computer science students, I am a proponent of courses built around projects, because they are big enough to matter to CS students and big enough to teach them lessons they can't learn working on smaller pieces of code.
With an orientation toward the ground, discussions of functional programming versus object-oriented programming seem almost not to matter. Students can solve any problem in either style, right? So who cares? Well, those of us who teach CS care, and our students should, too, but it's important to remember that this is an inward-looking discussion that won't mean much to people outside of CS. It also won't matter much to our students as they first begin to study computer science, so we can't turn our first-year courses into battlegrounds of ideology. We need to be sure that, whatever style we choose to teach first, we teach it in a way that helps students solve problems -- and create the problems that interest them. The style needs to feel right for the kind of problems we expose them to, so that the students can begin to think naturally about computational solutions.
In my department we have for more than a decade introduced functional programming as a style in our programming languages course, after students have seen OOP and procedural programming. I see a lot of benefit in teaching FP sooner, but that would not fit our faculty all that well. (The students would probably be fine!) Functional programming has a natural home in our languages course, where we teach it as an especially handy way of thinking about how programming languages work. This is a set of topics we want students to learn anyway, so we are able to introduce and practice a new style in the context of essential content, such as how local variables work and how to write a parser. If a few students pick up on some of the beautiful ideas and go do something crazy, like fire up a Haskell interpreter and try to grok monads, well, that's just fine.
On my way into a store this afternoon to buy some milk, I ran into an old friend. He moved to town a decade or so ago and taught art at the university for five years before moving on to private practice. As we reminisced about his time on the faculty, we talked about how much we both like working with students. He mentioned that he recently attended a former student's wedding -- his 34th.
Thirty-four weddings from five years of teaching. I've been teaching for sixteen years and have been invited to only a handful of weddings -- three or four.
Either art students are a different lot from CS students, or I am doing something wrong...
I wrote about a recent CS curricular discussion, which started with a blog posting by Mark Guzdial. Reading the comments on Guzdial's post is worth the time, as you'll find a couple of lengthy remarks by Alan Kay. As always, Kay challenges even computer science faculty to think beyond the boundaries of our discipline, to the role that what our students learn from us plays in a democratic world.
One of Kay's comments caught my attention for connections to a couple of things I've written about in recent years. First, consider this:
I posit that this is still the main issue in America. "Skilled children" is too low a threshold for our system of government: we need "educated adults". ... I think the principle is clear and simple: there are thresholds that have to be achieved before one can enter various conversations and processes. "Air guitar and attitude" won't do.
Science is a pretty good model (and it was used by the framers of the US). It is a two level system. The first level has to admit any and all ideas for consideration (to avoid dogma and becoming just another belief system). But the dues for "free and open" are that science has built the strongest system of critical thinking in human history to make the next level threshold for "worthy ideas" as high as possible. This really works.
This echoes the split mind of a scientist: willing to experiment with the widest set of ideas we can imagine, then setting the highest standard we can imagine for accepting the idea as true. As Kay goes on to say, this approach is embedded in the fabric of the American mentality for free society and government. This is yet another good reason for all students to learn and appreciate modern science; it's not just about science.
Next, consider this passage that follows soon after:
"Air guitar" is a metaphor for choosing too tiny a subset of a process and fooling oneself that it is the whole thing. ... You say "needs" and I agree, but you are using it to mean the same as "wants", and it is simply not the case that education should necessarily adapt to the "wants" of students. This is where the confusion of education and marketing enters. The marketeers are trying to understand "wants" (and even inject more) and cater to them for a price; real educators are interested in "needs" and are trying to fulfill these needs. Marketeers are not trying to change but to achieve fit; educators are trying to change those they work with. Introducing marketing ideas into educational processes is a disaster in the making.
I've written occasionally about ideas from marketing, from the value of telling the right story to the creation of new programs. I believe those things and think that we in academia can learn a lot from marketers with the right ideas. Further, I don't think that any of this is in conflict with what Kay says here. He and I agree that we should not change our curriculum to cater solely to the perceptions and base desires of our clientele, whether students, industry, or even government. My appeal to marketing for inspiration lies in finding better ways to communicate what we do and offer and in making sure that what we do and offer are in alignment with the long-term viability of the culture. The best companies are in business for the long haul and must stay aligned with the changing needs of the world.
Further, as I am certain Kay will agree based on many of the things he has said about Apple of the 1980s, the very best companies create and sell products that their customers didn't even know they wanted. We in academia might learn something from the Apples of our world about how to provide the liberal and professional education that our students need but don't realize they need. The same goes for convincing state legislatures and industry when they view too short a horizon for what we do.
Like Kay, I want to give my students "real computing" and "real education".
I think it is fitting and proper to talk about these issues on Independence Day in the United States. We depend on education to preserve the democratic system in which we live and the values by which we live. But there's more. Education -- including, perhaps especially, science -- creates freedom in the student. The mind becomes free to think greater thoughts and accomplish greater deeds when it has been empowered with our best ideas. Science is one.
I usually write year-end reviews of my running, to look back at accomplishments and disappointments and to think ahead to what the next year will be like. My 2007 review saw a tough January through May, a recovery in June, and then a good training season for the Marine Corps Marathon.
I'm in a looking-forward mood as we reach the midpoint of 2008 because I have a decision to make. My running year started well, and at the end of April I was 140 miles ahead of my 2007 pace, though 130 miles behind my record pace from 2006. Then on May 2, the symptoms that set me back in 2006 returned. I stopped running in order to let my body get better. Unfortunately, the symptoms have not gone away yet, but I eventually decided that if they weren't leaving I might as well run -- as long as I didn't feel worse.
On June 9, I ran again, and I spent the next three weeks slowly building up my mileage, keeping a close eye on how I felt. I'm not getting worse and may have gotten marginally better, though I still experience the wrong kind of fatigue. After weeks of 15, 20, and 26 miles, I am on target to run 30 this week. Not much, by past standards, but it's a start.
Sadly, while building back up, I was unable to run my now-traditional half marathon at our local summer festival. (I wasn't the only one ailing... After the horrible flooding that hit us last month, the festival had to trim its half marathon to a 10-miler.)
The decision I have to make is this: Should I try to run a fall marathon this year? Even last year I did not have to think about this, because I recovered early in May and had two months to get back somewhere near normal in time for a June-through-September training plan. But this week's 30 miles find me a week deep into July... I'm not strong yet, nor 100% healthy, and I'm not sure I am physically ready to take up the gauntlet. I also wonder if I am mentally ready.
But the desire to rise up to the challenge is at least flickering.
If I do go for it, this is the perfect year to aim for a marathon with no time goal -- just run it, finish, and feel that accomplishment. That would be a refreshing change for me.
If I do go for it, I will keep it simple, stay close to home. I am considering the Des Moines Marathon or a much smaller event, the On the Road for Education marathon. At this point, I'm leaning toward the latter for a few reasons. I've never run a small marathon and am curious what it feels like to be out there mostly alone, not in a big crowd with a large audience along the route. It also has a later preregistration cutoff date, which allows a later decision and a smaller fee. Perhaps most important, though, is one lone week: Des Moines is October 19, and Mason City is October 26. At this point, an extra week for training may be worth a lot more than any money or support.
I have to decide soon.
So no, I'm not complaining about the presence
of facts and formulas in our mathematics classes,
I'm complaining about the lack of mathematics
in our mathematics classes.
-- Paul Lockhart
A week or so ago I mentioned reading a paper called A Mathematician's Lament by Paul Lockhart and said I'd write more on it later. Yesterday's post, which touched on the topic of teaching what is useful reminded me of Lockhart, a mathematician who stakes out a position that is at once diametrically opposed to the notion of teaching what is useful about math and yet grounded in a way that our K-12 math curriculum is not. This topic is especially salient for me right now because our state is these days devoting some effort to the reform of math and science education, and my university and college are playing a leading role in the initiative.
Lockhart's lament is not that we teach mathematics poorly in our K-12 schools, but rather that we don't teach mathematics at all. We teach definitions, rules, and formal systems that have been distilled away from any interesting context, in the name of teaching students skills that will be useful later. What students do in school is not what mathematicians do, and that's a shame, because mathematics is fun, creative, beautiful -- art.
As Lockhart described his nightmare of music students not being allowed to create or even play music, having to copy and transpose sheet music instead, I cringed, because I recognized how much of our introductory CS courses work the same way. As he talked about how elementary and HS students never get to "hear the music" in mathematics, I thought of Brian Greene's Put a Little Science in Your Life, which laments the same problem in science education. How have we managed to kill all that is beautiful in these wonderful ideas -- these powerful and even useful ideas -- in the name of teaching useful skills? So sad.
Lockhart sets out an extreme stance. Make math optional. Don't worry about any particular content, or the order of topics, or any particular skills.
Mathematics is the music of reason. To do mathematics is to engage in an act of discovery and conjecture, intuition and inspiration; to be in a state of confusion--not because it makes no sense to you, but because you gave it sense and you still don't understand what your creation is up to; to have a breakthrough idea; to be frustrated as an artist; to be awed and overwhelmed by an almost painful beauty; to be alive, damn it.
I teach computer science, and this poetic sense resonates with me. I feel these emotions about programs all the time!
In the end, Lockhart admits that his position is extreme: the pendulum has swung so far to the "useful skills" side of the continuum that he feels a need to shout out for the "math is beautiful" side. Throughout the paper he tries to address objections, most of which involve our students not learning what they need to know to be citizens or scientists. (Hint: Does anyone really think that most students learn that now? How much worse off could we be to treat math as art? Maybe then at least a few more students would appreciate math and be willing to learn more.)
This paper is long-ish -- 25 pages -- but it is a fun read. His screed on high school geometry is unrestrained. He calls geometry class "Instrument of the Devil" because it so thoroughly and ruthlessly kills the beauty of proof:
Other math courses may hide the beautiful bird, or put it in a cage, but in geometry class it is openly and cruelly tortured.
His discussion of proof as a natural product of a student's curiosity and desire to explain an idea is as well written as any I've read. It extends another idea from earlier in the paper that fits quite nicely with something I have written about computer science: Mathematics is the art of explanation.
By concentrating on what, and leaving out why, mathematics is reduced to an empty shell. The art is not in the "truth" but in the explanation, the argument. It is the argument itself which gives the truth its context, and determines what is really being said and meant. Mathematics is the art of explanation. If you deny students the opportunity to engage in this activity--to pose their own problems, make their own conjectures and discoveries, to be wrong, to be creatively frustrated, to have an inspiration, and to cobble together their own explanations and proofs--you deny them mathematics itself.
I am also quite sympathetic to one of the other themes that runs deeply in this paper:
Mathematics is about problems, and problems must be made the focus of a student's mathematical life.
(Ditto for computer science.)
... you don't start with definitions, you start with problems. Nobody ever had an idea of a number being "irrational" until Pythagoras attempted to measure the diagonal of a square and discovered that it could not be represented as a fraction.
Problems can motivate students, especially when students create their own problems. That is one of the beautiful things about math: almost anything you see in the world can become a problem to work on. It's also true of computer science. Students who want to write a program to do something -- play a game, predict a sports score, track their workouts -- will go out of their way to learn what they need to know. I'm guessing anyone who has taught computer science for any amount of time has experienced this first hand.
As I've mentioned here a few times, my colleague Owen Astrachan is working on a big project to explore the idea of problem-based learning in CS. (I'm wearing the project's official T-shirt as I type this!) This idea is also right in line with Alan Kay's proposal for an "exploratorium" of problems for students who want to learn to communicate via computation, which I describe in this entry.
I love this passage from one of Lockhart's little dialogues:
SALVIATI: ... people learn better when the product comes out of the process. A real appreciation for poetry does not come from memorizing a bunch of poems, it comes from writing your own.
SIMPLICIO: Yes, but before you can write your own poems you need to learn the alphabet. The process has to begin somewhere. You have to walk before you can run.
SALVIATI: ... No, you have to have something you want to run toward.
You just have to have something you want to run toward. For teenaged boys, that something is often a girl, and suddenly the desire to write a poem becomes a powerful motivator. We should let students find goals to run toward in math and science and computer science, and then teach them how.
It's interesting that I end with a running metaphor, and not just because I run. My daughter is a sprinter and now hurdler on her school track team. She sprints because she likes to run short distances and hates to run anything long (where, I think, "long" is defined as anything longer than her race distance!). The local runners' club leads a summer running program for high school students, and some people thought my daughter would benefit. One benefit of the program is camaraderie; one drawback is that it involves serious workouts. Each week the group does a longer run, a day of interval training, and a day of hill work.
I suggested that she might benefit more from simply running more -- not doing workouts that kill her, just building up a base of mileage and getting stronger while enjoying some longer runs. My experience is that it's possible to get over the hump and go from disliking long runs to enjoying them. Then you can move on to workouts that make you faster. So she and I are going to run together a couple of times a week this summer, taking it easy, enjoying the scenery, chatting, and otherwise not stressing about "long runs".
There is an element of beauty versus duty in learning most things. When the task is all duty, you may do it, but you may never like it. Indeed, you may come to hate it and stop altogether when the external forces that keep you on task (your teammates, your sense of belonging) disappear. When you enjoy the beauty of what you are doing, everything else changes. So it is with math, I think, and computer science, too.
A couple of weeks ago I linked to Shriram Krishnamurthi, who mentioned a recent SIGPLAN-sponsored workshop that has proposed a change to ACM's curriculum guidelines. The change is quite simple, shifting ten hours of instruction in programming languages from small topics into a single ten-hour category called "functional programming". Among the small topics that would be affected, coverage of recursion and event-driven programming would be halved, and coverage of virtual machines and language translation would no longer be mandated separately, nor would an "overview" of programming languages.
In practice, the proposal to eliminate coverage of some areas has less effect than you might think. Recursion is a natural topic in functional programming, and event-driven programming is a natural topic in object-oriented programming. The current recommendation of three hours total to cover virtual machines and language translation hardly does them justice anyway; students can't possibly learn any of the valuable ideas in depth in that amount of time. If schools adopt this change, they would spend that time more productively helping students to understand functional programming well. Many schools will probably continue to teach those topics as part of their principles of programming languages course anyway.
I didn't comment on the proposal in detail earlier because it seemed more like the shuffling of deck chairs than a major change in stance. I do approve of the message the proposal sends, namely that functional programming is important enough to be a core topic in computer science. Readers of this blog already know where I stand on that.
Earlier this week, though, Mark Guzdial blogged Prediction and Invention: Object-oriented vs. functional, which has created some discussion in several circles. He starts with "The goal of any curriculum is to prepare the students for their future." Here is my take.
Mark seems to be saying that functional programming is not sufficiently useful to our students to make it a core programming topic. Mandating that schools teach ten hours each of functional and object-oriented programming, he thinks, tells our students that we faculty believe functional programming is -- or will be -- as important as object-oriented programming to their professional careers. Our students get jobs in companies that primarily use OO languages and frameworks, and our curricula should reflect that.
This piece has a vocational tone that I find surprising coming from Mark, and that is perhaps what most people are reacting to when they read it. When he speaks of making sure the curriculum teaches what is "real" to students, or how entry-level programmers often find themselves modifying existing code with an OO framework, it's easy to draw a vocational theme from his article. A lot of academics, especially computer scientists, are sensitive to such positions, because the needs of industry and the perceptions of our students already exert enough pressure on CS curriculum. In practical terms, we have to find the right balance between practical skills for students and the ideas that underlie those skills and the rest of computing practice. We already know that, and "esoteric" topics such as functional programming and computing theory are already part of that conversation.
Whether Mark is willing to stand behind the vocational argument or not, I think there is another theme in his piece that also requires a balance he doesn't promote. It comes back to the role of curriculum guidelines in shaping what schools teach and expressing what we think students should learn. Early on, he says,
I completely disagree that we should try to mandate that much functional programming through manipulation of the curriculum standards.
Then, when teaching more functional programming becomes a recognized best practice, it will be obvious that it should be part of the curriculum standards.
The question is whether curriculum standards should be prescriptive or descriptive. Mark views the current SIGPLAN proposal as prescribing an approach that contradicts both current best practice and the needs of industry, rather than describing best practice in schools around the country. And he thinks curriculum standards should be descriptive.
I am sensitive to this sort of claim myself, because -- like Mark! -- I have been contending for many years with faculty who think OOP is a fad and has no place in a CS curriculum, or at least in our first-year courses. These faculty, both at my university and throughout the country, argue that our courses should be about what students "really do" in the world, not about esoteric design patterns and programming techniques. They end up claiming that people like me are trying to prescribe a paradigm for how our students should think.
The ironic thing, of course, is that over the last fifteen years OOP and Java have gone from being something new to the predominant tools in industry. It's a good thing that some schools started teaching more OOP, even in the first year, and developing the texts and teaching materials that other schools could use to join in later.
(The people arguing against OOP in the first year have not given up the cause; they've now shifted to claiming that we should teach even Java "fundamentals first", going "back to basics" before diving into all that complicated stuff about data and procedures bearing some relation to one another. I've written about that debate before and have tremendous respect for many of the people on the front line of the "basics" argument. I still disagree.)
As in the case of vocational versus theoretical content, I think we need to find the right balance between prescriptive and descriptive curriculum standards. These two dimensions are not wholly independent of each other, but they are different and so call for different balances. I agree with Mark that at least part of our curriculum standard should be descriptive of current practice, both in universities and in industry. Standard curricular practice is important in helping to create some consistency across universities and in helping to keep schools that are out of the know on a solid and steady path. And the simple fact is that our students do graduate into professional careers and need to be prepared to participate in an economy that increasingly depends on information technology. For those of us at state-supported universities, this is a reasonable expectation of the people who pay our bills.
However, I think that we also need some prescriptive elements in our curricula. As Alan Kay says in a comment on Mark's blog, universities have a responsibility not only to produce graduates capable of participating in the economy but also to help students become competent, informed citizens in a democracy. This is perhaps even more important at state-supported universities, which serve the citizenry of the state. This may sound too far from the ground when talking about computer science curriculum, but it's not. The same ideas apply -- to growing informed citizens, and to growing informed technical professionals.
The notion that curriculum standards are partly prescriptive is not all that strange, because it's not that different from how curriculum standards have worked in the past, really. Personally, I like having experts in areas such as programming languages and operating systems helping us keep our curricular standards up to date. I certainly value their input for what they know to be current in the field. I also value their input because they know what is coming, what is likely to have an effect on practice in the near future, and what might help students understand better the more standard content we teach.
At first I had a hard time figuring out Mark's position, because I know him to grok functional programming. Why was he taking this position? What were his goals? His first paragraph seems to lay out his goal for the CS curriculum:
The goal of any curriculum is to prepare the students for their future. In just a handful of years, teachers aim to give the students the background to be successful for several decades.
He then recognizes that "the challenge of creating a curriculum is the challenge of predicting the future."
These concerns seem to sync quite nicely with the notion of encouraging all students to learn a modicum about functional programming! I don't have studies to cite, but I've often heard and long believed that the more different programming styles and languages a person learns, the better a programmer she will be. Mark points to studies showing little direct transfer from skills learned in one language to skills learned in another, and I do not doubt their truth. But I'm not even talking about direct transfer of knowledge from functional programming to OOP; I'm thinking of the sort of expansion of the mind that happens when we learn different ways to think about problems and implement solutions. A lot of the common OO design patterns borrow ideas from other domains, including functional programming. How can we borrow interesting ideas if we don't know about them?
It is right and good that our curriculum standards push a little beyond current technical and curricular practice, because then we are able to teach ideas that can help computing evolve. This evolution is just as important in the trenches of a web services group at an insurance company as it is to researchers doing basic science. In the particular case of functional programming, students learn not only beautiful ideas but also powerful ideas, ideas that are germinating now in the development of programming languages in practice, from Ruby and Python to .NET. Our students need those ideas for their careers.
As I mentioned, Alan Kay chimed in with a few ideas. I think he would disagree with anyone who says we can't predict the future; he would have us invent it, in part through curriculum. His idealism on these issues seems to frustrate some people, but I find it refreshing. We can set our sights higher and work to make something better. When I used the allusion to "shuffling the deck chairs" above, I was thinking of Kay, who is on record as saying that how we teach CS is broken. He has also talked to CS educators and exhorted us to set our sights higher. Kay supports the idea of prescriptive curricula for a number of reasons, the most relevant of which to this conversation is that we don't want to hard-code accidental or misguided practice, even if it's the "best" we have right now. Guzdial rightly points out that we don't want to prescribe new accidental or misguided practices, either. That's where the idea of striking a balance comes in for me. We have to do our best to describe what is good now and prescribe at least a little of what is good for the future.
I see no reason that we can't invent good futures through judiciously defined curricula, just as we invent futures in other arenas. Sure, we face social, societal, and political pressures, but how many arenas don't?
So, what about the particular curriculum proposal under discussion? Unlike Guzdial, I like the message it sends, that functional programming is an important topic for all CS grads to learn about. But in the end I don't think it will cause any dramatic changes in how CS departments work. I used the word "encourage" above rather than Guzdial's more ominous "mandate", because even ACM's curriculum standards have no force of law. Under the proposed plan, maybe a few schools might try to present a coherent treatment of functional programming where now they don't, at the expense of covering a few good ideas at a shallow level. There will continue to be plenty of diversity, one of the values that guides Guzdial's vision. On this, he and I agree strongly. Diversity in curricula is good, both for the vocational reasons he asserts and because we learn even better how to teach CS well from the labors and explorations of others.