I received an interesting request in e-mail today. I was asked to identify a few...
software innovators--people who have challenged, disrupted, and redefined disciplines through radical code.
The request is specifically for programmers, thinkers, and writers who revolutionized my field, computer science. The first people who come to mind, of course, are John McCarthy and Alan Kay. McCarthy may not himself have written code for the first Lisp interpreter back in 1958, but that program certainly revolutionized computing. I know that Kay didn't write all of Smalltalk, even at the beginning (Dan Ingalls is responsible for most or all of the VM), but the ideas in Smalltalk certainly changed how I and many other people think about programming. So they are at the top of my list.
Why limit my response to the products of my tiny brain? I am interested in hearing whom you think changed computing with "radical code". Don't limit yourself to CS, either. I'd love to hear about people who changed other disciplines with their code. Drop me a message with your ideas!
When I first came to UNI, a colleague and I talked a lot about programming by novices, graphical programming, programming by example, declarative programming, and the like. Many people saw ideas such as these as a way to broaden participation by making it possible for people to program without "programming". Anyone can draw a flowchart, right? Or program by dragging and dropping widgets, right?
No, said my colleague. He was fond of saying that all of those solutions were really just new kinds of programming language in disguise; ultimately, you had to know how to write a program. That may not be an insanely difficult task, but it is not a trivial task, either -- even if pictures were the lingua franca.
Yet the world goes on, looking for its next way to bypass the hard work of programming and make it disappear with pretty pictures. So I'm glad when people say out loud in public what my colleague has always said in-house. Here is Uncle Bob Martin weighing in on the latest, model-driven architecture:
Some folks have put a great deal of hope in technologies like MDA. I don't. The reason is that I don't see MDA as anything more than a different kind of computer language. To be effective it will still need 'if' and 'for' statements of some kind. And 'programmers' will still need to write programs in that language, because details will still need to be managed. There is no language that can eliminate the programming step, because the programming step is the translation from requirements to systems irrespective of language. MDA does not change this.
Programming is the translation from requirements to systems, whether we write it down in C, Java, or Ruby -- or an MDA model. There will always be a program, and so there will always be programmers. Making better programming tools helps, but, as Bob says, we also need thinkers, practitioners and academics alike, to figure out better what it means "to program" and to help others learn how to do it.
From running coach and research physiologist Jason Karp, Ph.D.:
... my research published in International Journal of Sport Nutrition and Exercise Metabolism in 2006 showed that chocolate milk is just as good or better than other recovery drinks after exhausting exercise.
According to Karp, other research suggests that I could consume nearly four 8-oz. glasses of chocolate milk within 30 minutes after ending a hard run in order to take in the 0.7g of simple carbs per pound of body weight needed to maximize the rate of glycogen synthesis and thus to speed replenishment of my muscles' stores. The mix of carbs and protein in the drink is almost ideal for the human body under the conditions of hard work.
I heart chocolate milk. Any activity that makes it not only possible but recommended that I consume plentiful amounts of it is okay by me.
Unfortunately, I don't usually drink my chocolate milk as a post-run recovery drink. I tend to drink mine in the evening as a treat. I rationalize that it is good to fuel up before my early morning runs, which I do as soon as I wake up and before eating or drinking anything.
But there is also research to support the idea that not fueling up immediately after a hard run can be a good thing, if done appropriately. Running marathons requires the largest store of glycogen possible, a system that uses newly-ingested glucose as efficiently as possible, and a system that uses fat for fuel as efficiently as possible. "Starving" the muscles immediately after a workout that depletes glycogen stores trains the body to use fat more efficiently and to use newly-ingested glucose more efficiently. Karp calls this "creating a threat to the muscles' survival", to which the body responds effectively. When you finally rebuild glycogen stores later with carbo loading, the muscles will store more than their previous capacity allowed. The human body is an amazing and wonderful machine.
Quick update: I ran 7 miles on Friday in under 56 minutes -- nothing spectacular, but much faster than I have run in over four months. This morning I ran 11 miles, on one of my hilliest routes. The results of these runs are my first 30-mile week in over four months, legs that feel abused, and a small headache from my lingering symptoms. Thirty miles ain't much, but right now it feels okay.
I may well drink a tall glass of chocolate milk this evening.
A couple of weeks ago, I mentioned that I might have my students create their own examples for a homework assignment, along with some of the possible benefits of doing so.
I tried this and, as usual, learned as much or more than my students.
Getting students to think concretely about their tasks is tough, but asking them to write examples seemed to help. Most of them made a pretty good effort and so fleshed out what the one- or two-line text description I gave them meant. I saw lots of the normal cases for each task but also examples at the boundaries of the spec (What if the list is empty?) and on the types of arguments (What if the user passes an integer when the procedure asks for a list? What if the user passes -1 when the procedure expects a non-negative integer?) In class, before the assignment was due, we were able to discuss how much type checking we want our procedures to do, if any, in a language like Scheme without manifest types. Similarly, should we write examples with the wrong number of arguments, which result in an error?
I noticed that most students' examples contrasted cases with different inputs to a procedure, but that few thought about different kinds of output from the procedure. Can filter return an empty list? Well, sure; can you show me an example? I'll know next time to talk to students about this and have them think more broadly about their specs.
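To make the point concrete, here is the sort of output-side example I have in mind, sketched in Java's stream API rather than the Scheme we use in class (the class name and the particular predicates are my own invention, just for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FilterExamples {
    public static void main(String[] args) {
        // An example that contrasts inputs: some elements survive the filter.
        List<Integer> some = List.of(1, 2, 3, 4).stream()
                                 .filter(n -> n % 2 == 0)
                                 .collect(Collectors.toList());
        System.out.println(some);   // [2, 4]

        // An example that contrasts outputs: filter can return an empty
        // list even when its input is non-empty.
        List<Integer> none = List.of(1, 3, 5).stream()
                                 .filter(n -> n % 2 == 0)
                                 .collect(Collectors.toList());
        System.out.println(none);   // []
    }
}
```

The second example is the one students rarely write on their own: nothing about the input looks unusual, yet the output is the boundary case.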
Requiring examples part-way through the assignment did motivate questions earlier than usual. On previous assignments, if I received any questions at all, they tended to arrive in my mailbox the night before the programs were due. That was still the case, but now the deadline was halfway through the assignment period, before they had written any code. And most of the class seemed happy to comply with my request that they write their examples before they wrote their code. (They are rarely in a hurry to write their code!)
Did having their own examples in hand help the students know how much code to write and when to stop? Would examples provided by me have helped as much? I don't know, but I guess 'yes' to both. Hmm. I didn't ask students about this! Next time...
Seeing their examples early helped me as much as writing their examples early helped them. They got valuable feedback, yes, but so did I. I learned a bit of what they were thinking about the specific problems at hand, but I also learned a bit of what they think about more generally when faced with a programming task.
My first attempt at this also gave me some insight about how to describe the idea of writing examples better, and why it's worth the effort. The examples should clarify the textual description of the problem. They aren't about testing. They may be useful as tests later, but they probably aren't sufficient. (They approximate black box testing, but not white box testing.) As clarifiers, one might take an extreme position: If the textual description of the problem were missing, would the examples be enough for us to know what procedure to write? At this extreme, examples with the wrong number and type of arguments might be essential; in the more conventional role of clarifying the spec, those examples are unnecessary.
One thing that intrigued me after I made this assignment is that students might use their examples as the source material for test-driven development. (There's that word again.) I doubt many students consider this on their own; a few have an inclination to write and test code in close temporal proximity, but TDD isn't a natural outgrowth of that for most of them. In any case, we are currently learning a pattern-driven style of programming, so they have a pretty good idea of what their simplest piece of code will look like. There is a nice connection, though. Structural recursion relies on mimicking the structure of the input data, and that data definition also gives the programmer an idea about the kinds of input for which she should have examples. That s-list is either an empty list or a pair...
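The connection is easier to see in code. Here is a sketch in Java rather than Scheme -- an SList class of my own invention, standing in for the s-list data definition -- in which the procedure has one branch per case in the data definition, and those same cases tell us which input examples we need:

```java
// A list is either the empty list (null here) or a pair of a first
// element and the rest of the list. The data definition drives both
// the shape of the code and the choice of examples.
public class SList {
    final Object first;
    final SList rest;

    SList(Object first, SList rest) {
        this.first = first;
        this.rest = rest;
    }

    static int length(SList list) {
        if (list == null)                 // case 1: the empty list
            return 0;
        return 1 + length(list.rest);     // case 2: a pair
    }

    public static void main(String[] args) {
        // One example per case in the data definition:
        System.out.println(length(null));                                 // 0
        System.out.println(length(new SList("a", new SList("b", null)))); // 2
    }
}
```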
I'm probably reinventing a lot of wheels that the crew behind How to Design Programs smoothed out long ago. But I feel like I'm learning something useful along the way.
On the way out of class today, I ran into the colleague who teaches in the room after me. I apologized for being slow to get out of the room and told him that I had talked more than usual today. From the looks on the faces of my students, I gathered that they needed a bit more. What they really needed was more time with the same material. Most of all, they needed me to slow down -- rather than cover more material, they needed a chance to think more about what they had just learned. My way of doing that was to keep talking about the current example.
I told my colleague that there is probably a pedagogical pattern called Shut Up. And if not, then maybe there should be.
He said that the real pattern is Ask a Question.
I bowed down to him.
We talked a bit more, about how we both desire to use the Ask a Question pattern more often. We don't, out of habit and out of convenience. Professors lecture. It's what we do. The easiest thing to do is almost always: just keep talking, saying what I had planned to say.
I give myself some credit for how I ended class today. At the very least, I realized that I should not introduce new material. I was able to Let the Plan Go.
Better than sticking to a plan that is off track for my students is to keep talking, but about the same stuff, only in a different way. This can sometimes be good. It gives me a chance to show students another side of the same idea, so that they might understand the idea better by seeing it from different perspectives.
Is Shut Up better than that? Sometimes. There are times when students just need... time -- time for the idea to sink in, time to process.
Is Ask a Question better still? Yes, in most cases. Even if I show students an idea, rather than telling them something, they remain largely passive in the process. Asking a question engages them in the idea. More and different parts of their brain can go to work. Most everything we know about how people learn says that this is A Good Thing.
Now, I do give myself a little credit here, too. I know about the Active Student pattern  and have changed my habits slowly over time. I try to toss in a question for students every now and then, if only to shut myself up for a while. But my holding pattern today probably didn't use enough questions. I was under time pressure (class is almost over!) and didn't have the presence of mind to turn the last few minutes into an exercise. I hope to do better next time.
You can read the Let the Plan Go pattern in Seminars, an ambitious pattern language by Astrid Fricke and Markus Völter.
I'm a big tennis fan. I like to play and would love to play more, though I've never played well. But I also like to watch tennis -- it is a game of athleticism and strategy. The players are often colorful, yet many of the greatest have been quiet, classy, and respectful of the game. I confess a penchant for the colorful players; Jimmy Connors is my favorite player of all time, and in the 1990s my favorite was Andre Agassi.
Agassi's chief rival throughout his career was one of the game's all-time greats, Pete Sampras. Sampras won a record fourteen Grand Slam titles (a record under assault by the remarkable Roger Federer) and finished six consecutive years as the top-ranked player in the world (a record that no one is likely to break any time soon). He was also one of the quiet, respectful players, much more like me than the loud Agassi, who early in his career seemed to thrive on challenging authority and crossing boundaries just for the attention.
Sampras recently published a tennis memoir, A Champion's Mind, which I gladly read -- a rare treat these days, reading a book purely for pleasure. But even while reading for pleasure I could not help noticing parallels to my professional interest in software development and teaching. I saw in Sampras's experience some lessons that we in CS have also learned. Here are a few.
Teaching and Humility
After Sampras had made his mark as a great player, one of his first coaches liked to be known as one of the coaches who helped make Sampras the player he was. Sampras gave that coach his due, and gave the two men who coached him for most of his pro career a huge amount of credit for honing specific elements of his game and strategy. But without sounding arrogant, he also was clear that no coach "made" him. He had a certain amount of native talent, and he was also born with the kind of personality that drove him to excel. Sampras would likely have been one of the all-time greats even if he had had different coaches in his youth, and even as a pro.
Great performers have what it takes to succeed. It is rare for a teacher to help create greatness in a student. What made Sampras's pro coaches so great themselves is not that they built Sampras but that they were able to identify the one or two things that he needed at that point in his career and helped him work on those parts of his game -- or his mind. Otherwise, they let the drive within him push him forward.
As a teacher, I try to figure out what students need and help them find that. It's tough to do when teaching a class of twenty-five students, because so much of the teaching is done with the group and so cannot be tailored to the individual as much as I might like and as much as each might need. But when mentoring students, whether grad students or undergrads, a dose of humility is in order. As I think back to the very best of my past students, I realize that I was most successful when I helped them get past roadblocks or to remove some source of friction in their thinking or their doing. Their energy often energized me, and I fed off of the relationship as much as they did.
The secret of greatness is working hard day in and day out. Sampras grew as a player because he had to in order to achieve his goal of finishing six straight years as #1. And the only way to do that was to add value to his game every day. This seems consistent with agile developers' emphasis on adding value to their programs every day, through small steps and daily builds. Being out there every day also makes it possible to get feedback more frequently and so make the next day's work potentially more valuable. For some reason, Sampras's comments on a commitment to being in the arena day in and day out reminded me of one of Kent Beck's early bits of writing on XP, in which he proclaimed that, at the end of the day, if you hadn't produced some code, you probably had not given your customer any value. I think Sampras felt similarly.
Finally, this paragraph from a man who never changed the model of racket he used throughout his career, even as technology made it possible for lesser players to serve bigger and hit more powerful ground strokes. Here he speaks of the court on which his legend grew beyond ordinary proportion, Centre Court at the All England Club:
I enjoyed the relative "softness" of the court; it was terrific to feel the sod give gently beneath my feet with every step. I felt catlike out there, like I was on a soft play mat where I could do as I pleased without worry, fear, or excessive wear and tear. Centre Court always made me feel connected to my craft, and the sophisticated British crowd enhanced that feeling. It was a pleasure to play before them, and they inspired me to play my best. Wimbledon is a shrine, and it was always a joy to perform there.
Whatever else the agile crowd is about, feeling connected to the craft of making software is at its heart. I like to use tools that give gently beneath my feet, that let me make progress without worry and fear. Even ordinary craftsmen such as I appreciate these feelings.
The latest issue of ACM's on-line pub Ubiquity consists of Chauncey Bell's My Problem with Design, an article that first appeared on his blog a year ago. I almost stopped reading it early on, distracted by other things and not enamored with its wordiness. (I'm one to talk about another writer's wordiness!) I'm glad I read the whole article, because Bell has an inspiring take on design for a world that has redefined the word from its classic sense. He echoes a common theme of the software patterns and software craftsmanship crowd, that in separating design from the other tasks involved in making an artifact we diminish the concept of design, and ultimately we diminish the quality of the artifact thus made.
But I was especially struck by these words:
The distinctive character of the designer shapes each design that affects us, and at the same time the designer is shaped by his/her inventions. Successful designs shape those for whom they are designed. The designs alter people's worlds, how they understand those worlds, and the character and possibilities of inhabiting those worlds. ...
Most of our contemporaries tell a different story about designing, in which designers fashion or craft artifacts (including "information") that others "use." One reason that we talk about it this way, I think, is that it can be frightening to contemplate the actual consequences of our actions. Do we dare speak a story in which, in the process of designing structures in which others live, we are designing them, their possibilities, what they attend to, the choices they will make, and so forth?
(The passage I clipped gives the networked computer as the signature example of our era.)
Successful designs shape those for whom they are designed. In designing structures for people, we design them, their possibilities.
I wonder how often we who make software think this sobering thought. How often do we simply string characters together without considering that our product might -- should?! -- change the lives of its users? My experience with software written by small, independent developers for the Mac leads me to think that at least a few programmers believe they are doing something more than "just" cutting code to make a buck.
I have had similar feelings about tools built for the agile world. Even if Ward and Kent were only scratching their own itches when they built their first unit-testing framework in Smalltalk, something tells me they knew they were doing more than "making a tool"; they were changing how they could write Smalltalk. And I believe that Kent and Erich knew that JUnit would redefine the world of the developers who adopted it.
What about educators? I wonder how often we who "design curriculum" think this sobering thought. Our students should become new people after taking even one of our courses. If they don't, then the course wasn't part of their education; it's just a line on their transcripts. How sad. After four years in a degree program, our students should see and want possibilities that were beyond their ken at the start.
I've been fortunate in my years to come to know many CS educators for whom designing curriculum is more than writing a syllabus and showing up 40 times in a semester. Most educators care much more than that, of course, or they would probably be in industry. (Just showing up out there pays better than just showing up around here, if you can hold the gig.) But even if we care, do we really think all the time about how our courses are creating people, not just degree programs? And even if we think this way in some abstract way, how often do we let it seep down into our daily actions? That's tough. A lot of us are trying.
I know there's nothing new here. Way back, I wrote another entry on the riff that "design, well done, satisfies needs users didn't know they had". Yet it's probably worth reminding ourselves about this every so often, and to keep in mind that what we are doing today, right now, is probably a form of design. Whose world and possibilities are we defining?
This thought fits nicely with another theme among some CS educators these days, context. We should design in context: in the context of implementation and the other acts inherent in making something, yes, but also in the context of our ultimate community of users. Educators such as Owen Astrachan are trying to help us think about our computing in the context of problems that matter to people outside of the CS building. Others, such as Mark Guzdial, have been preaching computing in context for a while now. I write occasionally on this topic here. If we think about the context of our students, as we will if we think of design as shaping people, then putting our courses and curricula into context becomes the natural next step.
Another entry generated from a thread on a mailing list...
A recent thread on the SIGCSE list began as a discussion of how programming language constructs are abstractions of the underlying hardware, and what that means for how students understand the code they write. For example, this snippet of Java:
int x = 1; while (x > 0) x++;
does not result in an infinite loop, because Java ints are not integers.
This is one of many examples that remind us how important it is to study computer organization and architecture, and more generally to learn that abstractions are never 100% faithful to the details they hide. If they were, they would not be abstractions! A few good abstractions make all the difference in how we work, but -- much like metaphor -- we have to pay attention to what happens at their edges.
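The edge of this particular abstraction is easy to observe directly: Java int arithmetic wraps around on overflow, which is exactly why the loop above halts. A small sketch:

```java
public class IntOverflow {
    public static void main(String[] args) {
        // Java ints are 32-bit two's-complement values, not mathematical integers.
        System.out.println(Integer.MAX_VALUE);     // 2147483647
        System.out.println(Integer.MAX_VALUE + 1); // -2147483648: overflow wraps silently

        // The loop from the snippet above: it halts the moment x wraps negative.
        int x = 1;
        while (x > 0) x++;
        System.out.println(x);                     // -2147483648
    }
}
```

A student who thinks of int as "integer" predicts an infinite loop; a student who knows the abstraction's edges predicts about two billion iterations and a negative result.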
Eventually, the thread devolved toward a standard old discussion on this list, "What is Computer Science?" I conjecture that every mailing list, news group, and bulletin board has a topic that is its "fixed point", the topic toward which every conversation ultimately leads if left to proceed long enough, unfettered by an external force. Just about every Usenet newsgroup in which I participated during the late 1980s and early 1990s had one, and the SIGCSE list does, too. It is, "What is Computer Science?"
This question matters deeply to many people, who believe that graduates of CS programs have a particular role to play in the world. Some think that the primary job of undergraduate CS programs is to produce software engineers. If CS is really engineering (or at least should be thought of that way for practical reasons), then the courses we teach and the curricula we design should have specific outcomes, teach specific content, and imbue in students the mindset and methodology of an engineer. If CS is some sort of liberal art, then our courses and curricula will look quite different.
Much of this new thread was unremarkable if only because it all sounded so familiar to me. One group of people argued that CS is engineering, and another argued that it was more than engineering, perhaps even a science. I must have been in an ornery mood, because one poster's assertion provoked me to jump into the fray with a few remarks. He claimed that CS was not a science, because it is not a "natural science", and that it is not a natural science because the object of its study is not a natural phenomenon:
I don't believe that I have ever seen a general purpose, stored-program computing device that occurs in nature... unless we want to claim that humans are examples of such devices.
This seems like such a misguided view of computer science, but many people hold it. I'm not surprised that non-computer scientists believe this, but I am still surprised to learn that someone in our discipline does, too. Different people have different backgrounds and experiences, and I guess those differences can lead people to widely diverging viewpoints.
Computer science does not study the digital computer. Dijkstra told us so a long time ago, and if we didn't believe him then, we should now, with the advent of ideas such as quantum computing and biological computing.
Computer science is about processes that transform information. I see many naturally-occurring processes in the world. It appears now that life is the result of an information process, implemented in the form of DNA. Chemical processes involve information as well as matter. And some physicists now believe that the universe as we experience it is a projection of two-dimensional information embodied in the interaction of matter and energy.
When we speak of these disciplines, we are saying more than that computer scientists use their tool -- a general-purpose computation machine -- to help biologists, chemists, and physicists do science in their areas. We are talking about a more general view of processes and information, how they behave in theory and under resource constraints. Certainly, computer scientists use their tools to help practitioners of other disciplines do their jobs differently. But perhaps more important, computer scientists seek to unify our understanding of processes and information across the many disciplines in which they occur, in a way that sheds light on how information processing works in each discipline. We are still at the advent of the cycle feeding back what we learn from computing into the other disciplines, but many believe that this is where the greatest value of computer science ultimately lies. This means that computer science is wonderful not only because we help others by giving them tools but also because we are studying something important in its own right.
If we broaden our definition of "naturally occurring" to include social phenomena in large, complex systems that were not designed by anyone in particular, then the social sciences give rise to a whole new class of information processes. Economic markets, political systems, and influence networks all manifest processes that manipulate and communicate information. How do these processes work? Are they bound by the same laws as physical information processing? These are insanely interesting questions, whose answers will help us to understand the world we live in so much better than we do now. Again, study of these processes from the perspective of computer science is only just beginning, but we have to start somewhere. Fortunately, some scientists are taking the first steps.
I believe everything I've said here today, but that doesn't mean that I believe that CS is only science. Much of what we do in CS is engineering: of hardware systems, of software systems, of larger systems in which the manipulation of information is but one component. Much of what we do is mathematics: finding patterns, constructing abstractions, and following the implications of our constructions within a formal system. That doesn't mean computer science is not also science. Some people think we use the scientific method only as a tool to study engineered artifacts, but I think that they are missing the big picture of what CS is.
The fact that people within our discipline still grapple with this sense of uncertainty about its fundamental nature does not disconcert me. We are a young discipline and unlike any of the disciplines that came before (which are themselves human constructs, attempts to classify knowledge of the world). We do not need to hide from this unique character, but should embrace it. As Peter Denning has written over the years: Is computer science science? Engineering? Mathematics? The answer need not be one of the above. From different perspectives, it can be all three.
Of course, we are left with the question of what it is like for a discipline to comprise all three. Denning's Rebooting Computing summit will bring together people who have been thinking about this conundrum in an effort to make progress, or chart a course. On the CS education front, we need to think deeply about the implications of CS's split personality for the design of our curricula. Owen Astrachan is working on innovating the image of CS in the university by turning our view outward again to the role of computer science in understanding a world bigger than the insides of our computers or compilers. Both of these projects are funded by the NSF, which seems to appreciate the possibilities.
I can't think about the relationship between computer science and natural science without thinking of Herb Simon's seminal The Sciences of the Artificial. I don't know whether reading it would change enough minds, but it deeply affected how I think about complex systems, intentionality, and science.
On the PLT Scheme mailing list, someone asked why the authors of How to Design Programs do not provide unit tests for their exercises. The questioner could understand not giving solutions, but why not give the examples that the students could use to guide their thinking? A list member who is not an HtDP co-author speculated that if the authors provided unit tests then students would not bother to implement their own.
Co-author Matthias Felleisen responded "Yes" and added this stronger assertion:
I have come to believe that being able to make up your own examples (inputs, outputs) is the critical step in solving most problems.
Writing examples is one of the essential elements of the "design recipe" approach on which Felleisen et al. base How to Design Programs. The idea itself isn't new, as I'm sure the book's authors will tell you. Some CS teachers have been requiring students to write test cases or test plans for many years, and the practice is similar to what some engineers learn to do from the start of their education. Heck, test-driven design has gone from being the latest rage in agile development to an accepted (if too infrequently practiced) part of creating software.
What HtDP and TDD do is to remind us all of the importance of the practice and to make it an essential step in the student's or developer's programming process.
What struck me about Matthias's response is his claim that making up examples is the critical step in writing code. The claim is certainly reasonable, for many reasons.
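For what it's worth, here is a sketch of what making up examples before writing code can look like -- using countVowels, a toy procedure of my own invention (and Java rather than the Scheme of HtDP). The three examples were chosen before the body was written, one for the normal case, one for the boundary, and one to settle a point the one-line spec leaves open:

```java
public class VowelExamples {
    // The examples in main were written first; this body came after.
    static int countVowels(String s) {
        int n = 0;
        for (char c : s.toCharArray())
            if ("aeiouAEIOU".indexOf(c) >= 0)
                n++;
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countVowels("banana")); // 3 -- the normal case
        System.out.println(countVowels(""));       // 0 -- the boundary case
        // A spec-clarifying example: do uppercase vowels count? Writing the
        // example forces us to decide before writing any code.
        System.out.println(countVowels("Aeon"));   // 3
    }
}
```

Notice that the third example does the clarifying work the textual spec could not: once it is on the page, the decision about case sensitivity has been made.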
I usually give my students several examples as a part of specifying problems and ask them to write a few of their own. Most don't do much on their own and, uncharacteristically, I don't hold them accountable often enough. My next programming assignment may look different from the previous ones; I have an idea of how to sneak this little bit of design recipe thinking into the process.
What about not running?
Well, I did write about my lost summer. That's the bad kind of not running: not running because you can't. I've been doing too much of that this year. It's not ironic or humorous in Bayard's sense; it's just sad.
Believe it or not, I have encountered Bayardesque not running before. Soon after I went public with my intent to run my first marathon, a mathematician friend publicly announced that he had embarked on the arduous task of not running a marathon before he turned 40. I am happy to say that he succeeded to the fullest extent of his dream. However, this past weekend, I saw him laboring across a local bridge in what must be called at least a trot. He is certainly now not not running, and perhaps is even entertaining the idea of not not running a marathon. There is no cachet in that.
In all seriousness, there are several good kinds of not running. Sometimes we cross-train, that is, do another form of exercise instead of running. This allows us to develop strength or stamina, or maintain our habit of exercising, without stressing our running muscles. At other times we go all the way and rest. Sometimes our body needs time off to recuperate and rebuild damaged muscle. Other times our mind needs rest, time away from the stress of meeting a goal or pushing the body to its limit. (Cross-training is a form of rest, too, but only for the part of the mind and body that runs.)
Finally, we may choose not to run for a very good reason. I don't have a pithy name for this; I think of it as not running toward a goal. In the last weeks before a marathon, we taper, that is, we cut our mileage way back. It's rest with a specific intent: to prepare the body for the race. In the final week, we may not run at all. When I have a bad hamstring, I choose not to run in order that I might heal. This is different than rest, because my mind and body may very much want to run. But I tell it to wait, so that more running doesn't make the injury worse.
I have learned that there is something in between bad not running and good not running. A few months ago I was able to say that I run again, but that was true only in the simplest sense: After not running for a few weeks, I pulled on my shoes and shuffled along the trails and roads around town. But I am still not running in the fullest sense: regular miles, regular speed, regular workouts.
Only in the last week have I "run" run -- on Sunday, a 10-miler in which I pushed hard for two miles in each half of an out-and-back course, and last night, an 8-miler over which I maintained the pace of those four miles for the full 67+ minutes on the university's indoor track. (Cute girls and young male speed demons motivate even old fogies like me.) Now I can only hope that my body doesn't balk in the next few days with a full set of symptoms. I'm happy for now to be in between.
When I was in grad school, my advisor sent me to a series of swank conferences on expert systems in business, finance, and accounting. Among the things these conferences did for me was to give me the chance to stay at Ritz-Carlton hotels. This Midwestern boy had never been treated so well.
At the 1990 conference, I heard a talk by Gary Ribar from KPMG Peat Marwick, one of the Big Six accounting firms of the time. Ribar described LoanProbe, a program that evaluated the collectibility of commercial loans. LoanProbe was a rule-based system organized in a peculiar way, with its 9000 rules separated into thirty-three "knowledge bases". It was a significant application that interacted with sixty external programs and two large databases. Peat Marwick took LoanProbe a step further and used its organizational technique to build a knowledge acquisition program that enabled non-programmers to create systems with a similar structure.
I was so excited. I recognized this technique as what we in our lab called structured matching, a remarkably common and versatile pattern in knowledge-based systems. LoanProbe looked to me like the largest documented application of structured matching, which I was working on as a part of my research. Naturally, I wanted to make a connection to this work and share experiences with the speaker.
After the talk, I waited in line to speak with him. When my turn came, I gushed that their generic architecture was very cool and that "we do something just like that in our lab!". I expected camaraderie, but all I received back was an icy stare, a curt response, and an end to the conversation.
I didn't understand. Call me naive. For a while, I wondered if Mr. Ribar was simply an unfriendly guy. Then I realized that he probably took my comment not as a compliment -- We do that, too! -- but as a claim that the work he described was less valuable because it was not novel. I realized that I had violated one of the basic courtesies of research by telling him that his work was known already.
These days, I think fondly of Ribar, that talk, and that conference. He was behaving perfectly reasonably, given the culture in which he worked. Novelty is prized.
A few years after that conference, I came across PLoP and the software patterns community. This group of people valued discovering, documenting, and sharing common solutions, the patterns that make our software and our programming lives better. Structured Matcher did that, and it appeared in programs from all sorts of domains.
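For readers who have never seen the pattern, here is a minimal sketch of the idea (the names, the rating scale, and the combination rule are my own inventions for illustration, not LoanProbe's): a matcher assigns a symbolic rating to a hypothesis by combining the ratings of its sub-matchers, and matchers nest to form a hierarchy.

```java
import java.util.List;

public class StructuredMatcherDemo {
    // Ratings on a coarse symbolic scale, as is common in
    // knowledge-based systems of this style.
    enum Rating { POOR, FAIR, GOOD }

    interface Matcher { Rating match(); }

    // A leaf matcher wraps a single piece of evidence.
    record Leaf(Rating r) implements Matcher {
        public Rating match() { return r; }
    }

    // A composite matcher combines its children's ratings.
    // Here the overall rating is simply the worst child rating;
    // real systems typically use richer combination tables.
    record Composite(List<Matcher> children) implements Matcher {
        public Rating match() {
            Rating worst = Rating.GOOD;
            for (Matcher m : children) {
                Rating r = m.match();
                if (r.ordinal() < worst.ordinal()) worst = r;
            }
            return worst;
        }
    }

    public static void main(String[] args) {
        Matcher collateral = new Composite(List.of(
            new Leaf(Rating.GOOD), new Leaf(Rating.FAIR)));
        Matcher loan = new Composite(List.of(
            collateral, new Leaf(Rating.GOOD)));
        System.out.println(loan.match());  // FAIR
    }
}
```

The versatility comes from the uniform structure: the same matcher shape works at every level of the hierarchy, which is what made it possible to build a knowledge acquisition tool on top of it.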
Jeff Patton has described this feature of the patterns community nicely in his article on emerging best practices in user experience:
If you tell someone a great idea, and they say "Yes, we do something like that too!", that's a pattern.
Documenting and sharing old, proven solutions that expert practitioners use may not get you a publication in the best research journal (though it might, if you are persistent and fortunate), but it will make the world better for programmers and software users. That is valuable, too.
My semester has started with a busy bang, complicated beyond usual by a colleague's family emergency, which has me teaching an extra course until he returns. The good news is that my own course is programming languages, so I am getting to think about fun stuff at least a couple of days a week.
Teaching Scheme to a typical mix of eager, indifferent, and skeptical students brought to mind a blog entry I read recently on Fluent Builders in Java. This really is a neat little design pattern for Java or C++ -- a way to make code written in these languages look and feel much better to the reader. But looking at the simple example:
Car car = Car.builder()
             .year(2007)
             .make("Toyota")
             .model("Camry")
             .color("blue")
             .build();
... I can't help but think of the old snark that we are reinventing Smalltalk and Lisp one feature at a time. A language extension here, a design pattern there, and pretty soon you have the language people want to use. Once again, I am turning into an old curmudgeon before my time.
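For the record, the machinery behind such an example is mostly boilerplate. A minimal sketch of what the builder might look like (this Car class is my own, not the original author's):

```java
public class Car {
    private final int year;
    private final String make, model, color;

    // The constructor is private; clients must go through the builder.
    private Car(Builder b) {
        year = b.year; make = b.make; model = b.model; color = b.color;
    }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private int year;
        private String make, model, color;

        // Each setter returns the builder, enabling the fluent chain.
        public Builder year(int y)     { year = y;   return this; }
        public Builder make(String m)  { make = m;   return this; }
        public Builder model(String m) { model = m;  return this; }
        public Builder color(String c) { color = c;  return this; }

        public Car build() { return new Car(this); }
    }

    @Override public String toString() {
        return color + " " + year + " " + make + " " + model;
    }

    public static void main(String[] args) {
        Car car = Car.builder().year(2007).make("Toyota")
                     .model("Camry").color("blue").build();
        System.out.println(car);  // blue 2007 Toyota Camry
    }
}
```

It is exactly this kind of per-class boilerplate that makes a language-level solution so appealing.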
As the author points out in a comment, Ruby gives us a more convenient way to fake named parameters: passing a hash of name/value pairs to the constructor. This is a much cleaner hack for programmers, because we don't have to do anything special; hashes are primitives. From the perspective of teaching Programming Languages this semester, what I like most about the Ruby example is that it implements the named parameters in data, not code. The duality of data and program is one of those Big Ideas that all CS students should grok before they leave us, and now I have a way to talk about the trade-off using Java, Scheme, and an idiomatic construction in Ruby, a language gaining steam in industry.
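The same data-not-code idea can be approximated in Java, if less conveniently, by passing a map of named values (a sketch of my own, not from the article):

```java
import java.util.Map;

public class MapCarDemo {
    record Car(int year, String make, String model, String color) {
        // Constructor-from-data: the parameter names live in the map,
        // which is a plain data value, not code.
        static Car fromMap(Map<String, Object> args) {
            return new Car((Integer) args.get("year"),
                           (String)  args.get("make"),
                           (String)  args.get("model"),
                           (String)  args.get("color"));
        }
    }

    public static void main(String[] args) {
        Car car = Car.fromMap(Map.of(
            "year", 2007, "make", "Toyota",
            "model", "Camry", "color", "blue"));
        System.out.println(car.make() + " " + car.year());  // Toyota 2007
    }
}
```

The contrast with the fluent builder is instructive: the builder encodes the names in methods, while the map keeps them in a value the program can inspect, store, or build at run time.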
Of course, we know that Scheme programmers don't need patterns... This topic came up in a recent thread on the PLT Scheme mailing list. Actually, the Scheme guys gave a reasonably balanced answer, in the context of a question that implied an unnecessary insertion of pattern-talk into Scheme programming. How would a Scheme programmer solve the problem that gives rise to fluent builders? Likely, write a macro: extend the language with new syntax that permits named parameters. This is the "pattern as language construct" mentality that extensible syntax allows. (But this leaves other questions unanswered, including: When is it worth the effort to use named parameters in this way? What trade-offs do we face among various ways to implement the macro?)
Finally, thinking ahead to next semester's compilers class, I can't help but think of ways to use this example to illustrate ideas we'll discuss there. A compiler can look for opportunities to optimize the cascaded message send shown above into a single function call. A code generator could produce a fluent builder for any given class. The latter would allow a programmer to use a fluent builder without the tedium of writing boilerplate code, and the former would produce efficient run-time code while allowing the programmer to write code in a clear and convenient way. See a problem; fix it. Sometimes that means creating a new tool.
Sometimes I wonder whether it is worth blogging ideas as simple as these. What's the value? I have a new piece of evidence in favor. Back in May 2007, I wrote several entries about a paper on the psychology of security. It was so salient to me for a while that I ended up suggesting to a colleague that he might use the paper in his capstone course. Sixteen months later, it is the very same colleague's capstone course that I find myself covering temporarily, and it just so happens that this week the students are discussing... Schneier's paper. Re-reading my own blog entries has proven invaluable in reconnecting with the ideas that were fresh back then. (But did I re-read Schneier's paper?)
The Parade magazine insert to my Sunday paper contained an interview with legendary bluesman B.B. King that included this snippet:
There's never a day goes by that I don't miss having graduated and gone to college. If I went now, I would major in computers and minor in music. I have a laptop with me all the time, so it's my tutor and my buddy.
CS and music are, of course, a great combination. Over the years, I've had a number of strong and interesting students whose backgrounds included heavy doses of music, from alt rock to marching band to classical. But B.B. King is from a different universe. Maybe I can get this quote plastered on the walls of all the local haunts for musicians.
I wonder what B.B. calls his laptop?