This story is a wonderful illustration of the idea of associative memory:
I remember my grandad, who worked in the police, telling me how they used to keep details of criminals on filing cards. On the edge of each card there were a number of punched holes, some punched right the way to the edge. Each punched hole corresponded to some fact about the person -- whether they were a burglar or a mugger or things like that. So, if you wanted to pick out all the burglars, you'd take a metal skewer and slide it in through the appropriate hole and lift it up. All the cards whose hole had been punched right to the edge (non-burglars) would stay behind, whilst all the relevant ones would get lifted up.
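The skewer trick maps almost directly onto code. Here is a minimal sketch in Python (the cards and crimes are invented for illustration):

```python
# Edge-notched card retrieval, sketched in code. Each card carries a set
# of attributes; a hole punched right to the edge means the attribute is
# absent. Sliding the skewer through the "burglar" hole lifts every card
# whose hole is still intact -- every burglar -- in one motion.

cards = [
    {"name": "Sykes",  "crimes": {"burglar"}},
    {"name": "Dodger", "crimes": {"pickpocket"}},
    {"name": "Fagin",  "crimes": {"burglar", "fence"}},
]

def skewer(cards, crime):
    # The physical skewer selects all matching cards simultaneously;
    # in software we simulate that parallelism with a filter.
    return [card["name"] for card in cards if crime in card["crimes"]]

print(skewer(cards, "burglar"))  # ['Sykes', 'Fagin']
```

The point of the physical version, of course, is that the selection happens in parallel across all cards at once, which is just what the DNA-computing article exploits.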
I found this story in an article on DNA-based computing at Andrew Birkett's blog. If you've ever wondered about how biocomputing can work, read this article! It's really neat -- yet another example of how Mother Nature can do intractable search using massive replication and parallelism.
I ran across an old 'thought piece' (*) a couple of days ago that reminded me of a discussion that was common at my early PLoPs: how some design patterns 'break the rules' of object-oriented design.
Among the GoF patterns, Bridge and Singleton are often cited as patterns that violate some of the principles of OOD, or at least the heuristics that usually indicate good designs.
On the other hand, some of the GoF patterns don't violate OOD principles. Indeed, they seem to specialize, or at least illustrate, those principles. Strategy, Adapter, and State come to my mind. I remember a comment that Bobby Woolf made to me at PLoP'96 or '97 when he came across such a pattern in a workshop: "That's not a pattern. It's just object-oriented programming."
Of course, if a particular configuration of classes and methods recurs in a particular context, then acknowledging that recurrence has value as a tool for teaching and learning. What is obvious to one reader is not obvious to every reader. (Brian Marick wrote an interesting note on how this applies to comments in code.) Such a pattern also has value as a part of a pattern language that documents something bigger.
Back in those days, Jim Coplien was just beginning to talk about the idea of symmetries, symmetry breaking, and their relationship to what is and isn't a pattern. I wish I could remember some of the pragmatic criteria he used for classifying the GoF patterns in this way, but my memory fails me.
One can see this idea at work with patterns of imperative programming, too. Control structures break the "symmetry" of sequential processing, while some looping patterns illustrate common design rules (e.g., Process All Items) and others violate more general rules (e.g., Loop and a Half).
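To make the contrast concrete, here is a quick sketch of the two loop shapes in Python (my own toy example, not from any particular pattern paper):

```python
# Process All Items: the loop structure mirrors the data -- one pass,
# one exit, no surprises.
def total(values):
    result = 0
    for v in values:
        result += v
    return result

# Loop and a Half: the exit test falls in the middle of the body,
# breaking the symmetry of the standard loop shape. (This sketch
# assumes the sentinel is actually present in the data.)
def total_until_sentinel(values, sentinel=-1):
    result = 0
    it = iter(values)
    while True:
        v = next(it)          # first half: get an item
        if v == sentinel:     # exit point buried mid-body
            break
        result += v           # second half: process it
    return result

print(total([3, 4, 5]))                     # 12
print(total_until_sentinel([3, 4, 5, -1]))  # 12
```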
The most important point made in this thought piece was a pedagogical one, made in the conclusion: that relating patterns to more general design rules and heuristics helps students learn both better. This should be obvious to us, but like many patterns needs to be said explicitly every once in a while. Relating patterns to more general design rules and heuristics creates a richer context for both. It helps students learn to make better design decisions through a deeper understanding of the forces at play.
(*) Joseph Lang et al., "Object-Oriented Programming and Design Patterns", SIGCSE Bulletin 33(4):68-70, December 2001.
Today I am working on an invited tutorial that I will be giving at the Fourth Latin American Conference on the Pattern Languages of Programs, affectionately known as SugarLoafPLoP 2004. The conference is August 10 through 13 in Porto das Dunas, a resort town in the north of Brazil.
I am also serving as program co-chair for the conference. Being a program chair for a PLoP conference can be quite time consuming, but this has not been too bad. My Brazilian co-chair, Paulo Borba, has done a lot of work, especially with submissions written in Portuguese, and the conference chair, Rossana Andrade, has shouldered most of the responsibility for the conference schedule. This has left me to focus on the writers workshop program for authors working in English and on handling basic inquiries from authors and presenters.
My tutorial is on test-driven development (TDD). I plan first to give some background on extreme programming (XP) and TDD's place in it, and then to introduce the practices, supporting tools, and "theory" of TDD through a pair programming demonstration. Lecturing on how to do something always seems like a bad idea when you can do it and talk about it at the same time. One of the things I've been thinking a lot about is how practicing TDD can both help and hinder the evolution of good design, and how other practices such as refactoring work with TDD to even better effect. I hope for my talk to convey the "rhythm" of TDD and how it changes one's mindset about writing programs.
On my way to SugarLoafPLoP, I am giving a talk to the Centro de Informática at the Universidade Federal de Pernambuco in Recife. I think that this talk will also be on TDD, with a focus on interaction-based testing with mock objects. I've been learning about this idea from a recent article by Martin Fowler. I could always still talk on elementary patterns, which is the work closest to my heart, but it seems more convenient to mine the same thread of material that's occupying my mind most deeply right now.
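The core idea of interaction-based testing is easy to show in a few lines. A tiny sketch in Python, using the modern unittest.mock library for illustration (the Order and Mailer roles here are my own invented example, not Fowler's):

```python
from unittest.mock import Mock

# Interaction-based testing: instead of inspecting the final state of
# an object, we verify that the object under test sent the right
# messages to its collaborator. The Mock stands in for a real mailer.
class Order:
    def __init__(self, mailer):
        self.mailer = mailer

    def confirm(self, email):
        # The behavior we care about is the outgoing interaction.
        self.mailer.send(email, "Your order is confirmed.")

mailer = Mock()
Order(mailer).confirm("pat@example.com")

# The assertion is about the conversation, not about stored state.
mailer.send.assert_called_once_with("pat@example.com",
                                    "Your order is confirmed.")
```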
This is my first trip outside of North America, so I'm looking forward to several new experiences! Now, back to those talks...
And I mean that literally.
I haven't been able to run since last Friday. This is my longest break from my running routine in over a year, and I'm really ready to lace up my New Balances and hit the road again.
However, I am recovering from the worst case of poison ivy I've had since I was in grade school. In some ways it is my worst ever. On Saturday, I was unable to move about comfortably. Lying in bed was nearly as uncomfortable.
How did you get that, you ask? Let's just say that if I'm ever on Jeopardy!, and Alex Trebek asks me about my most embarrassing situation, I now have a really good answer. Of course, the answer may disqualify me from appearing on Jeopardy!, so I may not let Alex know.
If I have to take a week off from running, now is as good a time as any. I'm currently training for the Des Moines Marathon, which takes place on October 17. Any later, and a layoff would seriously affect my preparation. As it is, I can use this layoff as a rest for my body before seriously ramping up my miles in August and September. At least, I can rationalize it as such. Mostly, I just want to run.
This will be my second marathon, by the way. Last October, I ran the Chicago Marathon for my first. That was great fun and a great teacher about running the distance. I'm approaching Des Moines with a bit more confidence but also with even more respect for the marathon. It's funny how learning about tough challenges can affect us in those seemingly opposite ways. Hubris and humility often go hand in hand.
I don't usually think of Edsger Dijkstra as an "agile methods" kind of guy, but yesterday as I re-read "The Humble Programmer", his 1972 Turing Award lecture, an interesting connection formed in my mind. (I ran across this paper again at Malcolm Davis's blog.)
As a part of Dijkstra's argument that it is possible for computer scientists to develop large programs at reasonable cost, he considers the problem of correctness. The value of testing as traditionally practiced is limited:
... program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.
So, rather than write code then test it, or even write code then prove it correct,
... the programmer should let correctness proof and program grow hand in hand. ... If one first asks oneself what the structure of a convincing proof would be and, having found this, then constructs a program satisfying this proof's requirements, then these correctness concerns turn out to be a very effective heuristic guidance.
This sounds a lot like test-driven development to me. The tests in TDD are really more about specification than testing, and as such act as assertions, or delimiters of correctness. Each test is a claim about what the code should do. A test guides the development of a part of the code, and the body of tests and the program grow hand in hand.
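A small illustration of the test-as-specification idea, in Python's unittest style (the wrap function and its requirements are my own invented example):

```python
import unittest

# In TDD this test comes first: it is a small, executable
# specification of a wrap() function that does not yet exist.
class WrapSpec(unittest.TestCase):
    def test_short_text_fits_on_one_line(self):
        self.assertEqual(wrap("hello world", 20), ["hello world"])

    def test_long_text_breaks_at_width(self):
        self.assertEqual(wrap("the quick brown fox", 10),
                         ["the quick", "brown fox"])

# Only then do we write just enough code to make the tests pass.
def wrap(text, width):
    """Break text into lines no longer than width characters."""
    lines, line = [], ""
    for word in text.split():
        candidate = (line + " " + word).strip()
        if len(candidate) <= width:
            line = candidate
        else:
            if line:
                lines.append(line)
            line = word
    if line:
        lines.append(line)
    return lines
```

Run the tests with `python -m unittest`. Each test case makes one claim about what the code should do, and the body of tests grows alongside the program.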
Dijkstra would certainly argue for a systematic and sound development of tests that define the system, but I do think he would appreciate TDD's emphasis on writing tests first.
This lecture has so many neat ideas that resonate today, from the value of patterns to language design to the conservatism of the academic computer science community. I will save some of those ideas for other days.
However, I can't resist quoting Dijkstra on the importance of computing. The conclusion of the paper is worthy of a Turing Award lecture and offers a deeply inspirational message. Two quotes stand out for me. First,
[Computers] have had a great impact on our society in their capacity of tools, but in that capacity their influence will be but a ripple on the surface of our culture, compared with the much more profound influence they will have in their capacity of intellectual challenge without precedent in the cultural history of mankind.
And then later,
This challenge, viz. the confrontation with the programming task, is so unique that this novel experience can teach us a lot about ourselves. It should deepen our understanding of the processes of design and creation, it should give us better control over the task of organizing our thoughts. If it did not do so, to my taste we should not deserve the computer at all!
The desire to understand creation and design better is exactly what drew me to computer science in my undergraduate days, what led me to study AI as a graduate student, and what continues to drive me today. Dijkstra's position also reminds me of Alan Kay's forty-year advocacy that we can -- must -- do so much more with computers than we currently do, that the digital computer can fundamentally alter how we create and communicate ideas. We are lucky to be guided by such visionaries.
But, as Dijkstra might say, do we deserve them?
Another article to catch my eye in the latest SIGCSE Bulletin was "Training Strategic Problem Solvers", by de Raadt, Toleman, and Watson. This brief paper discusses the gap between problem and solution that students face when learning to program. Most programming instruction relies on students to learn problem-solving strategies implicitly through repeated practice, much as they would in mathematics.
This approach faces a couple of difficulties. First, as I discuss in an earlier entry, CS students don't generally get enough practice for this approach to be effective. Second, and more troublesome, is that this search for strategies occurs in a large state space and thus is prone to dead ends and failures. Most students will end up constructing the wrong strategies or no strategies at all.
The authors report the results of an informal experiment in which they found that their students, after a semester's worth of programming instruction, had not mastered several simple plans associated with writing a loop to average a sequence of numbers -- a basic CS 1 problem. I've heard many other instructors tell a similar story about the students at their schools. I've seen many proposals for new approaches in CS1 offered to address just this problem.
So, what's the solution?
The authors suggest that explicit instruction about goals and plans will lead to better problem solvers. They harken back to Soloway's ground-breaking work on goals and plans as a source for the material that we should be teaching instead.
I agree wholeheartedly. The elementary patterns community has devoted much of its energy to identifying the plans that novice programmers should learn and the problems that they solve. By making explicit the forces that lead to or away from a particular pattern, we hope that students can better learn to make systematic design decisions in the face of multiple goals.
Notice the subtle shift in vocabulary. The pattern-oriented approach focuses on the thing to be built, rather than the plan for building it. This allows one to recognize that there are often different ways to build the same structure, depending upon the forces of the problem at hand. The plan for building the pattern is also a part of the pattern description. Soloway's plans were, by and large, code structures to be implemented in a program, too. The name "plan" can be thought of in the same way as a pattern when one considers it in the sense of an architectural plan, or blueprint for a structure.
Teaching patterns or plans explicitly isn't easy, nor is it a silver bullet. Mike Clancy discusses some of the key issues in his paper Patterns and Pedagogy. But I think that it offers the best hope for us to get it right. I hope that de Raadt, Toleman, and Watson will continue with their idea -- and then tell us the results of their new teaching approach.
Today I've been working on OOPSLA 2004. If you use or study object-oriented techniques and have never been to OOPSLA, then you should definitely try to make it to Vancouver in October. OOPSLA is an electric conference, and Vancouver is a great conference town.
I'm chair of this year's Educators Symposium, which promises to be a lot of fun. Alan Kay -- who this year has won the Turing Award, the Draper Prize, and the Kyoto Prize -- is giving our keynote address. We have a great line-up of papers and activities, too. Most of my work as chair is done, as the program is mostly set. With the help of my program committee, I have a few panels and activities to finalize yet. The last big task I face is the one I'm doing now: choosing among the many deserving applicants for Educator Scholarships.
OOPSLA is a conference of SIGPLAN. Each year, SIGPLAN generously supports a number of scholarships to OOPSLA for educators, so that these folks can learn about the latest in OO technology and use that to improve the teaching of OOP in our colleges and universities. I think that the educators add to the conference, too, by bringing teaching ideas that help trainers and by bringing excitement to the professional population.
In OOPSLA's salad days of the late 1990s, SIGPLAN was able to offer a large scholarship fund. In recent years, OOPSLA has not been as profitable. Fortunately, SIGPLAN continues to support the scholarships, but with an understandably smaller pot of money.
Low supply makes awarding the scholarships even harder than usual. The demand for scholarships hasn't gone down, and I find that nearly all of the applicants are deserving. This problem is tough enough, but it is complicated by a couple of other factors:
The program committee has implemented a couple of ways to make the selection process more objective. But no set of rules can shield me from the selection process, as ultimately I have to sign off on the awards.
As one program committee member told me in e-mail, "That's why they pay you the big bucks." I laughed then, but I don't feel like laughing right now.
Someone stole the stereo out of my car this morning before church -- while it was parked in our own garage. My wife and daughter had gone out earlier in the morning to deliver newspapers, and they left the garage door for our van open because we'd soon be heading out again. We live in Cedar Falls, Iowa, a small university town known for being friendly and crime-free in a state known for being friendly and crime-free. It doesn't feel that way right now.
I've never been a victim of a theft and, as small as this is, it's unsettling. Fortunately, the thief (1) knew what he was doing and didn't really damage the dashboard and (2) only wanted the stereo and so didn't take anything else from the garage or car. And my wife and daughters were not bothered. So I'll count my blessings.
We've always been perhaps a bit too trusting in leaving the garage door up or the side door unlocked, but that will change. I don't think it's an overreaction to start locking doors more conscientiously. But I wish I didn't feel that I had to.
Not having been a crime victim before, I don't think I ever really got it when other crime victims spoke of feeling "violated". I read that phrase in the newspaper almost daily and was unmoved. Sometimes, in my mind, I probably wondered if those folks weren't just a little too sensitive, maybe even whiners. I've learned the lesson now. As I noted above, the crime I've experienced is relatively tame and wholly impersonal. It compares not at all to many other crimes against one's person. I will be more compassionate in my response to others' misfortunes from now on.
On a less emotional note, this theft has reminded me just how dependent we are now on electronics. In removing my stereo, the thief disabled some part of my Taurus's electronic system, resetting a little computer somewhere. As a result, the speedometer doesn't work and I have no way to control the fans or air conditioner. Man, it gets hot in a car fast, even in Iowa! Let's roll the windows down then -- wait, we can't do that either. They have only electronic controls. And then, when I get home, I go to lock the car doors with the automatic switch on my keychain, but it doesn't work either. Sigh.
At least I can lock and unlock the doors manually. I'll be careful to lock them wherever I go now.
This weekend, I re-read Jon Hassler's My Staggerford Journal. Hassler is a novelist of small-town Minnesota life, and My Staggerford Journal is the diary-like story of the writing of his first novel, Staggerford, on a sabbatical from Brainerd Community College. I first read it in the months before my own sabbatical of Fall 2002, in hopes of some energy and inspiration. The results of my sabbatical disappointed me, but this journal did not. I heartily recommend his other novels to readers who like stories of small-town Midwesterners coming to grips with the changes of life.
One paragraph jumped out at me from Hassler's description of what it was like to give birth to the novel he'd wanted -- needed -- to write for twenty years:
I enjoy working on a second draft better than a first. If I had my choice I would write nothing but second (or later) drafts. But to get to that sometimes pedantic, sometimes exhilarating stage of perfection, polishing, filling in holes, rechanneling streams, etc., one has to struggle through the frightening first draft, create the damn thing through to the end, live it night and day and not know where it's going, or if you do know where it's going, then you don't know if you have the skill or stamina to get it there. It won't get there on its own.
Those feelings sound familiar to this programmer.
Hassler's discussions of rewriting brought to mind redesign and refactoring. Of course, Hassler wasn't just refactoring his novel. In the second and third drafts, he made substantive changes to the story being told and to the effect it elicits from his readers. But much of his rewriting sounded like refactoring: changing sentences here and there, even rewriting whole chapters to bring out the real story that the earlier version meant to tell. Hassler certainly writes of the experience as one who was "listening to the code".
The pain of writing the first draft sounds like a rather un-agile way to develop a novel: creating a whole without knowing where he or the story is going, living in a constant state of uncertainty and discomfort. I have known that feeling as a programmer, and I try to teach my students how to avoid it -- indeed, that it is okay to avoid it.
I wonder what it would be like to write a novel using an "agile" method? Can we create art in quite that way? I'm not an artist, but somehow I think painting may work that way more than writing a novel.
Or maybe novelists already move in an agile way, with the first draft being reworked in bits and pieces as they go, and later revisions just continuing with the evolution of the work. Maybe what distinguishes Hassler's first draft from his later drafts is more in his mind than in the work?
Don Gotterbarn writes the "Thinking Professionally" column in the SIGCSE Bulletin. His most recent column (Volume 36, Number 2, June 2004) suggests that agile development methods are unprofessional. Here are some of his assertions:
He includes RUP in those admonitions, but he reserves some of his harsher words for agile methods. He speaks dismissively of "designing a web page while agilely chatting with the customer". Later, he says, "Agile methods' emphasis on a 'people-centric view of software development' is about supporting the individual developer's needs to feel free to be creative and not about a concern for people affected by the system."
These words sound rather disparaging, but I think they reflect a common misconception of agile methods by some in the software engineering community. Such folks often see the agile methods' emphasis on different elements of the development process as a rebuke of their own professional concerns. The result is often an overreaction to the agile methods. (Some of the hype that ushered in XP probably didn't help this situation, either.)
In particular, I think Gotterbarn's article reflects two misconceptions.
First, it assumes that agile methods do not pay adequate attention to gathering requirements. While some in the agile community have written a bit loosely of unit tests almost as a replacement for requirements, that's not what the agile methods actually call for. (Brian Marick makes the distinction quite nicely.) For example, XP's "planning game" prioritizes and selects from product requirements that were developed in consultation with the client. The key point XP makes about requirements is that the client should be responsible for business decisions, and the developers should be responsible for technical decisions.
Second, the article assumes that developers using agile methods are not allowed to use their professional judgment. I don't think that any proponent of the agile methods thinks that or wants practitioners to behave this way. A professional can explore the context of a project with the client. A professional can use his or her domain knowledge when choosing stories to implement and when implementing them. A professional can acknowledge a lack of knowledge about a task. For example, several of Gotterbarn's examples are based in poor user interface design. Designing a user interface without knowledge of the intended user audience is unprofessional regardless of the development method used. Likewise for designing a user interface without knowledge of usability design.
Perhaps both of these misconceptions ultimately come back to the agile methods' emphasis on the separation of responsibilities between the client and the developer. Certainly, agile methods depend in large part on the client specifying an appropriate system, in collaboration with the developer. That won't be different under any other software development method. Gathering requirements is still essential to building good software, and it is still difficult to do well. One of the things I like about the agile methods is their emphasis on communication, on close interaction with the client, precisely because they give the developer an opportunity to be a more active part of the conversation that defines the project.
Agile methods don't tell us to turn our professional judgment off, or to ignore what we know about the domains in which we work. They do encourage us not to substitute our expertise for the client's in relation to the client's domain and needs. They do encourage us not to run too far ahead of the client's needs, not to build a system that is bigger, more complex, more costly, and more error-prone than the client needs. I don't think that any software professional or educator can disagree with these goals.
The first reference to me comes at link 72. So much for pumping up my ego.
Looking a little closer, many references are to the city of Eugene, Oregon, and organizations in its penumbra, such as its airport, Chamber of Commerce, and newspapers. If we narrow our focus down to individuals, I come in 24th. Most of the folks ahead of me are well-known writers, artists, and astronauts. I was surprised at the number of computer scientists named Eugene -- three or four folks ahead of me on the list are computer scientists, including the department chair at the University of Toronto and the best known Eugene in CS, Gene Spafford.
If I ever make contributions as important as Gene Spafford, I can afford to worry about my Google ranking. Until then, I should just get back to work.
The most recent SIGCSE Bulletin (Volume 36, Number 2, June 2004) reached my mailbox yesterday. Several articles caught my eye, and I'll probably write about each of them over the next week or so.
I turned first to David Ginat's article on algorithmic patterns. David and I had a nice chat about elementary patterns during a lunch at SIGCSE 2003, so I knew of his interest in how the ideas of patterns might help us to understand algorithm design better, or at least to help us help our students to understand it better.
What is an algorithmic pattern? Ginat distinguishes between the mathematical features of an algorithm, such as boundaries on a sequence of search ranges, and the computational features of how it processes data. For Ginat, an algorithmic pattern combines mathematical features with a certain computational "theme". For example, binary partition is an algorithmic pattern whose computational theme is "processing by halves" and whose mathematical core is a log n bound on the number of partitions considered.
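A standard binary search shows both halves of that pairing -- the halving theme and the log n bound. A sketch of my own, not from Ginat's paper:

```python
# Binary partition: "processing by halves". Each step discards half of
# the remaining range, so the loop runs at most about log2(n) times --
# the pattern's mathematical core.
def binary_search(items, target):
    lo, hi = 0, len(items)        # search the half-open range [lo, hi)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1          # target lies in the upper half
        else:
            hi = mid              # target lies in the lower half (or here)
    return lo if lo < len(items) and items[lo] == target else -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
```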
The bulk of this paper illustrates the value of thinking in terms of an algorithmic pattern, using Ginat's example of the sliding delta pattern. A sliding delta is an on-the-fly accumulator that enables an algorithm to compute a value of interest by updating the accumulator as it encounters new data values. This counting pattern shows up in many forms, from computing the winner of an election to determining the maximally covered point in a collection of ranges. Ginat defines this pattern as a pairing of computational theme and mathematical core and illustrates its application to several seemingly disparate problems.
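Here is one way a sliding delta might look in code, on the range-coverage problem. The sketch is mine, not Ginat's, and it treats the ranges as half-open:

```python
# Sliding delta on the coverage problem: find the maximum number of
# ranges covering any one point. Rather than testing every point
# against every range (the naive, quadratic approach), we record a
# +1 "delta" where a range opens and a -1 where it closes, then make
# one pass, updating a running coverage count on the fly.
def max_coverage(ranges):
    deltas = []
    for lo, hi in ranges:         # ranges treated as half-open [lo, hi)
        deltas.append((lo, +1))   # coverage rises entering the range
        deltas.append((hi, -1))   # ...and falls on leaving it
    deltas.sort()
    best = current = 0
    for _point, delta in deltas:
        current += delta          # the accumulator slides as we go
        best = max(best, current)
    return best

print(max_coverage([(1, 4), (2, 6), (5, 8)]))  # 2
```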
The assertion that struck closest to home for me is that students who don't know about the sliding delta pattern are almost certainly doomed to implement naive and incredibly inefficient solutions to common problems in computing. In the best possible world, a student may implement a sliding delta, but only after going through laborious effort to discover the idea.
This is one of the driving ideas behind the elementary patterns project. Students learn best when they learn the patterns of a domain, because these are the structures they will want to build into their algorithms, their designs, and their code. I design my courses on design and programming around the elementary patterns students need to know. I'm even designing my fall algorithms course around what Ginat calls algorithmic patterns.
But to me, they are just patterns. Calling them 'algorithmic' patterns, to distinguish them from 'design' patterns and 'coding' patterns, does serve a useful purpose, I suppose. It may help his readers avoid any negative connotations they have associated with the older terms. ("Design patterns are too complex for my students." "Coding patterns are just idioms.")
Recognizing algorithmic patterns as patterns can help us to make them more useful. Ginat's pattern form lacks a couple of elements considered essential by the patterns community. Their absence accounts for my only misgiving about this paper: Just showing students patterns isn't enough. They need to learn when to use the pattern.
For example, how do I know when to use a sliding delta? What features of a problem push me toward a sliding delta? What features push me away from the pattern? When I use a sliding delta, what new problems may result, and what other patterns could help me resolve them?
Context. Forces. Resulting context. Defining these for the sliding delta pattern would make it a lot more useful to its readers. They are essential knowledge for any algorithm designer: how to balance the requirements of the problem and its context with the features of the pattern.
Ginat laments that students often don't apply sliding delta to problems like his Dot Connections and Crocodiles. He suggests that introducing students to the pattern and occasionally categorizing problems solved by the pattern will be enough to help them learn to apply the pattern to new problems. He does hint at more, though -- the instructor should "underline the effective role" of the pattern. Perhaps we can abstract from this information some wisdom for recognizing a pattern's effectiveness beforehand?
I hope that my suggestion for improvement doesn't give the wrong impression. I like Ginat's idea a lot. I think this paper is a contribution to CS education, and I hope he develops this area further. I do hope, though, that he'll document the forces that drive algorithm design. They are the key to design -- and to learning how to design. As a Chinese adage holds, the Master paints not the created thing but the forces that created it.
Over the last couple of months, I've been following the discussion on the Extravagaria wiki. Extravagaria is a workshop organized by Richard Gabriel and Janet Holmes at OOPSLA 2004. It aims to explore how software people can use techniques of the arts "to explore research questions and explain results".
I am a "Suzuki dad" to my daughter as she learns to play the piano. For the last few weeks, I've been picking up "Zen and the Art of Motorcycle Maintenance" during her lesson and reading a section while she works with her teacher. Yesterday, something I read brought Extravagaria to mind.
I was reading Chapter 25, in which Pirsig talks about the synthesis of classical and romantic thought. He argues that adding the romantic as a veneer over the classical almost always results in "stylish" but unsatisfying -- even ugly -- results, both the product itself and the experience of users and designers. Instead, the classical and romantic must be united at a more fundamental level, in his notion of Quality. Pirsig then says:
At present we're snowed under with an irrational expansion of blind data-gathering in the sciences because there's no rational format for any understanding of scientific creativity. At present we are also snowed under with lots of stylishness in the arts -- thin art -- because there's very little assimilation or extension into underlying form. We have artists with no scientific knowledge and scientists with no artistic knowledge and both with no spiritual sense of gravity at all, and the result is not just bad, it is ghastly. The time for real reunification of art and technology is long overdue.
How much artistic knowledge do scientists require in order to avoid producing ghastly results? Can we just put a "stylish" veneer on our work, or must we study art -- do art -- so that the process is a part of us?
I sometimes feel as though I am affecting an artistic stance when the substance of my work is little different.
That isn't to say that I have not benefited from adopting practices from the arts. I learned a lot from Natalie Goldberg's Writing Down the Bones. Since reading it, I have always tried to write a little every day (code and text) as a way to keep my ideas, and my ability to write them down, flowing. I started this blog in part as an external encouragement to write something of value every day, and not just skim the surface of an interesting but inchoate thought. Gabriel has been something of an inspiration in this regard, with his "one poem a day" habit.
I have also certainly benefited from learning to play the piano (well, beginning to learn) as an adult. The acts of learning an obviously artistic skill, talking about it with my teacher, and reading about it have all changed my brain in subtle but useful ways. The change affects how I teach computer science and how I read its literature; I suspect that it has begun to change how I do computer science, too.
How easily can scientists adopt practices from the arts without 'grokking' them in the artistic sense? I suppose that this is one of the points of Extravagaria.
If you are interested in this topic, be sure to check out the Extravagaria wiki.
I am preparing to teach our undergraduate algorithms course this fall for the first time. My last involvement with this side of algorithms was as a graduate assistant at Michigan State. This course is much different from my usual focus on programming (object-oriented and functional, mostly) and programming languages, so preparing for it is fun.
As I've been reading, I've come across Robert Floyd's name many times, and whenever I do I am sure to track down the reference and read yet another paper. I always enjoy them.
It occurs to me that I am a big fan of Robert Floyd. To be accurate and objective and scientific, I suppose that I should say that I am a big fan of Floyd's work, but that's not what it feels like. It feels more personal than that.
My attraction to Floyd dates to my discovery of his Turing Award lecture. At the time I was still flush with the idea of elementary patterns, and Floyd's lecture seemed to advocate patterns as a teaching and learning mechanism--not in so many words, of course. The same lecture also encouraged programmers to rewrite their working programs from scratch once they understood the solution well, so that they could isolate the key concepts of the solution. That sounded like refactoring to me. Floyd's goal wasn't just a better program, though, but also a better programmer.
Lately I've been admiring his papers on sorting networks and random sampling. I've also stumbled across some of his early papers on programming languages, only to discover that, according to Knuth, Floyd developed "the first syntax-directed algorithm of practical importance" and wrote "probably the best paper ever written" on the syntax of programming languages. Simply amazing.
Now, Floyd is not the only superstar whose work I admire in this way. Alan Kay and Ward Cunningham are two others. I read everything they write and try to grow in the ideas they share.
When I was growing up, I had posters of my favorite sports stars hanging in my bedroom -- Pete Rose, Johnny Bench, Walt Frazier, George McGinnis. These days, I find myself wanting to post quotes from my favorite computing stars on my office door. In part, I do this so that students can learn from them. But I think I also do it for me, because I am a fan of these guys and enjoy being a fan.
I hope that there isn't anything wrong with that.
Well, I made it through a successful first week of blogging. I am not certain that I can maintain a post-a-day pace, but I think I can find a comfortable groove. Certainly my entries this week have been more like essays than just blogging, and doing that daily is untenable -- especially after school starts for the fall. But I hope that you find the articles useful. I have found writing them to be useful to me!
Cool link of the day: Yesterday, Clive Thompson blogged on The Shape of a Song, a web app that creates images from MIDI files by finding patterns in the song and representing them with arcs. Not only is the app -- a Java applet -- available on-line, but the artist has a large repertoire of songs already available, extendible by his user community. Enjoy!
A few days ago, I wrote about the difference between practice for piano students and practice for CS students. As I noted then, this has been a popular topic for software bloggers in recent years. Here are some links:
As Brian mentions, the value of practice has been a consistent theme of Richard Gabriel's over the last few years, and Dick has inspired a lot of software people to think more about practice.
One of the things I've been thinking about in this regard is what Chad Fowler called valueless software.
As a piano student, I often practice things that are not ever intended for performance. I play scales. I work through a set of Finger Power books to improve my dexterity and hand position. I play pieces that were written solely to help students learn to read notes and intervals. Indeed, some days much of my practice consists of pieces that are just for practice. The goal is that I'll be a better player of performance pieces (and I use 'performance' loosely here -- I'll never be a concert pianist!)
Chad is more of a musician than I, and he says that this sort of practice is essential. The italics are mine.
Something I learned as a saxophonist is that the less valuable the direct output of that which you practice, the more emphasis you will place on the act of practicing. I can learn more, for instance, from making horrid noises for 30 minutes than I can from learning a Charlie Parker solo and playing it from memory. Why is this? I'm so focused on the output of the Charlie Parker solo--making it sound good, feeling good as a result--that I forget to focus on the process of learning it, and important bits of information are liable to fall through the proverbial cracks.
Computer science students don't usually get this sort of practice: many, many repetitions of low-level skills that strengthen the muscles and mind for the "real" task, but which are not themselves useful as products. Can you imagine asking your CS1 students to write 100 for-loops for tomorrow's class? Our students tend to do fewer repetitions, and for the most part we try to make those few "real", in an effort to motivate. (Well, if you count assignments like the Fahrenheit-to-Celsius conversion program as real tasks.)
I'm not sure there is any reason to seek a direct analog of the music scenario for learning to program. We do have an analog, though not quite so low-level. It is a venerable practice in computing to reimplement classic programs as learning exercises: a calendar manager, Tetris, a web server -- all have been done to death, and these days we all have ready access to source code for these programs. Yet students write their own for many reasons. Sometimes it's to learn a new language or OS API. But the value lies, in part, in the fact that the student understands the ideas in the app so well that he can focus on the learning task!
My good friend Joe Bergin has created an object-oriented programming exercise inspired by the idea of practice for practice's sake, which he likens to the musical etude: write a particular program with a budget of n if-statements or fewer, for some small value of n. Forcing oneself not to use an if statement wherever it feels comfortable forces the programmer to confront how choices can be made at run time, and how polymorphism in the program can do the job. The goal isn't necessarily to create an application to keep and use -- indeed, if n is small enough and the task challenging enough, the resulting program may well be stilted beyond all maintainability. But in writing it the programmer may learn something about polymorphism and when it should be used.
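To make the etude concrete, here is a minimal sketch of the kind of program it encourages. The shapes example is my own invention, not Joe's actual exercise: the obvious version would branch on a type tag with an if statement, but here each object carries its own behavior, so run-time dispatch makes the choice instead.

```python
# The "no if-statements" etude in miniature: compute areas of
# mixed shapes without ever branching on what kind of shape
# we have. Polymorphic dispatch replaces the conditional.

import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

def total_area(shapes):
    # No if statement needed: each shape knows how to compute
    # its own area, so the "choice" happens at the call site.
    return sum(shape.area() for shape in shapes)

print(total_area([Circle(1), Rectangle(2, 3)]))  # → 9.141592653589793
```

A budget of zero if-statements is easy to meet here; the point of the etude is that the same discipline, applied to a task where a conditional feels natural, forces the programmer to discover where polymorphism really earns its keep.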
This reminds me of Kent Beck's "Three Bears" pattern, which I revised as a part of some patterns of built-in failure. (These patterns appeared as part of Patterns for Experiential Learning, which was workshopped at EuroPLoP 2001.)
While out on a run recently, I realized that I was practicing the agile software development principle of getting continuous feedback -- without even trying.
Most mornings, I want to control the pace I run. Maybe I am doing a tempo run, on which I want to average my 10K pace for a few miles. Maybe I'm doing a speed workout and need to run several repetitions of a particular shorter distance at a faster pace. I have to be careful when trying to run fast, because it's easy for me to overdo it. Then I run out of gas and can't finish comfortably, or at all. And it's even easier to run too slowly and not get the full benefit of the workout.
Or maybe I *want* to run slower than usual, as a way to recover from faster workouts or as a way to bump my mileage up. On days like this, I have to be careful not to run too fast, because my body needs the break.
So I need a way to pace myself. I'm not very good at doing that naturally, so I like to use landmarks to monitor my pace.
One place I can do that is on a recreation trail near my home. This trail contains a 6.2-mile loop and has four labeled 1-mile segments. When I first tried to run a steady pace on this route, I found that my miles varied by anywhere between 10 and 20 seconds. These days I do better, but sometimes I can't seem to get into a groove that keeps me steady enough.
I do my weekly speed workouts on the indoor track at my university's wellness center. This track requires me to do 9 laps per mile, and it has signs marking 200m, 400m, 800m, and 1200m splits. Running on this track, I get feedback every 1/9th of a mile, and I can synchronize myself at the longer splits, too. Not too surprisingly, I pace myself much better on the track than on the trail. And more frequent feedback is the reason. When I get off by a second or two for a lap, I can make a small adjustment to get back on pace -- and I can tell whether the adjustment was successful within 1/9th of a mile.
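The arithmetic behind that feedback loop is simple enough to sketch. Here is a minimal example (the 8:00-mile goal pace is my own illustrative number, not from the post): a goal pace translates into a per-lap "split", and each lap's actual time tells you how much to correct on the next one.

```python
# Track-pacing arithmetic: on a 9-laps-per-mile track, a goal
# pace in seconds per mile becomes a target time per lap, and
# each lap's actual time yields a small correction.

def lap_split(mile_pace_seconds, laps_per_mile=9):
    """Target time for one lap, given a goal pace in seconds per mile."""
    return mile_pace_seconds / laps_per_mile

def adjustment(actual_lap_seconds, mile_pace_seconds, laps_per_mile=9):
    """Seconds to make up (positive) or give back (negative) next lap."""
    return actual_lap_seconds - lap_split(mile_pace_seconds, laps_per_mile)

target = lap_split(480)       # 8:00 mile → about 53.3 seconds per lap
delta = adjustment(55, 480)   # a 55-second lap → about 1.7 seconds to make up
```

Nine feedback points per mile means each correction is tiny; four labeled miles on the trail means each correction is large and late. That difference in loop frequency is the whole story.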
Doing my Yasso 800s on the small track has been invaluable in helping me get faster. Even better, they have helped me learn to pace myself naturally. Now when I run mile repeats on the trail, I find that my pace rarely varies more than 10 seconds per mile, and sometimes I can clip off several miles in a row all within 3-7 seconds of each other. Getting continuous feedback as I've learned has helped me to develop better "instincts".
I recently took my speed workouts outside to the university's 1/4-mile track, to enjoy the summer weather more and to lengthen my repeats. Running consistent 1200m repeats on the longer track is tougher, because I don't yet have the instincts for racing fast at a desired pace and because the track gives me feedback less frequently. But I hope that a few weeks of practice will remedy that...
My goal is eventually to be able to find a groove where my pace is steady, comfortable, and right on the mark for a particular marathon time. Continuous feedback plays an important role in training my body and mind to do that.
I think that this story may be a good way to illustrate and motivate the idea of continuous feedback in my Agile Software Development course this fall.
Some of my colleagues become uncomfortable when I teach our students agile software development methods. "That's just trendy hogwash," they want to say. (They mostly don't, because they are too polite.) Of course, folks from the Smalltalk and Lisp communities say just the opposite -- many of the ideas and practices of the agile software movement have their roots in ideas and practices found in those communities back in the 1980s and even earlier.
I was reminded of just how fundamental some of these ideas are when I read one of the early chapters in Gerald Weinberg's The Secrets of Consulting, which I first blogged on yesterday. Weinberg talks about the necessity of recognizing trade-offs and making them explicit in the projects one does. Whenever the client asks for some optimal something -- the minimum cost solution, the shortest possible time, the best possible way -- the wise consultant asks, "What are you willing to sacrifice?"
This sounds an awful lot like what Extreme Programming says about The Four Variables: cost, time, scope, and quality. To maximize or minimize one of these fundamental variables, you have to give up something for at least one of the others, maybe all. Trade-offs of this sort aren't a new idea at all, even in the world of software development punditry. Weinberg wrote his book in 1985.
I especially like one of the examples Weinberg uses to make his case, the Trade-off Chart. Here is such a chart, updated to 2004:
This chart shows the trade-off between speed and distance in the product category "world's fastest runner". If you want a runner to maintain a faster speed, then you will have to give up distance. If you want a runner to cover a greater distance, then you will have to accept a slower speed. This is true of the world's fastest runners -- the upper bounds on expected performance for most -- and so it is generally true for other runners, too.
As a runner, I understand just what Weinberg is saying. And I am nowhere near the upper bound at any distance! Yet, like many people, I sometimes forget about the fundamental trade-offs I face when working on a software project. I sometimes forget when I'm running, too, and pay the price later, in the form of fatigue or injury.
Weinberg goes on to show how a developer or consultant can use such a chart to begin asking the right questions about the requirements for a project, and how these questions can help both consultant and client follow a reasonable path toward a working system.
I wonder what a four-dimensional trade-off chart for XP's four variables would look like for typical kinds of software. Even a two-dimensional chart showing pairwise trade-offs among the variables would be useful in understanding the forces at play when building software. These are the sort of pictures that Kent Beck draws with words in his book Extreme Programming Explained.
So, no, dear colleagues, I'm not teaching only the latest fads in my Agile Software Development course. We are learning the basics.
I've been reading Gerald Weinberg's classic The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully. I heartily recommend that you read just about everything Weinberg's written, whether you are a software person or not. His books are really about being a better person, whatever you do.
In The Secrets of Consulting, Weinberg says that he was often upset with clients for being so reluctant to adopt his advice. But then one of his clients enlightened him about the risk differential between the consultant and the client:
Of course, as psychologists like Kahneman and Tversky have shown us, most people will bear opportunity costs in order to avoid risk, even in the face of potentially big payoffs. In this case, the client needs a pretty good reason to choose to implement the consultant's advice.
I wonder if this dynamic is at play when students are learning new practices, such as object-oriented design or agile methods? In some ways, the student is in the role of Weinberg's client. If the student is comfortable with how he goes about building software, then he may be averse to change. Adopting a new practice may make his life better -- but it may make it worse! And even if he is not all that happy with how he programs now, he may be happy to maintain the status quo in the face of risk.
So the new practice either needs to offer the promise of a *big* payoff or guarantee little risk of failure. And human psychology may get in the way of the student risking a change even in that case.
The instructor is, of course, in the role of Weinberg's consultant, but with a twist: if students don't adopt the new practice, the instructor may not even face the loss of his 'contract'! Instructors don't always lose their courses when students don't learn; sometimes they just muddle along, perhaps trying something new next semester. And as a tenured member of our faculty, I face little chance of losing my job, no matter how poorly my students do.
Where is the risk? Students may see the instructor as having no stake in the risk of failure.
Whether this idea applies directly to the student-teacher relationship or not, I think that it does explain in part the history of elementary patterns in computer science education. A few years ago, several colleagues and I began our work on so-called elementary patterns, those patterns that relative beginners need and use when writing programs. After writing some papers, we gave several workshops at conferences like SIGCSE, aimed at helping other faculty reorganize their courses using elementary patterns and write elementary patterns of their own.
The reaction we received surprised me at the time. Our workshops received high ratings at each conference, and participants almost always came up to tell us how much they enjoyed the workshops and to encourage us to do more. But almost no one began to write elementary patterns, and almost no one redesigned their courses around the idea. In retrospect, I realize that we were asking them to make a pretty big change in how they did their jobs. They were in the same situation as Weinberg's client above. The trade-off between success and failure had to be balanced against the option of doing nothing and being happy with their current courses.
In retrospect, I can say that we didn't do a very good job 'selling' the idea of elementary patterns. My good friend Robert Duvall always prodded us to create more and better support materials to ease adoption of our ideas by others, and a few of us are trying to do just that. But I wonder how successful we will be.
Maybe a more evolutionary approach would work? What would that look like?
I began taking piano lessons a couple of years ago. I never learned to play an instrument as a child, and after several years of being the "Suzuki dad" to my two daughters, I decided I'd like to learn myself. Being a student again, especially in an area that doesn't come easily to me, has helped me to develop greater empathy for my computer science students.
The Suzuki approach itself has been a source of many interesting thoughts about how I might teach programming more effectively. I'll blog on that topic in the future.
One thing about learning piano that seems obviously different from how my students learn programming is the element of practice. I've read elsewhere on the web how others have talked about the role of practice in learning to write programs and build software. The Pragmatic Programmers come to mind. I've lost my reference to their page on this subject; if you know it, please pass it on.
As a piano student, I am expected to practice every day for 20 to 30 minutes. My teacher would like more, but she assures me that a few minutes every day is better than a long session one day followed by a day off. For a skill so dependent upon muscle memory and strength, daily practice is essential. And I see the damage that lack of practice can do: my progress as a player has been halting, with conference trips and busy times at school and home interrupting my practice routine.
Computer science students don't tend to get this sort of practice: daily repetitions of the same skills until they become second nature, and then regular brushing up to keep the skill intact. Many of my programming students plan to do their work for my course on only one or two or three days a week. The other days are scheduled for other courses' work, for the work that pays their tuition bills, and for play. Like my piano teacher, I encourage them to work on a project a little each day, so that they can keep in touch with the material and have more opportunities to come to understand it. Their reasons for not doing so sound an awful lot like my excuses for missing daily piano practice.
One group of students seems the exception to this: all those kids who hack Linux. I lurk on the local Linux users group mailing list, for occasional pointers to interesting new ideas I'd like to learn about. I am amazed by how many hours students put into installing Linux, reinstalling Linux, patching the system, adding drivers for new devices, setting up home networks, upgrading to a new version of the system ... the list goes on. These students get to practice their system administration and usage skills over and over, in a relatively artificial setting -- but one that is so important to them at least in part because they see their own skills grow as a result.
Now, if I could only get these same students to spend so much time playing with design patterns and algorithms and functional programming and all the stuff I'm trying to teach them. Maybe I need to figure out what sort of "killer app" would draw their attention as strongly to my coursework.
Welcome to my blog. I have been enjoying many different blogs for a couple of years, most notably from the agile software and OOP communities. All the while, I've been thinking I should give it a try myself.
The hard drive of every computer I've ever had is littered with little snippets and observations that I would have liked to share with someone, anyone, at the time. As many of you do, I often write to clarify what's happening in my mind, and to learn something new. The idea of a weblog opens new avenues for sharing such thoughts, and for learning both from the writing and from whoever may read them.
In this blog, I'll chronicle ideas from my professional life as an academic, teacher, and software developer. I have a strong interest in how people make things, in particular computer programs. One of the topics I most hope to write about is the elementary patterns of programs and how they enter into the process of learning how to build software.
I'm also a human being (shh... don't tell my students!) and will also share some of my thoughts as I go through life these days as a runner, a student of piano, a father and husband, and a regular guy in a fun but complex world.
I hope that you find this blog as useful as I expect it will be for me.