Yesterday I blogged about a new Rule of Three for the patterns community, taken from Gerald Weinberg's The Secrets of Consulting. Weinberg motivated the rule with a story of how his false pride in being thought smart -- by his students! -- led to ineffective thinking.
The story of false pride reminded me of one of my favorite scenes in the movie Serendipity, and then of one of my favorite classical quotes.
In the movie, Jonathan (John Cusack) throws all sensibility to the wind in an effort to find the woman he fell in love with one afternoon many years ago. His search threatens his upcoming wedding to a beautiful woman and makes everyone think he's nuts. But his best friend, Dean (Jeremy Piven), sees the search and its attendant risk as something more.
Dean is married, but his marriage is in trouble. He and his wife have let their problems go on so long that now both are too proud to be the one to make the first move to fix them. When Jonathan wonders out loud if he has gone nuts and should just go home and marry his lovely fiancee, Dean tells him that his search has been an inspiration to work with his wife to repair their relationship. In support of his admiration for Jonathan, he recites a quote from a college humanities course that they shared: "If you want to improve, be content to be thought foolish and stupid..."
That scene and quote so struck me that, the next day, I had to track down the source. As is usually the case, Google helped me find just what I wanted:
If you want to improve, be content to be thought foolish and stupid with regard to external things. Don't wish to be thought to know anything; and even if you appear to be somebody important to others, distrust yourself. For, it is difficult to both keep your faculty of choice in a state conformable to nature, and at the same time acquire external things. But while you are careful about the one, you must of necessity neglect the other.
This quote is a recurring source of encouragement to me. My natural tendency is to want to guard my reputation by appearing to have everything under control, by not asking questions when I have something more to learn, by not venturing to share my ideas. Before I started this blog, I worried that people would find what I said shallow or uninteresting. But then I decided to draw my inspiration from Serendipity's Jonathan and step forward.
Weinberg's book teaches the same lesson throughout: A consultant will live a better life and help their clients more if only they drop their false pride and admit that they don't know all there is, that they can't answer every question.
And if you like romantic comedies but haven't seen Serendipity yet, then by all means check it out soon.
Back in the beginning, the patterns community created the Rule of Three. This rule stated that every pattern should have three known uses. It was a part of the community's conscious effort to promote the creation of literature on well-tested solutions that had grown up in the practicing software world. Without the rule, leaders of the community worried that academics and students might re-write their latest theories into pattern form and drown out the new literature before it took root. These folks were not anti-academic; indeed many were and are academics. But they knew that academics had many outlets for their work, and they recognized the need to nurture a new community of writers among practicing software developers.
The Rule of Three was a cultural rule, though, and not a natural law. Christopher Alexander, in The Timeless Way of Building, wrote that patterns could be derived from both practice and theory. As the patterns community matured, the rule became less useful as a normative mechanism. In recent years, Richard Gabriel has encouraged pattern writers to look beyond the need for three known uses when creating new pattern languages. What matters most is the role each pattern plays in helping the language create a meaningful whole. Good material is worth writing.
I returned to Gerald Weinberg's The Secrets of Consulting this weekend and ran across a new Rule of Three. I propose that pattern writers consider adopting Weinberg's Rule of Three, which he gives in his chapter on "seeing what's not there":
If you can't think of three things that might go wrong with your plans, then there's something wrong with your thinking.
At writers workshops, it's not uncommon to read a pattern which, according to its author, has no negative consequences. Apply the pattern and -- poof! -- all is well with the world. The real world doesn't usually work this way. If I use a Decorator or Mutual Recursion, then I still have plenty to think about. A pattern resolves some forces, but not others; or perhaps it resolves the primary forces under consideration but creates new ones within the system.
If you are writing a pattern, try to think of three negative consequences. You may not find three, but if you can't find any, then either you aren't thinking hard enough or your pattern isn't a pattern at all; it's a law of the universe.
Authors can likewise use this rule as a reminder to develop their forces more completely. If a pattern addresses few forces, then the reader will rightly wonder if the "problem" is really a problem at all. Or, if all the forces point in one direction, then the problem doesn't seem all that hard to solve. The solution is implicit in the problem.
Weinberg offers this rule as a general check on the breadth and depth of one's thinking, and it's a good one. But I think it also offers pattern writers, new and experienced alike, a much needed reminder that patterns are rarely so overarching that we can't find weaknesses in their armor. And looking for these weaknesses will help authors understand their problems better and write more convincing and more useful pattern languages.
"Okay, Eugene, I'm game. How exactly do I do that?" Advice about the results of thinking is not helpful when it's the actual thinking that's your problem. Weinberg offers some general advice, and I'll share that in an upcoming entry. I'll also offer some advice drawn from my own experience and from experienced writers who've been teaching us at patterns conferences.
Everyone knows that using good names in code is essential. What more is there to say? Apparently, plenty. I've encountered discussions of good and bad names several times in the last few weeks.
Not too surprisingly, the topic of names came up during the first week of my introductory OOP course. I gave my students a program to read, one that is larger than they've read in preceding courses and that uses Java classes they've never used. We talked about how good names can make a program understandable even to readers who don't know the details of how an object works. The method names in a class such as java.util.Hashtable are clear enough that, along with the context in which they are used, students could understand the code that uses them.
Often, issues that involve names are really about something bigger. Consider these:
When a client passes false as the second argument to such a message, the button is disabled. Code like this is almost impossible to follow, because you have to stop and decode the guilty argument just to get even a high-level understanding of what it does.
Avoiding this situation is relatively easy. Create a second method with a name that says what the method means:
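A minimal sketch of the fix in Java (the Button class and its method names here are hypothetical, not from any particular GUI toolkit):

```java
// A hypothetical widget illustrating the boolean-argument problem
// and the intention-revealing fix.
class Button {
    private boolean enabled = true;

    // The problem style: every caller must remember what the
    // boolean flag means at every call site.
    void setEnabled(boolean enabled) {
        this.enabled = enabled;
    }

    // The fix: a second method whose name says what it means,
    // so the call site reads like prose.
    void enable()  { setEnabled(true); }
    void disable() { setEnabled(false); }

    boolean isEnabled() { return enabled; }
}
```

At the call site, button.disable() says exactly what happens, while button.setEnabled(false) forces the reader to stop and decode the flag.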
Not only is Nat spot on with his analysis, but he has come up with a wonderfully evocative name for this anti-pattern, at least for those of us of a certain age: the Wayne's World Method. This name may be too specific in time and culture to have a long life, but I just love it.
I'm not sure exactly what Ward will talk about, and I can't wait to find out. But the title hints that he will follow the thread of names through several of his contributions to our industry. Just think about the role names play in the connective tissue of a pattern language, the fabric of ideas that is a wiki -- and the domain model that comprises an object-oriented program. And perhaps names play an even more central role? I look forward to Vancouver.
Important ideas have a depth that we can explore in thousands of ways, never quite the same. Names are one of them.
I bonked while running this morning. It's been a long while since that has happened, and I won't mind if it doesn't happen again for a long time.
Today called for an 8-1/2 mile speed workout, doing seven 1200m repeats. After my third repeat, I felt myself losing all energy. For a while, I considered alternatives to finishing -- stopping short and running laps this evening, or doing a short workout in the morning. In the end I gutted it out, with longer recoveries and much slower lap times. The rest of the workout was a struggle, but I'm glad I finished. I was even able to finish relatively strong, with big negative splits for each 200m. That felt good.
Why the bonk today? I'm probably feeling the effects of a 60-mile week last week. I had been scheduled for 48 miles then, but I returned late on the previous Sunday from Brazil and ended up doing my long run for that week on Monday. The big week went well, including a much faster than expected 18-miler on Sunday. That extra speed is probably also affecting my runs this week.
The final ingredient can be traced to a storm. A major thunderstorm knocked our power out for a couple of hours at bedtime last night. That killed my alarm clock, which allowed me to sleep in more than an hour longer than usual. That got me to the track this morning at 7:00 AM, rather than 5:30 AM. So I shared the track -- with three or four dozen ROTC recruits, out for weekly PT. No problem there, except that they were only running 2 miles, and some of those guys run a fast two miles. I let myself get caught up in their speed, which led to three fast repeats -- and then the wall.
I think there is a lesson that I can draw for software development in this story, such as the importance of maintaining a sustainable pace, but I am not inclined to draw it out now.
Ryan Dixon pointed out an interesting connection between technology, speed, and language in response to one of my recent posts. Recall that Clive Stephenson had blogged on how typing changes how we write, because it lets us put more material on the page faster than writing by hand. In my recent entry, I talked about how a similar 'speed-up' technology -- agile development -- affects how we write programs and how this perhaps should affect how we teach programming.
In response, Ryan sent me this quote by Paul Graham, from his wonderful On Lisp:
Imagine the kind of conversation you would have with someone so far away that there was a transmission delay of one minute. Now imagine speaking to someone in the next room. You wouldn't just have the same conversation faster, you would have a different kind of conversation. In Lisp, developing software is like speaking face-to-face. You can test code as you're writing it. And instant turnaround has just as dramatic an effect on development as it does on conversation. You don't just write the same program faster; you write a different kind of program.
This is an important insight: you would have a different kind of conversation. Notice how the ideas of testing and continuous feedback play into Graham's comment. And notice that the synergy between the two leads not just to a difference in degree but a difference in kind. Graham obviously thinks that the change is an improvement. I do, too.
I think this notion underlies the benefits of having empowering technology in your hands. It's why writers are usually better off getting lots of material down on paper quickly: the act of making thoughts concrete in words changes the act of writing, and it gives the writer something real to work with. It's why an agile development style can lead to good programs -- better programs!? -- even without big design up front: writing small tests and small functional bits of code changes the act of programming, and it gives programmers something concrete to work with, rather than fuzzy requirements and the design abstractions they build in their heads. Programmers learn from the growing program, and they can feed this learning back into the next code they write.
Graham speaks specifically of Lisp, but I think he'd agree that other languages offer a similar experience. Smalltalk is one. Interactivity plays a big part in the experience, though there's also something about the kind of language one programs in buried in there, too. Some languages facilitate this style of programming more than others. Lisp and Smalltalk, with their "everything is customizable" designs, do just that.
I love how seemingly little ideas can flow together to create something much bigger...
Yesterday I blogged about programming. Today, I advised several newly declared computer science majors. Not a single one of them wants to learn to program. Most want to take our new Network and System Administration major. Others are interested in bioinformatics or other applied areas.
Some of these incoming freshmen know that they have to do a certain amount of programming in order to reach their goals, and they're okay with that. They'll humor us along the way. But others seemed genuinely concerned that we expect them to learn how to program before teaching them how to set up Windows networks and troubleshoot systems -- so much so that they immediately began to explore their alternatives at the university.
I read occasionally about problems with science and mathematics education in the United States, but I wonder if any discipline is quite like computer science. We go through cycles of having an incredibly popular major followed by having a major that few students want to take. But at no time do very many of our students come with the slightest inkling of what computer science is or the role that programming plays in it. We seem to start with a student body that has no idea what they are in for. Some students must feel sideswiped when the truth hits them.
This state of affairs helps to explain why intro CS courses have rather high drop rates compared to similar courses in other departments. The university should consider this when comparing numbers.
What can we do about this? The move toward "breadth-first" curricula a decade ago aimed to address this problem, and I think such an approach has some benefits. (It also has some drawbacks.) But it would perhaps be better if we could address the problem earlier, if we could somehow expose high school students to a more accurate view of computing beyond the applications they see and use in so many contexts. Computing is a fundamental component of the modern world, yet it is still largely a mystery to the public at large.
This fall, I hope to teach a six-week unit on computing concepts at my daughters' school, to sixth graders. Maybe I can get a feeling for what is possible then. But, here in the trough of our enrollment cycle, encounters like the one I had this morning spook me.
Fall semester is in the air. The students are beginning to arrive. As I prepare for my courses, I've been thinking about some issues at the intersection of agile development, programming style, thinking ahead, cognitive styles, and technology.
Many computer science faculty pine for the good old days when students had to plan their programs with greater care. To this way of thinking, when students had to punch programs on cards and submit them for batch processing at the computer center, they had to really care about the quality of their code -- think through their approach ahead of time, write the code out by hand, and carefully desk-check it for correctness and typos. Then came the era of interactive computing at the university -- when I was an undergrad at Ball State, this era began with the wide availability of DEC Vax terminals -- and students no longer had to be careful. Professors love to tell stories from those Vax days of receiving student submissions that were Version 132 of the program. This meant that the student had saved the program 132 times and presumably compiled and run the program that many times as well, or close to it. What could be a worse sign of student attention to planning ahead, to getting their programs right before typing?
I've never held this view in the extreme, but I used to lament the general phenomenon. But my view is now mixed, at best, and moving toward a more agile perspective. Under what conditions would I want my students to save, compile, and run a program 132 times?
In agile development, we encourage taking small steps, getting frequent feedback from the program itself, and letting a program evolve in response to the requirements we implement. In test-driven development, we explicitly call for compiling and running a program even when we expect a failure -- and then adding functionality to the program to make the test pass.
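A tiny sketch of one such test-first step in Java (the Counter class and its test are made-up examples, and plain assertions stand in for a test framework such as JUnit):

```java
// A made-up example of one test-driven step: write the test first,
// watch it fail, then add just enough code to make it pass.
class Counter {
    private int count = 0;

    // increment() and value() were added only after a failing
    // test demanded them.
    void increment() { count++; }

    int value() { return count; }
}

class CounterTest {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        // The assertion we wrote before the code existed.
        if (c.value() != 2)
            throw new AssertionError("expected 2, got " + c.value());
    }
}
```

Each pass through this loop produces a save, a compile, and a run -- which is why a disciplined agile programmer racks up so many of them.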
If my students program this way, then they will necessarily end up with many, many saves and compiles. But that would be a good thing, and at every step along the way they would have a program that deserves partial credit for a correct but incomplete solution.
In order for this approach to be desirable, though, students need to do more than just code, compile, and run. They will need to add individual features to their programs in a thoughtful, disciplined way. They will need to do some testing at each step, to ensure that the new feature works and that all older features still work. They will need to continuously re-work the design of their program -- refactor! -- as the design of the program evolves. And all of these take time. Not the frenzied iterations of a student whose program is due tomorrow morning, but the intentional iterations of a programmer in control.
To me, this is the biggest difficulty in getting students to program in an agile style. Students are so used to procrastinating, to doing triage on their to-do lists in order to get the most urgent project done first. Unfortunately, many also give higher priority to non-school activities all too often. I am always on the look-out for new ways to help students see how important time is in creating good programs, whether by planning carefully ahead or by going through many planned iterations. Please share any ideas you have.
So many iterations aren't a problem in themselves; the problem is a style in which those iterations result from the seemingly random modify-and-compile approach that many students fall into when a program gets tough. Part of our job as teachers is helping students learn discipline in attacking a problem -- more so than teaching them any particular discipline itself.
Why the mention of cognitive styles above? A blog entry by Clive Stephenson brought this topic to the front of my mind a few weeks ago, and it wasn't about programming at all, but about writing more generally:
... Is there any difference between our cognitive styles when we write longhand, versus typing on a keyboard?
Since I type about 70 words per minute, I can type practically as fast as I can compose sentences in my head. So does the much-slower pace of handwriting actually create a different way not just of writing, but of thinking? Does the buffer buildup between my brain and my arm affect things?
What I mean is this: When I'm typing, because I can generate text so fast, I'll toss lots of stuff out on the page -- and then quickly edit or change it. But when I'm writing by hand, because it's so much slower I'll try to compose the sentence in my head before trying to write it. With a keyboard, I sort of offload some of my mental-sorting onto the page, where I can look at the words I've written, meditate on them, and manipulate them. With writing, that manipulation happens before the output. Clearly this would lead to some cognitive difference between the two modes ... but I can't quite figure out what it would be.
Changes in technology have made it easier for us to get ideas out of our heads and onto paper, or into a computer file. That's true of good ideas and bad, well-formed and inchoate. For many writers, this is a *good* thing, because it allows them to get over the fear of writing by getting something down. That's one of the things I like about having my students use more agile techniques to program: Many students in introductory courses are intimidated by the problems they face, are afraid of not being able to get the Right Answer. But if they approach the problem with a series of small steps, perhaps each small step will seem doable. After a few small steps, they will find themselves well on the way to creating a complete program. (This was also one of my early motivations for structuring my courses around patterns -- reducing fear.)
Used inappropriately, the technology is simply a way to do a poor job faster. For people whose cognitive style is more attuned to up-front planning, newer technologies can be a trap that draws them away from the way they work best.
In retrospect, a large number of compiles may be a bad sign, if they were done for the wrong reason. Multiple iterations are not the issue; the process that leads to them is. With a disciplined approach, 132 compiles is the agile way!
On the last day of SugarLoafPLoP 2004, I gave my test-driven development tutorial as the last event on the main program, just before the closing ceremony. I was pretty tired but brought as much energy as I could to it. The audience was tired, too, and it showed on their faces, but most folks were attentive and a couple asked interesting questions.
One person asked about the role of traditional testing skills, such as finding equivalence classes of inputs, in TDD. These skills are still essential to writing a complete set of tests. Brian Marick and his colleagues in "agile testing" have written a lot about how testers work with agile projects. One of the great values of agile software development is that most everyone on the team can develop some level of expertise at writing tests and can put whatever they learn about testing to use.
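As a small illustration of how equivalence classes guide test writing, consider a leap-year predicate (the LeapYear class is a made-up example, not something from the tutorial):

```java
// A made-up example of choosing tests by equivalence class.
// The input space of isLeap partitions into four classes:
//   1. not divisible by 4            -> not a leap year
//   2. divisible by 4, not by 100    -> leap year
//   3. divisible by 100, not by 400  -> not a leap year
//   4. divisible by 400              -> leap year
// One representative test per class exercises every branch of
// the logic without testing every year.
class LeapYear {
    static boolean isLeap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }
}
```

A test-driven programmer who knows this technique writes four tests -- say, for 2003, 2004, 1900, and 2000 -- rather than dozens of redundant ones.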
Someone in industry asked whether TDD increases the quality of code but at the cost of longer development times. I answered that many believe TDD doesn't increase net development time, because this approach includes some testing time and because the increase in code quality means many fewer bugs to fix downstream. I could not point to any controlled experiments that confirm this, such as the ones Laurie Williams has conducted on pair programming. If you know of any such studies, I would love to hear from you. I think this is an area ripe with possibilities.
All in all, folks were skeptical, which is no surprise from an audience with a serious bent toward traditional software engineering practice. TDD and other agile practices are as disorienting to many folks as finding myself in the Sao Paulo airport was to me. Perhaps I helped them to see at least that TDD isn't irresponsible, that it can be a foundation for sound software development.
This day turned into one like last Sunday -- after a half day of conference, Rossana Andrade took me and Paulo Masiero on a short sightseeing and souvenir-shopping trip around Fortaleza. Then she and her husband Richard took me to a cool Brazilian pizza place for dinner, and finally they took me to the airport a few hours before my 11:10 PM flight to Rio de Janeiro, the first leg of my journey home. The day became Saturday with no fanfare, just a long flight with a layover in Recife to exchange passengers and arrival in an empty and quite English-free Rio de Janeiro airport.
I must say thanks to my hosts in Brazil, Paulo and Rossana. They took wonderful care of me, fed me lots of authentic food, told me all about their cities and country, chauffeured me around, and translated everything from pizza menus to billboards for me. Indeed, all the folks at the conference were wonderful hosts and colleagues. I can heartily recommend SugarLoafPLoP to anyone interested in participating in a patterns conference.
The writers workshops at SugarLoafPLoP have gone well so far. Moderating a workshop is hard work: the moderator has to understand each paper deeply, which takes careful study, and then guide the discussion, keeping it focused and asking the right leading questions when it slows. I have a lot to learn yet about being a truly good moderator.
The coolest part of this day wasn't my 100-minute run at sunrise over the grounds of the Aquaville Resort, but our afternoon at Beach Park, a water park near the resort. The conference organizers set aside a three-hour block to decompress from the work we'd been doing by going to the park. This place has a wide variety of water slides -- and the scariest ones I've ever seen! The web site touts the newest, Kalafrio, and it was both fun and scary. But the scariest of all is named "el Insano", and the name fits. It is a 41m tall slide, with a drop that is darn close to vertical. A few of the college guys wanted to try it and, when they found out I was the only "old guy" around who wanted to give it a go, they took me with them. Click the link for a picture.
That first moment is the scariest. Just after you go over the edge, you are airborne for a second, out of contact with the bottom of the slide. Gravity does its job and accelerates you to a remarkable speed. When you hit the bottom, you enter a curve that brings you into a tunnel parallel to the ground, where water hits you with remarkable force. I think that is a deceleration mechanism, because otherwise they'd need an airport runway-length chute at the bottom. It all happened so fast that I hardly had time to be afraid. I remember my heart racing at the initial drop, and then the sensation of falling, and then the water in the tunnel -- but then it was over in what seemed like an instant. I may have screamed, but my heart was so loud in my ears that I couldn't have heard it.
Thanks to Joe Yoder, I have a T-shirt to show all that "Eu sobrevivi ... El Insano". (That's "I survived", in Portuguese.)
And to prove it was no fluke, I did it again later!
I finally gave my talk on writing patterns and pattern languages this morning. It went well enough, I suppose, but I broke many of the suggestions I made in the paper: too few examples, too abstract. Sigh. How can I manage so often to know what to do yet not do it? This talk will be better the next time I give it.
The best question at this session was about trying to write patterns that "live forever". I used Alexander's "Light on Two Sides of Every Room" as an example, and this prompted someone to point out that even the best patterns seem to become stale after a certain period of time. People wouldn't want windows on two sides of their rooms if they lived in a dirty part of Sao Paulo, so Alexander's pattern is already dated; and, if Alexander's patterns suffer this fate, how can we mortals hope to write software patterns that live forever?
My answer was two-fold:
That's my understanding today. If I learn something to make me change my mind tomorrow, I'll post an update. :-)
This is my first trip overseas, and I did not adequately anticipate how it would feel to be in a place where language separates me from the world around me. Not understanding airport announcements and signs left me in a state of constant uncertainty. (I even managed to leave my checked bag at baggage claim in Sao Paulo yesterday, so I lived out of my carry-on for the second of two straight days. That resulted partly from not understanding the language and partly from not understanding how customs works overseas.)
Language can make us lose confidence in other ways, too. Technical jargon can turn a paper or class session into an intimidating experience. Given that I was in Brazil for a conference on how to write more effectively, this fact stood out to me from my experiences moving around the country, even with a native Brazilian often at my side to help me. I hope that I am able to keep this feeling in my mind this semester as I prepare lectures and talks for my students.
I was to give my first talk to open the conference today, on writing patterns and pattern languages, but it was first postponed from 3:00 PM to 6:00 PM and then finally to 8:00 AM Tuesday morning. Paulo Borba, Rossana Andrade, and I spent the morning in Fortaleza on the campus of the Federal University of Ceara (UFC), where Rossana teaches. One of Rossana's students defended his master's thesis, and Paulo was on the thesis committee. When the defense ran later than scheduled and we spent more time than expected over lunch, we ended up arriving at the Aquaville Resort outside of Fortaleza after the time the conference was to begin. So we, as the chairs of the conference, postponed the start by half an hour! Things worked this way all week -- the schedule seemed more a helpful suggestion than a rigid expectation. The Brazilian folks seemed comfortable with this from the start. I adapted to this rhythm pretty quickly myself.
Brazil has a lot of different fruits that we never see up here. Many have juice that is enjoyable only after sweetening, and the tastes of many are less bold than their more famous cousins, but they do add a new twist to the Brazilian diet. I especially like caja juice!
Well, I made it to Brazil. Yesterday was a day that came and went with no end. I ran 18 miles in Bradenton before the sun rose, visited with parents until after lunch, and then went to the airport for an overnight flight that brought me to Recife at lunch time Monday.
The change from English to Portuguese on the plane from Miami to Sao Paulo made the newness of my surroundings obvious. In Sao Paulo, I went through the dreaded American immigration line. The Brazilian government strives to treat each nation's citizens as that nation treats Brazilian citizens and, with the procedures in place here since 9/11, that means long lines, fewer handling stations, photographs, and fingerprints for Americans entering Brazil. I spent over two and a half hours of a three-hour layover in Sao Paulo going through the immigration line. And I use the word "line" with some hesitation. The South Americans and Europeans in the crowd certainly didn't feel limited by any idea of the linear.
My first stop was the Federal University of Pernambuco (UFPE), in Recife. My SugarLoafPLoP co-program chair, Paulo Borba, teaches there, and he asked me to give a talk to his department. I debuted the test-driven development talk that I planned to give at the conference. It went well, despite my not having slept more than an hour in the previous 34 hours and our running so late after lunch that on arrival I walked straight into the classroom and gave my talk. The audience was mostly graduate students, many of whom write software in industry. I'd forgotten what it feels like to be in a room with a bunch of grad students trying to fit whatever talk they hear into the context of their own research. I spent considerable time discussing the relation of TDD and refactoring to aspect-oriented programming, JML, and code coverage tools such as Clover. This dry run led me to make a couple of improvements to the talk before delivering it on Friday to the conference audience.
I was energized by the ferment! But then I was ready to crash.
My trip to Florida went well.
I love to fly out of the Quad Cities Airport. It is small but not too small. The TSA personnel there work quickly and efficiently. AirTran, the discount airline that's the reason I occasionally fly out of the QCs, is fast and efficient, too.
And now I have another reason to like this airport: It offers free wireless! The service is through MediaCom. It, too, was fast and reliable. It was nice to be able to quickly scan e-mail and check a web site for the slides I was polishing.
Ubiquitous wireless is the future, but the future is more and more with us today. The shopping mall in Cedar Falls and a couple of local eateries now provide wireless for their customers. How will this change our world? Checking mail and web surfing are the uses most of us make of it now, but they won't be where the real effect lies.
I find that I'm sometimes more productive in airports and on airplanes than at home. Airports are, in a strange way, less distracting than the other places I work. I don't know anyone around me, the scenery is impersonal and unremarkable, and I'm away from everything but my laptop and my thoughts. All I really need is an extra battery for the laptop, or good enough fortune to find an open and easily accessible outlet in the airport. But even sitting on the concourse floor, under a telephone to be near an outlet, is strangely freeing -- the words flow. Perhaps it's only the change of scenery. I got as much work done yesterday as I had all week in the office.
Tomorrow, I head to Brazil. I'm a bit on edge.
I had planned to blog while traveling to Florida and Brazil, but I didn't have convenient Internet access for most of the trip. So I will post my entries one at a time now that I'm back, with titles that give the day the entry was written. I look forward to getting back to blogging on some content this week.
I am off to Florida for a couple of days on my way to Brazil for SugarLoafPLoP 2004. I give a talk in Recife on Monday and conference talks on Tuesday and Friday. In between I'm leading a writers' workshop and serving as a writing mentor to a new author. I'll be busy! The conference hotel offers Internet access, so I'll try to blog about the goings-on.
My talks aren't finished yet, so I will also be busy on the plane today and then on the flights to Brazil. Good thing my Tampa-Recife flights net out at 23 hours...
Running update: I ran my best 6x1200m speed work-out this morning, bringing every repeat in at 4 seconds under target except the last, which I ran 14s faster than my goal time. So, after four slow runs recovering from my nine-day lay-off, I may be back on track! Let's hope that I can find the time and places I need to run while traveling.
Does test-driven development help us to build software with a higher degree of encapsulation? I feel like it does when I write code. But this is the sort of claim that, when made in front of students, can be exposed as either wrong or more complex than it first appears. That's because it depends on assumptions and skills that not all programmers hold.
How might test-driven development help us to build better-encapsulated software? When we write tests first, we have an opportunity to think about our code in terms solely of its interface because we haven't written the implementation yet! In an object-oriented program, the test involves sending a message to an object in order to see whether the object behaves as expected. The object's class may be partially implemented, but not the behavior we are testing. And we are supposed to be thinking about just the requirement at hand, not anything else.
But how can we go wrong? If we become sloppy, we can fall into the trap of writing a test that causes a change in the object's state and then verifies that the expected change has occurred. This often requires adding a public accessor to the object's interface that is otherwise unnecessary. Even if you don't intend for client programmers to use the method, it's now there. One of the lessons of interface design is that, if it's there, clients will use it.
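Here is a minimal sketch of the trap in Java. The `Account` class and its methods are hypothetical, not from any real project; the point is that the state-based test can verify the deposit only by peeking at the balance, so `getBalance()` gets added to the public interface whether real clients need it or not:

```java
// Hypothetical example. The test below checks a state change, which forces a
// public accessor onto Account's interface.
class Account {
    private int balance = 0;

    void deposit(int amount) {
        balance += amount;
    }

    // Added only so the test can inspect internal state. Once it is public,
    // every client of Account can call it, too.
    int getBalance() {
        return balance;
    }
}

class AccountTest {
    public static void main(String[] args) {
        Account account = new Account();
        account.deposit(100);
        if (account.getBalance() != 100) {
            throw new AssertionError("deposit should add to the balance");
        }
        System.out.println("state-based test passed");
    }
}
```

The test passes, but the price is an accessor that exists only for the test's benefit.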
It's more than just sloppiness that can lead us astray, though. Testing some behaviors is not straightforward because they involve outside resources (say, a web connection) or non-trivial collaborations (say, a network error). Often it's easier to write a state-based test than a behavior-based test. But those kinds of tests usually leave me feeling unfulfilled. That feeling is my test telling me to do better.
The idea of mock objects developed in the XP community as a way to support behavior-driven testing in the face of such difficulties. But even mock objects aren't a guarantee that we will write well-encapsulated code. Martin Fowler wrote a recent article discussing the common confusion of mocks with stubs.
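As a sketch of what the XP community had in mind, here is a hand-rolled mock in Java. All of the names here (`Mailer`, `Notifier`, and so on) are hypothetical. The mock stands in for the awkward outside resource, so the test can verify behavior, that the right message was sent, without adding any accessors to `Notifier`:

```java
// Hypothetical names throughout. Sending real mail is hard to test, so the
// mock records the interaction instead.
interface Mailer {
    void send(String to, String body);
}

class Notifier {
    private final Mailer mailer;

    Notifier(Mailer mailer) {
        this.mailer = mailer;
    }

    void reportFailure(String admin) {
        mailer.send(admin, "build failed");
    }
}

// The mock implements the same interface but only remembers what it was told.
class MockMailer implements Mailer {
    String lastRecipient;

    public void send(String to, String body) {
        lastRecipient = to;
    }
}

class NotifierTest {
    public static void main(String[] args) {
        MockMailer mock = new MockMailer();
        new Notifier(mock).reportFailure("ops@example.com");
        if (!"ops@example.com".equals(mock.lastRecipient)) {
            throw new AssertionError("Notifier should have told the mailer to send");
        }
        System.out.println("behavior-based test passed");
    }
}
```

Note that the test asserts on what `Notifier` *did*, not on any internal state it holds.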
I do think that TDD encourages and supports well-encapsulated code -- but only if the programmer understands the Tell, Don't Ask principle for designing objects. And practices it faithfully. And uses mock objects (or their equivalent in your programming style) for the tough cases. That's a lot of assumptions built into a simple claim. But most knowledge works that way.
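To make the Tell, Don't Ask principle concrete, here is a small hypothetical sketch. In the ask style, the client would pull the temperature out of the monitor and make the decision itself; in the tell style, the decision stays with the object that owns the data, so no accessor has to leak out:

```java
// Hypothetical Monitor/Alarm example. Ask style would look like:
//     if (monitor.currentTemperature() > limit) { alarm.sound(); }
// which requires a public currentTemperature() accessor. Tell style instead:
interface Alarm {
    void sound();
}

class Monitor {
    private final int limit;
    private final Alarm alarm;

    Monitor(int limit, Alarm alarm) {
        this.limit = limit;
        this.alarm = alarm;
    }

    // Clients just tell the monitor what they observed; whether to raise the
    // alarm is the monitor's own business.
    void recordTemperature(int reading) {
        if (reading > limit) {
            alarm.sound();
        }
    }
}

class MonitorDemo {
    public static void main(String[] args) {
        Monitor monitor = new Monitor(100, new Alarm() {
            public void sound() {
                System.out.println("ALARM!");
            }
        });
        monitor.recordTemperature(120);
    }
}
```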
How can you support yourself in those assumptions? Pair programming! Those XP practices really do add up to something more than their parts.
After nine days off and missing one week's worth of mileage (43 in all), I ran again on Sunday. I was scheduled for 16 miles, but after the layoff I decided to try only 12. That was still pretty aggressive; the body loses stamina in nine days. I survived, if a bit more tired than usual, and have now jumped back into my regular training schedule. I'm running slower than in the recent past, but the speed will come back with time. I'm just glad to be able to run again!
I'm busy today working on conference chair duties, but I wanted to share a couple of ideas I ran across while reading yesterday:
Laziness, Agility, and the Web
In December of 2002, I uploaded a screen-captured table .... I couldn't be bothered to convert it into HTML. Eighteen months on, Adrian Furby did just that. This shows there's some "can I have some more"'s law of the lazyweb or something, and that you should optimise for laziness and early public whining instead of planning ahead.
I've experienced this on a local scale, with my students. Often, when I post something to a course web page that leaves a natural blank to be filled, a student will do the job -- especially if it allows them to show that they know something about a programming language or a tool.
There is something agile in this "Can I Have Some More?"'s Law. Instead of waiting to post an idea until it is 100% ready, get something useful out for people to see. The community can often provide useful feedback that improves the idea, and some may even benefit from your incomplete idea now.
One nice thing about the blog culture is that it lowers the barrier to sharing incomplete ideas and getting feedback from a wider set of readers.
Tool-Making and Progress
This article provides a nice reminder of how human progress depends on the creation of better tools. That should make computer scientists both feel good about our place in the world and remember the responsibility we bear. We are first and foremost tool builders, and the rest of the world depends on what we do to do what they do better.
We also build tools for ourselves. One of the things that has always attracted me to certain software communities -- Lisp, Smalltalk, OOP, agile software -- is the liveliness with which they write and share programs to improve the lives of the people in them. This is true of many other software communities, too. Ruby and Perl come to mind. Perhaps this desire to build better tools for ourselves is one of the hallmarks of software people?
Last week, I had a celebratory dinner with a group of folks who've been involved in developing a program that is now ready for prime time. It was nice to recall the trajectory of a project that has been useful to me in several ways.
The dinner was called by the leader of the project, Ed, a mathematics professor who specializes in early elementary education. Several years ago, he and I hooked up over his desire to have a computer program for assessing the performance of students who have been taught basic arithmetic facts in a particular way. His research has identified a set of strategies for doing simple addition, subtraction, multiplication, and division, and a set of strategies for teaching them. What he needed was a way to tell whether and how well students were using the basic facts strategies. He needed the answer to these questions in order to demonstrate that his approach was working. More practically for teachers, answers to these questions could serve as diagnostic information, helping them know which students need more work with which strategies. Such data can make the dream of individualized instruction more of a reality.
I did some initial design work with the math professor and then turned the bulk of the code writing over to Masa Noborikawa, an M.S. student in search of a master's project. Masa's interest lay in the role of design patterns as targets for refactoring during the evolution of a large project, and this project gave him something large enough to grow over many months, refactoring and introducing many of the design patterns he'd learned about "in the small" in his courses. The result for the larger project was a first version of an assessment tool, written in Java.
Our first version had a couple of weaknesses that we needed to address. The first involved portability. Now, Java is characterized as "write once, run anywhere". But when anywhere includes old Macs running various versions of older Mac OS, Java -- especially its graphics library -- doesn't run as cleanly everywhere as we had hoped. The second involved networkability. As written, the program was a single-user, single-machine tool. But teachers needed to be able to amalgamate data across machines and classrooms, and school districts wanted to be able to do the same across different schools. The program needed to be networked with a central server and database.
We went through an uncomfortable hiatus in the project. One of the risks for long-term projects at a school with only a small master's program is the unreliable flow of manpower and skills. When I was a graduate student, we seemed to have an endless supply of new students looking for projects and ready to work. As a researcher at UNI, I often hit dry spells when suitable students are scarce. It's one of the downsides of a program like ours.
Eventually I found the perfect person for the job, Ryan Dixon, an undergraduate with a lot of experience programming on Macs. Not only did Ryan have the right experience for the job, he is a good software designer, user interface designer, and programmer. He took control of the software and produced a Version 2 that addressed the above weaknesses and more. In particular, he created a parallel UI that depended on Java 1.1.8 or less, so that the program would run the same on all platforms, even the abandoned Mac OS.
Since then, Ryan has gone off to graduate school, returning to do some work for us this summer. We also have a local consultant who has added some of the networking capability and other extensions.
Anyway, as the project reaches the point of being marketed as a part of a mathematics curriculum for use in schools, the lead math professor brought us all together, with our families, to celebrate the achievements of the last few years. We all enjoyed a nice meal and the good company. Of course, there is always more to be done...
This is my first experience as an academic working with an internal client who is taking a project "commercial". You get to learn more from working on real projects than you can ever learn just by reading and hearing about others' experiences. I have a long track record working on real projects with real clients, but this is the first on which the resulting program will be used by a mass audience outside of the client's office. In this case, our client isn't even the user -- just someone who has lots of ideas about the project and who works with the real users.
Much of my job on this project is listening to the client figure out what he wants, listening to him talk out loud and asking questions, sometimes rhetorical. When Ed says, "Do you have two minutes?", I know two things:
Version 3 of the program could be awesome, but how we'll get there is yet unknown.
It's hard to get rich writing programs for educational markets, but there is a chance that this could take off. This curriculum shows promise, and the assessment program opens doors to possibilities that are unavailable to most elementary curricula. But even if we never make more than a token royalty check, the project will have been worth the time and energy.
As I prepare my upcoming tutorial on test-driven development for SugarLoafPLoP 2004, I find myself frequently coming back to the idea of refactoring and its synergy with TDD. Fortunately, there will also be a tutorial on refactoring at SugarLoafPLoP, by Joe Yoder. I think that our tutorials will be able to work together, too, to help folks see that, to get full benefit of either, one really ought to practice both!
That isn't to say that one can't do test-driven development alone. My coding practice in interactive languages like Scheme and Smalltalk has always been test-driven, because it's so easy to build a test suite and the desired code in parallel. On the other hand, in a language like Java, it's so easy to get caught up in the details of a big class and lose sight of the tests. This is where I find that having cultivated TDD as a practice has made me a better programmer.
Jason Marshall recently wrote a nice piece on the value of refactoring. He says,
... on my most productive day of coding ever, I had written negative 500 lines of code. On my longest sustained 'productive' cycle, on the first project I mentioned, I averaged negative 200 lines of code for four weeks ...
This reminded me of one of Brian Foote's wonderful aphorisms: "The only thing better than a 1000-line of code weekend is a minus 1000-line of code weekend." Of course, it's sometimes hard to convince anyone still living in a LOC world to appreciate the value in your accomplishment!
Anyone who has had to live with an out-of-control code base understands. Jason's article explains well the value of refactoring even in the absence of any other agile practices: Repetition, especially the mindless sort, ultimately makes the code too big and too hard to understand. That makes it hard for anyone to add to the program.
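As a tiny, hypothetical illustration of how refactoring away repetition shrinks a program: imagine report-printing code where the same format-and-print dance is copied for every section. An Extract Method refactoring puts that knowledge in one place, and every future section costs one line instead of several:

```java
// Hypothetical example. Before refactoring, each section repeats the dance:
//
//     System.out.println("== Sales ==");
//     System.out.println(sales.trim().toUpperCase());
//     System.out.println("== Costs ==");
//     System.out.println(costs.trim().toUpperCase());
//     ... and so on for every section someone bolts on.
//
// After Extract Method, the formatting knowledge lives in one method:
class Report {
    static String section(String title, String body) {
        return "== " + title + " ==\n" + body.trim().toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(section("Sales", "  up 4%  "));
        System.out.println(section("Costs", "  flat  "));
    }
}
```

The diff for such a refactoring is net negative lines, exactly the kind of "productive" day Jason and Brian are celebrating.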