A great line from About Kim:
I suppose the sign of a sweet-spot language is when you naturally fall into using it for writing pseudocode.
How many of the languages you program in hit the sweet spot?
The major training for my second marathon has ended.
Yesterday, I did my last long run in preparation for the Des Moines Marathon, which takes place on October 17. I just finished up my heaviest three weeks of running ever: 163.5 miles, including three runs of 20 miles or more (22, 24, and 20). And I even felt good at the end of yesterday's run, throwing in a 46-minute 10K at the end, which itself ended with 2 miles in 14:40. That may not be very fast, but for me it is. A big improvement over last year.
Now begins the taper, those three weeks before the marathon when a runner decreases mileage, begins to rest the body, and fuels up for the race. I'll do 46 miles this week and 38 next week, but mostly slow and easy. I'll run one more speed workout Friday and then do an 8-miler at marathon goal pace next Wednesday. Otherwise, my only goal is to enjoy my runs, let my body recover from the pounding it's taken the last few weeks, and break in a new pair of shoes for the race. That last week will include just a few short, easy runs, with a couple of miles at marathon goal pace thrown in to preserve muscle memory.
Wish me luck.
Oh, while recording my mileage for last week I noticed that I'd passed a new milestone on yesterday's run: 1400 miles for 2004! I'll go over 1500 on my last jog before Des Moines. That just seemed kinda cool.
My last entry ended with the realization that helping folks, especially students, adopt agile development methods comes down to motivation. That's a much different task than presenting course content objectively and clearly. Often students get it, they just don't get around to doing it.
I don't have any answers to this puzzle yet, but it reminded me of Tall, Dark, and Mysterious's recent blog on universities and job training. TDM is a math professor in Canada, and she confronts the fact that, while professors and universities often prefer to paint academic life as a pure intellectual journey, most students are at the university to earn a job credential. This creates a motivational gap between what students seek and what their courses sometimes want to teach.
Computer science occupies a different part of the spectrum from majors like math or history or literature. Many of the skills and tools that we teach have direct application in industry. Courses in programming, software development, databases, and the like all can introduce "practical" material. But in many ways this makes our situation more difficult. I'm guessing that most history and literature majors aren't there for job skills, at least not in the superficial way that, say, a CS student may want to study networking. But when our networking courses go theoretical, students' interest can begin to wander, because they don't see the real value.
Don't get me wrong -- most of the students I've dealt with have conscientiously continued to work through the theoretical stuff. But I know that, in some sense, they are humoring me.
I agree with TDM that we should be aware of our students' goals and take them into account when we design our courses. Why not let students know that some of the theory we're studying applies to real problems? In my algorithms course last week, I enjoyed being able to point out how we can encounter the Knapsack Problem in maximizing profits in an Internet-based auction. After seeing the naive brute-force solution to this problem, and connecting the problem to a real-life problem, perhaps students will better appreciate the more complex but efficient algorithms we will study later.
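For students meeting the problem for the first time, the naive brute-force solution is easy to sketch. Here is a minimal Python rendering; the "auction lot" values are my own made-up illustration, not from the course:

```python
from itertools import combinations

def knapsack_brute_force(items, capacity):
    """Try every subset of items; keep the most valuable one that fits.

    items: list of (value, weight) pairs; capacity: max total weight.
    Runs in O(2^n) time -- fine for a classroom demo, hopeless at scale.
    """
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w in subset)
            value = sum(v for v, _ in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# Hypothetical auction lots: (profit, shipping weight), capacity 50.
lots = [(60, 10), (100, 20), (120, 30)]
print(knapsack_brute_force(lots, 50))   # -> (220, ((100, 20), (120, 30)))
```

The doubly-exponential blow-up as the item list grows is exactly what motivates the more complex but efficient dynamic-programming algorithms later in the course.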
That said, there are some beautiful ideas in computing that don't necessarily have a direct application in current practice, and I also love to have students encounter their beauty. I always hope that, if I do a good job, at least some of the students will appreciate some of those neat ideas for their own sake and realize that the university can be about more than just earning a job credential.
My message earlier today was just a riff on the quote I began with. I was in an especially "why not?" kind of mood. As I walked to lunch, though, I knew that many of my students, including many of the better ones, would be unfazed by my rhapsody. They have plenty of reasons for resisting the switch to TDD. And those reasons seem quite powerful to them. Let's consider two.
It takes too much time. Students don't always have the luxury of time when designing, implementing, and debugging an assignment. The program is due in a week or two, and so they spend most of their time working just to write a program that works. Evidence such as "TDD takes 15% longer and results in 30% fewer defects" doesn't provide much motivation to do TDD when students don't think they *have* 15% more time. They'll take their chances with working on what really matters, which is the program. Requiring students to submit their tests and then grading them, too, may motivate them, but I'd like to hear from folks who have tried that before deciding that it really works -- or whether students just view it as an extra burden, an 'unfunded mandate' from the instructor.
Old habits die hard, if at all. Even if convinced of the value of TDD, many people find the change in habit to be a difficult obstacle to surmount. Changing habits takes discipline, support, and time. Instructors aren't usually with students enough at the times they program to help with the discipline, so our ability to provide support is compromised. When the pressure is on, or when faced with a challenging task, people tend to fall back on what they know best, what feels comfortable -- even if they aren't confident that the old ways will work! As an instructor, I find it most frustrating to watch students fall back on practices they know will fail, but I realize that this is simply human nature. Without changing the students' environment more radically, effecting certain changes of habit will be a hit-or-miss affair.
Maybe this comes down to the fact that we who teach need to change the way we do things. Give assignments over longer periods, allowing more time for reflection. This sounds good, but ... What about the stuff we can't do because we now don't have time? It's a commonplace that students may be better off learning less content better, with more growing of the mind, but making that change is a difficult obstacle for teachers to surmount.
In the end, I wonder how much effect such a change would have anyway. Would all the students' newly freed time be sucked up by their other courses, their jobs, and their ordinary lives?
Ultimately, this all comes down to motivation, and the best motivation comes from inside the learner. Drawing that desire out is a task for which instructors aren't usually well-prepared. We can learn to do a better job of motivating students, but that takes work on our end. And wouldn't we rather just lecture?
"If you can't write a test for the code you are about to write, then you shouldn't even be thinking about writing the code."
And if you can, why not write it now? Then you will know for certain that you can, and that you aren't just fooling yourself.
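In Python, the whole discipline fits in a few lines. This is my own toy illustration, not anyone's official example: the test goes down first, then just enough code to make it pass.

```python
# Step 1: write the test first, before the code exists.
# At this point, running the test fails -- is_leap_year isn't defined yet.
def test_leap_year():
    assert is_leap_year(2000)       # divisible by 400
    assert not is_leap_year(1900)   # divisible by 100 but not 400
    assert is_leap_year(2004)       # divisible by 4
    assert not is_leap_year(2003)

# Step 2: write just enough code to make the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_leap_year()   # silence means the test passes
```

Writing the test first forces you to decide what the code should do before you commit to how it does it.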
Good programmers sometimes ask me why they should bother. They already write good code. Do such programmers really need the extra discipline of TDD? Perhaps not. But good programmers are people, too, and so are subject to the same tendencies as anyone else. When the pressure is on, or when they are surprised by a new requirement, they are all too prone to tell themselves that they really do understand.
There is another, perhaps better, reason for good programmers to practice TDD: good programmers often work as part of a team. And as a member of a team, they depend on the quality of the code produced by the entire team, not just themselves. A good programmer can benefit from having his teammates practice the discipline. In this way, even weaker members of the team will develop better code, which benefits everyone. The good programmers will want test-first to be a team discipline, so everyone practices it.
If writing tests first is something you can do anyway, the real question becomes: Why not?
A few items about the blog itself...
A few folks have asked why Knowing and Doing doesn't support comments. I've been thinking about adding comments since July. Several blogs I read don't support comments, and a few that have supported comments in the past are disabling them.
I have two reasons. One is comment spam. Many bloggers (e.g., Michael Nielsen) find that keeping their comment sections neat and tidy is difficult in these days of spambots. I don't have time to monitor comments and eliminate the ones that waste space or (worse) offend readers.
The second comes down to another form of laziness. I'm using NanoBlogger to power my blog, and adding comments requires a small but non-trivial amount of work. I tried it once, couldn't get it to work right away, and so dropped the idea for a while. Then school happened, and I've just been too busy to get back to it.
To a certain part of the blogging community, not supporting comments is a huge faux pas. To these folks, comments are an essential part of the blogging experience. I hope that I have not lost many potential readers for this reason, but it's a risk I have to take until I have more time to mess with the software and monitor the comments.
On the other hand, I've always thought the prospect of seeing "Comments(0)" at the bottom of every entry would be rather depressing, so maybe this is a case of "what you don't know can't hurt you". :-) The fact that I receive occasional responses lets me know that someone is reading. (And I even had my first spotted at reference today!)
I recently changed the name of the Elementary Patterns category to just plain Patterns. I realized that all of my patterns posts thus far had been more general, and I didn't want to mislead folks. I expect that Elementary Patterns will return as a category some day soon, when I have more material specific to that issue to write on.
I wish I had them. According to the doc, NanoBlogger supports them, and I have the config flag set to 'on', but they don't seem to be there. One of these days, I'll fix it.
So far, my blog has been almost exclusively about matters of professional interest, with one broad exception: Running. I don't expect that I'll begin blogging in a confessional or stream-of-consciousness mode any time soon, because those sorts of posts can go off course into self-indulgence pretty quickly, and I don't trust myself. I'll keep posting on running because (1) some folks have expressed interest, (2) sometimes those posts interact with professional threads, such as agile software development, and (3) I like it. Hey, even I can indulge myself some of the time.
That said, I must admit that when I blog on running, it almost feels like a day off. There's a lot less pressure on me to get those posts "right".
That's all for now. I am surprised that I've been able to keep up a steady pace blogging after the academic year started. It takes time, and with two new preps plus all of my regular duties, time isn't exactly in surplus. But I enjoy the sort of thinking I have to do when I write for a public audience, and so I've made time.
But pretty soon I have to grade some assignments, or the students will revolt. And I do have some other writing to do... so don't be surprised to see some of that material show up here!
I just read a long rant by a student who is studying Paul Graham's ANSI Common Lisp. He was trying to understand an especially Lispy piece of code and having trouble. At the top of the code file he submitted, he included a comment over a page long talking about code readability, comments, and mental sanity. I enjoyed it very much.
Many folks who have studied "advanced" Scheme or Lisp code know what he is talking about. I use the scare quotes because, while students often characterize the code this way, the code doesn't really have to be all that advanced to create this kind of disorientation. It doesn't have to be Scheme or Lisp, for that matter; I had a similar experience when I studied Haskell programs. It's more about programming style and familiarity than language. (Perl may be an exception. Does anyone really understand that stuff? :-)
Functional languages tend to cause the kind of disorientation that my student felt, because we don't teach or learn functional programming very much or very well at most schools. Almost everyone who comes into contact with functional programming does so from a position of weakness. That's too bad, because functional programming can be powerful and beautiful, and as time passes we see more and more elements from functional languages migrating into popular languages like Java, C#, and Python. I'm glad that the TeachScheme! folks are building tools and developing techniques for teaching this stuff better.
That's really just a prelude to what triggered my writing on this topic, which was the part of the student's rant that dealt with comments. He felt that, if Graham had commented the function in question, understanding it would have been easier. But as I read further into the rant, I found that much of the student's misunderstanding arose from his not understanding graphs and graph search as well as he might.
Graham could certainly have explained more about graphs and breadth-first search, but his function isn't a textbook; it's a Lisp function. He could have added a comment with a pointer to explanatory material, but I suspect that Graham assumed his readers would already know certain CS basics. He is trying to teach a certain audience how to program in Lisp effectively, and perhaps something more general about programming. But teaching data structures and algorithms isn't one of his goals.
Commenting code is a perennial topic of debate among programmers, students, and instructors. How much is too little, enough, too much? Brian Marick wrote a nice little piece that points out something folks seem to forget occasionally:
But code can only ever be self-explanatory with respect to an expected reader.
When writing code, you have to know your expected audience, but you also have to know something about other potential readers. Then you have to make some hard decisions about how to write your code and comments in light of all that.
I don't expect that my students put a lot of comments in the code they write for class. Their expected audience consists primarily of me, with themselves and their fellow students as other potential readers. I don't need them to explain to me how a for loop works, or what an assignment statement does. I much prefer that they choose variable names and organize their code in ways that make their intention clear than that they add gratuitous comments to the code.
On the other hand, I sometimes put comments in my code that reflect the fact that I am teaching students about some idea. If my code example is for CS I, then I may well comment a for loop with explanatory material. For CS II, I may add comments that explain the role played by a class in the Decorator pattern. Even so, I sometimes don't add comments of this sort, because the code is read as a part of lecture notes that explain the ideas behind the code. Maybe I should be more conscientious of the fact that many students will read the code out of the context of the lecture -- or not even read the lecture at all!
As I responded to some of my student's rant, my mind shifted to the old questions: Just when are comments most useful? When should programmers comment their code, and with what sort of comment? Let's assume that the programmer is writing for readers of roughly her own level of skill and understanding.
I surfed on over to a wiki page I first read long ago and have always loved, Method Commenting. Ward Cunningham started that page with some patterns of code and comments to summarize a discussion of the topic. These patterns resonate with me. The basic theme is that people read code, and you should write for them.
Code should reveal the intentions of the programmer, using names and method decomposition that communicate purpose. For example, one of the things my student disliked in the function he was studying was its lack of data abstraction -- the function uses cars all over the place, which are implementation details. The code would read better with Syntax Procedures for the abstract path operations they implement.
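The same idea translates into any language. Here is a hypothetical Python rendering of the contrast (the path representation and the helper names are my own illustration, not Graham's code):

```python
# Without data abstraction: raw indexing leaks the representation.
def path_length_raw(path):
    return len(path[1])          # what is slot 1? the reader must guess

# With "syntax procedures": tiny accessors name the intent.
def path_start(path):
    return path[0]

def path_steps(path):
    return path[1]

def path_length(path):
    return len(path_steps(path))

route = ("a", ["b", "c", "d"])
print(path_length(route))        # -> 3
```

The accessors cost almost nothing to write, and if the representation of a path ever changes, only they have to change.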
That said, programmers have to learn how to think like the machine. Programs aren't novels; they aren't even how-to manuals. A reader can't expect to have normal computational behavior explained in comments. What counts as normal depends on many things, including at least the programming style and language.
Here is a situation that calls for a comment: Sometimes, you have to write code for the machine, for example, when an optimization is required. This code may mislead the reader. So, give the reader a clear warning in order to avert a misunderstanding.
Here's another: Sometimes, you can take advantage of a given specification, say, a limit on the size of a collection, and implement an especially efficient algorithm. But the spec may change, and the code may then become a liability. So, fence off your code by documenting in a comment the constraint and its role on your implementation. (These days, I find that my unit tests serve this purpose better, and they are code!)
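A sketch of what I mean by a test serving as the fence, in Python. Everything here -- the batch limit, the function names -- is a made-up illustration of the pattern, not code from any real system:

```python
# Suppose the spec guarantees at most 16 items per batch, and process_batch
# exploits that limit internally. A test fences off the assumption better
# than a comment would: it fails loudly the day the spec changes.

MAX_BATCH = 16

def process_batch(items):
    if len(items) > MAX_BATCH:
        raise ValueError("spec guarantees at most %d items" % MAX_BATCH)
    return [item * 2 for item in items]   # stand-in for the fast path

def test_batch_limit_is_enforced():
    try:
        process_batch(list(range(MAX_BATCH + 1)))
    except ValueError:
        return   # good: the assumption is still being checked
    raise AssertionError("the spec-size assumption is no longer enforced")

test_batch_limit_is_enforced()
```

Unlike a comment, this test is executed on every run of the suite, so the documented constraint can never silently drift out of date.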
There are some other situations where I use a comment, but they usually involve process. Sometimes, I have to leave a piece of code undone. A comment can help me know what is left to be done. Other times, I have a great idea that I don't have time to explore at the moment, and a comment can help me remember the idea. But I worry about both of these kinds of comments, because they tend to have a short lifespan of utility. If I don't get back to the same code soon, the comment on my great idea may not mean much to me when I get back to it. Or, worse, I don't see the comment until after it's too late. I usually prefer to make a note to myself in some other form for such things.
When else is a comment better than (better) code?
We are coming upon a major anniversary in the worlds of object-oriented programming, patterns, and CS publications... Design Patterns first appeared at OOPSLA 1994. Most CS books, especially ones that appeal to a wide popular audience, have a pretty short shelf life. Occasionally, a new classic comes along that accompanies a change in how we work -- or ushers in the change. Design Patterns is such a book.
It came out at a time when industry was embracing the idea of object-oriented programming in C++, but many programmers just didn't know much about OOP. Where was the flexibility it offered? How could we achieve that flexibility in C++, a powerful but rigid language? Design Patterns showed us.
Design Patterns has taught a few generations of programmers about OOP and the common patterns that occur in flexible, extensible OO designs. It's still going strong these days. Amazon lists it at #788 on its bestsellers list, higher even than more recent classics such as Refactoring and not all that far behind much more recent soon-to-be classics such as Code Complete, Second Edition. Even as the world has moved from C++ to Java, Design Patterns remains essential reading for OO programmers.
To what can we attribute its success and durability? There has been much discussion in the software patterns community about the book's shortcomings (it isn't a pattern language, its pattern format is hard to write in, ...), but we can't ignore the fact that it teaches well material that is essential to programmers. I learn a little something new every time I read it.
This book gave visibility to the then-nascent software patterns community, which has led to many wonderful books and papers that teach us the principles of topics as diverse as configuration management, use case development, assembly language programming for microcontrollers, organizational management, and user interface design.
It also spawned an eponym for the book's authors, the Gang of Four, and a corresponding new TLA, GoF.
OOPSLA 2004 isn't going to let this anniversary slip by without a tribute, and a little fun to boot. The GoF 10th Anniversary Commemorative will feature a recent entry in the GoF-inspired library, Design Dating Patterns, by Solveig Haugland. That's a real book, all right -- check out the book's web site for more. The OOPSLA web site lists this session as a "social event", which makes some sense, given the book's content. But I expect that a few of the techies who show up will do so harboring a secret wish to learn something new that they can use. Heck, I'm married, and I plan to. If these design patterns teach their content as well as Design Patterns, then we will all learn something. And Ms. Haugland should be hailed for her achievement!
Mark Jacobson, a colleague of mine, is a big fan of the movie Ghostbusters. Now, I like that movie, too... I remember the first time I saw it, when my brother and I came home and replayed the whole movie all afternoon, and I've seen it many times since. But Mark is a big fan. He believes that Ghostbusters can help students learn to be better students. You can see his many Ghostbusters-related links on his homepage.
I am a huge Bill Murray fan. Last night, I watched one of my other favorites from his filmography, What About Bob? This movie is pure goofiness, unlike the high art of Ghostbusters, but I enjoy it. I must have been in a goofy mood this weekend, because I began to notice all the things that What About Bob? can teach us about agile software development. Humor me.
First, a little about the movie. Bob Wiley (Murray) is a mess, a multiphobic who can barely even leave his apartment. He's also so obsessive about his therapists that they keep passing him on. At the beginning of the movie, Bob's current therapist leaves his practice in order to get away from him. Bob is referred to Dr. Leo Marvin, a successful and self-absorbed psychiatrist played to perfection by Richard Dreyfuss. Leo is about to go on vacation after the publication of his blockbuster self-help book Baby Steps. Bob manages to wheedle an appointment on that fateful last morning, making his first sparks with Leo and receiving a retail-price copy of Baby Steps.
It took me a long time to realize that Bob is a software developer. He's paralyzed by change, fears interacting with people (clients), and can't make progress toward his goals. He also isn't good at pairing. He was married once, but that ended quickly because, as he tells Leo in their first session together, there are two kinds of people in the world: people who like Neil Diamond, and those who don't. And people of different kinds can't work together.
The first thing he learns from Leo is to take baby steps. Trying to take on all of the world's pressures at once would paralyze anyone. Set a small goal, take the actions that achieve just that goal, and then reassess the situation. (That's test-first development, Do the Simplest Thing, and small releases.)
One thing that Bob already seems to understand is the need for continuous feedback. That's what he seeks from his therapists -- and also what keeps driving them away. He needs too much attention, because he is caught up in the backwards pathology of modern business. He seeks feedback not from the world, which is where he takes his actions, but from his therapist, who represents his manager. He wants someone else to tell him what to do, and how to feel, and how to live. A professional must take responsibility for his own actions and feelings and life. That's one of the things that folks like Jerry Weinberg and the Pragmatic Programmers have emphasized so often to software people for so many years.
We also see that Leo needs feedback, too. When planning for his live interview on Good Morning, America, he asks his family to help him choose what to wear and where to stand, but they are so busy with Bob that they don't pay him enough attention. "I need feedback, people!" he screams in a moment of raw emotion. And he does. The constant presence of the dysfunctional Bob accentuates Leo's own egotistic tendencies and pushes him to cry out for help.
Another lesson Bob learns from Leo comes in the form of a prescription -- not for more medication, with which Bob seems to have far too much experience, but to "Take a Vacation". This is a veiled reference to the agile principle of sustainable pace. In addition to taking steps that are too big, Bob spends every waking moment, and apparently many of his sleeping ones, focused on his problems. Such obsession will burn a person out, and only taking regular breaks can cure the obsession. Bob isn't cured when he decides to take a literal vacation from his problems, but his vacation is another step on the road to recovery. Unfortunately for Leo, Bob decides to vacation in Lake Winnipisaukee along with the Marvin family, which is another step for Leo toward collapse.
I'm still working on the role played by Leo's family in our little morality play. Leo's wife, Fay, and children Anna and Siggy take to Bob quickly, finding his attempts to put a happy face on the world refreshing after all the years of seriousness and isolation from Leo. It is in his interactions with these gentle, loving people that Bob begins to grow out of his sarcophagus, putting the lessons of baby steps and vacations into practice. Perhaps they somehow symbolize clients, though most software developers wouldn't characterize all of their clients as gentle and loving. However, it is in interaction with these folks that Bob learns that he does not have to shoulder all of his burdens alone. (As an aside, Anna's Kathryn Erbe can have a role in my stories any day!)
This leads us to the central question remaining: Who is Dr. Leo Marvin? The agile coach, of course. He teaches Bob to overcome his fears, to accept the world as it is, and to embrace change. Unlike with Bob, I felt an overwhelming urge to identify Leo with a real person in the community. Kent Beck? Uncle Bob Martin? Finally, near the end of the movie, we have our answer. Bob Wiley is ultimately cured by Leo's latest invention, the not-so-tongue-in-cheek Death Therapy. Through a single attempt at Death Therapy, Bob learns to untie the self-made knots that bind him and to take command of his life. He even becomes able to pair again, marrying Leo's sister, Lily.
And so we learn that the model for Leo must be Ron Jeffries, who recently so eloquently described the role that Death Therapy might play in reversing the fortunes of a software industry that so often binds itself up with long-range plans, unnecessary separation of tasks, and fear of change.
Shh. Don't tell Ron any of this, though. Leo goes crazy at the end of What About Bob?, unable to shake Bob's obsessions. But Bob is cured!
That's all the Metaphor I can manage today. Thankyouverymuch.
Oh, and if you are one of my students, don't expect this to show up in one of my classes. As much as I'd love to watch What About Bob? again with you all in class, I don't quite have the personality to carry this sort of thing off live. Then again, you never know...
I am enjoying a weekend at home without a lot of work to do. After having been at PLoP for five days last week, a chance to be with my family and to regroup is welcome. Tomorrow is my longest training run for the Des Moines Marathon -- 24 miles. My 22-miler last Sunday at Allerton Park went well, so I'm hopeful for a good run tomorrow.
Here are some programs that I've been experimenting with lately...
Markdown is a simple plain text formatting syntax *and* a program that translates this syntax into HTML. I like to work in plain text, so I write all of my own HTML by hand. But when I am trying to whip up lecture notes for my courses, I find typing all the HTML formatting stuff a drag on my productivity. Markdown gives me the ability to format my notes in an email-like way that can be posted and read as plaintext, if necessary, and then translated into HTML. It doesn't do all of HTML, but it covers almost everything that I usually do.
Here are a couple of Smalltalks implemented in Java: Talks2 and Tim Budd's Little Smalltalk, SmallWorld. Talks2 builds on Squeak and so has more stuff in it, but SmallWorld is, well, small, and thus has source code that is more accessible to students.
And here are a couple of fun little sites:
A technical person who works at a university recently lamented:
Our administrators want to be involved in technical decisions, but they don't understand the technology. So they buy stuff.
In his view, these managers think that selecting the software everyone uses makes them relevant. It's about power.
I have noticed this tendency in administrators as well, but I think that we can find a more charitable interpretation. These folks really do want to contribute value to the organization, but their lack of deep technical understanding leaves them with only one tool available to them, money. (Ironic that this is so, in these days of deep cuts in academia.) Unfortunately, this often leaves the university supporting and using commercial software -- sometimes rather expensive software -- when free, open source, and better software would serve as well.
If we believe the more charitable interpretation, then we need to do a better job helping administrators understand technology, software, and the values they embody. It also means getting involved in the hiring of administrators, to help bring in folks who either understand already or who are keen on learning. Both of these require the gritty sort of committee work and meetings that many academics run away from, me included. In a big organization, it is sometimes hard for grassroots involvement to have a big effect on hiring and promotion. But the effort is almost certainly worth it.
At PLoP last week Gerard Meszaros said something that caught my ear:
risk = probability × consequence
Why waste energy minimizing a risk whose consequence is too low to be worth the effort?
This idea came up later at the conference when Ward talked about the convention-busting assumption of wiki, but it is of course a central tenet in the agile methods. Too often, when faced with an undesirable potential result, we focus too quickly on the event's likelihood, or on its effects. But risk arises in the interplay between the two, not in either factor alone. If an event is likely to happen but has only a small negative effect, or if it has a major effect but is unlikely to occur, then our risk is mitigated by the second factor. Recognizing this can help us avoid the pitfall of running from a potential event for the wrong reasons.
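In numbers (the probabilities and dollar figures here are made up purely for illustration):

```python
def risk(probability, consequence):
    """Expected loss: the product of how likely and how costly."""
    return probability * consequence

# Two hypothetical undesirable events:
likely_but_cheap   = risk(0.9, 100)      # near-certain, small cost
rare_but_expensive = risk(0.001, 50000)  # catastrophic-sounding, but rare

print(likely_but_cheap, rare_but_expensive)   # -> 90.0 50.0
```

The scary-sounding event turns out to be the smaller risk of the two. Neither factor alone tells you that; only the product does.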
Recognizing this relationship can also help us to take control of the problem. In XP and the other agile methods, we accept that change is highly likely, so we work to minimize the consequence of change. We do that by maintaining a comprehensive suite of tests to help us verify that changes to the system don't break something unexpectedly; and, when they do, we use the tests to find and fix the problem spots. We minimize the consequence of change by using refactoring tools that help us to change the structure of our programs when design requirements call for something different. We minimize the consequence of change by working on short iterations, continuously integrating our code, and releasing versions frequently, because these disciplines ensure that we get feedback from our tools and client frequently.
Learning to pay attention to all the variables in a situation is more general than just assessing risk. In a recent message to the XP mailing list, Kent Beck said that he tries to help his customers to think in terms of return, not just value or cost:
I have another goal for early estimation, which is to encourage the business-decision-makers to focus on return instead of just value. If the choice is between the Yugo and the Ferrari, I'll take the Ferrari every time. If I have $6000 and I know the price of the two cars, my thinking is more realistic.
A system's value is a function of many variables, including its features, its cost, and the environment in which it must operate.
We can become more accurate, more efficient decision makers by paying attention to what really matters, and not being distracted by our biases and initial reactions. Often, these biases were learned in other times, other environments. Fortunately, I think that this is something that we can learn to do, by consciously developing new habits of thought. It takes discipline and patience to form new habits, but the payoff is often worth the effort.
One of the goals of the running category of my blog is to report on my runs while traveling. When I go to conferences or to visit friends, I try to find interesting places to run. I sometimes have a hard time finding good information on the web about routes and parks, so I figured I should share what I do find, plus any information I can add from my own experience.
The first stop in the Running on the Road series: Allerton Park, southwest of Monticello, Illinois.
I run at Allerton Park every year when I go to PLoP. As I've increased my mileage over the last couple of years, I've come up with a wider variety of running routes. The great thing about Allerton is the variety available. On the park grounds, you have a choice of trail runs that follow the Sangamon River and trail runs that go through the park's many sculptures and sculpted gardens. Be sure to run to the Sunsinger at least once--especially at sunrise! You can also run on Old Timber Road, the county road that runs through the park, or use it to reach a network of county roads that surround the park and run to Monticello.
First, check out this map to get a feel for the area. All of those county roads are runnable, so you can put together your own routes pretty easily. The hardest thing to do the first time around is judge distance.
Here are some of the routes I've used, by distance.
You should also check out this essay by a local runner about some good Allerton Park routes, as well as suggestions for good post-run eats in Monticello!
If you want to run laps on an outdoor track, here are directions to Monticello High School. I had planned to do a speed workout there in 2004, but at the last minute decided to do that workout at home before leaving and to run an 11-miler on the road in its place.
Allerton Park is a great place to run. It is hillier than my hometown, which makes it even a bit more challenging. Just be prepared for the shape of county roads: they drop off on the sides more than most city streets, which can wear on your hips after a few miles.
Ward Cunningham led the second session of the day, beginning with some of his own history. As many of you know, Ward is best known for taking ideas and turning them into programs, or ways of making programs better. He spoke about "wielding the power of programming", to be able to make a computer do what is important to you. If you can think of an idea, you can write a program to bring it about.
But programmers can also empower other people to do the same. Alan Kay's great vision going back to his grad school days is to empower people with a medium for creating and expressing thoughts. Ward pointed out that the first program to empower a large group of non-programmers was the spreadsheet. The web has opened many new doors. He called it "a faulty system that delivers so much value that we ignore its fault".
Ward's wiki also empowers people. It is an example of social software, software that doesn't make sense to be used by one person. The value is in the people who use it together. These days, social software dominates our landscape: Amazon, eBay, and a multitude of web-based on-line communities are just a few examples. Wiki works best when people seek common ground; it is perhaps best thought of as a medium for making and refining arguments, carrying on a conversation that progresses toward a shared understanding.
This dynamic is interesting, because wiki was predicated in part on the notion of taking a hard problem and acting as if it weren't a problem at all. For wiki, that problem is malevolent editing, users who come to a site for the purpose of deleting pages or defacing ideas. Wiki doesn't guard against this problem, yet, surprisingly, for the most part this just isn't a problem. The social processes of a community discourage malevolent behavior, and when someone violates the community's trust we find that the system heals itself through users themselves repairing the damage. A more subtle form of this is in the flip side of wiki as medium for seeking common ground: so-called "edit wars", in which posters take rigid positions and then snipe at one another on wiki pages that grow increasingly long and tedious. Yet community pressure usually stems the war, and volunteers clean up the mess.
Ward's latest thoughts on wiki focus on two questions, one technical and one social, but both aimed at a common end.
First, how can we link wikis together in a way that benefits them all? When there was just one wiki, every reference matched a page on the same server, or a new page was created. But now there are dozens (hundreds?) of public wikis on the web, and this leads to an artificial disjunction in the sharing of information. For example, if I make a link to AgileSoftwareDevelopment in a post to one wiki, the only page to which I can refer is one on the same server -- even if someone has posted a valuable page of that name on another wiki. How could we manage automatic links across multiple wikis, multiple servers?
Second, how can wiki help information to grow around the world? Ward spoke of wiki's role as a storytelling device, with stories spreading, being retold and changed and improved, across geographic and linguistic boundaries, and maybe coming back around to the originators with a trail of where the story has been and how it's changed. Think of the children's game "telephone", but without the accumulation of accidental changes, only intentional ones. Could my server connect to other servers that have been the source of stories that interested me before, to find out what's new there? Can my wiki gain information while forgetting or ignoring the stuff that isn't so good?
Some of these ideas exist today at different levels of human and program control. Some wikis have sister sites, implemented now with crosslinking via naming convention. But could such crosslinking be done automatically by the wikis themselves? For instance, I could tell my wiki to check Ward's nightly, looking for crossreferenced names and linking, perhaps even linking to several wikis for the same name.
In the world of blogging, we have the blogroll. Many bloggers put links to their favorite bloggers on their own blog, which serves as a way to say "I find these sites useful; perhaps you will, too." I've found many of the blogs I like to read by beginning at blogs by Brian Marick and Martin Fowler, and following the blogroll trail. This is an effective human implementation of the spreading of useful servers, and much of the blogging culture itself is predicated on the notion of sharing stories -- linking to an interesting story and then expanding on it.
Ward's discussion of automating this process brought to mind the idea of "recommender systems", which examine a user's preferences, find a subset of the community whose collective preferences correlate well with the user's, and then use that correlation to recommend content that the user hasn't seen yet. (One of my colleagues, Ben Schafer, does research in this area.) Maybe a collection of wikis could do something similar? The algorithms are simple enough; the real issue seems to be tracking and recording user preferences in a meaningful way. Existing recommender systems generally require the user to do a lot of the work in setting up preferences. But I have heard Ralph Johnson tell about an undergraduate project he directed in which preferences were extracted from Mac users' iTunes playlists.
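The core algorithm really is simple. Here is a minimal sketch of user-based collaborative filtering, with invented users and ratings (nothing here is anyone's actual system): weight each neighbor by how well their ratings correlate with yours, then rank the items you haven't seen by the neighbors' weighted scores.

```python
from math import sqrt

# Hypothetical ratings: user -> {item: score}. Data invented for illustration.
ratings = {
    "ann":  {"wiki": 5, "blogs": 4, "email": 1},
    "bob":  {"wiki": 4, "blogs": 5, "chat": 4},
    "carl": {"email": 5, "chat": 2},
}

def similarity(a, b):
    """Cosine similarity over the items both users rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm = sqrt(sum(a[i] ** 2 for i in common)) * sqrt(sum(b[i] ** 2 for i in common))
    return dot / norm

def recommend(user):
    """Rank unseen items by similarity-weighted scores from other users."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        w = similarity(ratings[user], theirs)
        for item, score in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + w * score
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann"))  # -> ['chat']
```

The hard part, as noted above, isn't this arithmetic; it's getting honest preference data into the `ratings` table in the first place.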
I must admit that I was a bit worried when I first heard Ward talk about having a wiki sift through its content and activity to determine who the most valuable contributors are. Who needs even narrower bandwidth for many potential posters to add useful content? But then I thought about all the studies of how power laws accurately model idea exchange in the blogosphere, and I realized that programmed heuristics might actually increase the level of democratization rather than diminish it. Even old AI guys like me sometimes react with a kneejerk when a new application of technology enters our frame of reference.
The Saturday sessions at PLoP created an avalanche of thoughts in my mind. I don't have enough time to think them through now or act on them, but I'll keep at it. Did I say before how much I like PLoP?
Yesterday was an unusual day for PLoP: we didn't have any writers workshops. This year, we squeezed all of the paper sessions into the first two days of the conference, setting aside Saturday for two extended sessions by Norm Kerth and Ward Cunningham. Among their many accomplishments, Norm and Ward are two of the founding fathers of the software patterns community, and their talks commemorated the first PLoP conference ten years ago.
Norm led a discussion on myths. I like two of the definitions offered. One person defined a myth as "a story that is so true you can't use the facts to explain it." Richard Gabriel's definition turned on a myth's effects: A myth is "a story capable of generating stories in every person who hears it."
Many folks might think that a discussion of myth is out of place at a conference on software development, but this session was spot on at PLoP for two reasons. First, this session celebrated the conference's tenth anniversary, and so stories about its founders and founding were on everyone's mind. The conference's opening session focused on the Genesis Myth of the patterns community, sprinkling facts among the essential truths. Norm's session this morning was more about the Hero's Journey myth, told in various forms by all cultures.
More important is the second reason. Science is itself a myth. It is a set of stories that purport to explain how and why the world is. We hold some of these stories true (the Greek theory of atoms) until a better story comes along (Newtonian physics, relativity, quantum mechanics, string theory...).
Software folks have their myths, too. Software engineering is a comprehensive one. Extreme programming is a myth, one of a complex set of myths that we call agile software development.
Patterns are themselves myths. They are stories we tell about the successful systems we see around us in the world. They have a narrative form and expected components. We use them to help others understand what we think we understand, and we write them to help ourselves understand.
I don't worry that these ideas are "just" stories that we tell. They embody our current understanding of the world. We try to use more scientific methods than our forebears in constructing and refining our stories, but we must always keep in mind that they are just that -- stories -- and that we can change them. One of the beauties of XP is that Kent Beck and his colleagues chose to create a story with such a challenging premise and the promise of something different. Because it is agile, it is made to be shaped to its environment, retold in as many different forms as there are tellers, as we all work together to find deeper truths about how to build better software better.
Deep truths often lie inside stories that are themselves not strictly factual. A classicist who now does software at IBM reminded us of this during the session. I love PLoP.
"... as we all work together to find deeper truths ..." is a great segue to Ward's afternoon session, but because I didn't post this last night I will have to wait to tell that story after I get home from the conference.
I was in a skit tonight. I don't think I've had a role in a skit not created by one of my daughters since 1985 or so, when I teamed with one of my best friends and the cutest girl I went to college with on a skit in our Business Law class.
The skit was put on by Mary Lynn Manns and Linda Rising to introduce their forthcoming book, Fearless Change: Patterns for Introducing New Ideas. They wrote a short play as a vehicle for presenting a subset of their pattern language at XP 2004, which they reprised here. I had two lines as an Early Adopter, a careful, judicious decision maker who needs to see concrete results before adopting a technology, but who is basically open to change. Wearing a construction paper hat, I pulled off my lines without even looking at my script. Look for the video at a Mr. Movies near you.
My favorite part of the skit that didn't involve me was this Dilbert comic that opened the show.
PLoP makes you do crazy things.
Our writers' workshop was excellent. Ralph Johnson and I had two papers on our elementary patterns work at ChiliPLoP. They are the first concrete steps toward a CS1 textbook driven by patterns and an agile approach from the beginning. The papers were well-received, and we gathered a lot of feedback on what worked well and what could stand some improvement. Now we're talking about how to give the writing project a boost of energy. I'll post links to the papers soon.
PLoP opened with a wonderful session led by Ward Cunningham and Norm Kerth on the history of the software patterns community. I've heard many of the community's creation stories, yet the collaborative telling was great fun. I even contributed a bit on my first PLoP in 1996. And I picked up a few new tidbits of interest:
However, I think that the most important idea that I left the session with is one that I'm still thinking on. One of the participants commented that The Nature of Order was Alexander's effort to get beyond a fundamental error underlying his work on patterns. Paraphrased:
Patterns are an autopsy of a successful system. An autopsy can do many things of value, but it doesn't tell us how to build a living thing.
That's what a pattern language strives to do, of course. But Alexander's experience applying his pattern language in Mexicali was only a mixed success. The language generated livable spaces that were more "funky" than beautiful. The Nature of Order seeks to identify the fundamental principles that give rise to harmony and beauty in created things, that give rise to patterns in a particular time and place and community.
I think we still need to discover the pattern languages embodied in our great programs. But "pattern as autopsy" is such an evocative phrase that I'll surely puzzle over it more until I have a better idea of where the boundary between dissection and creation lies.
Many people express surprise that anyone can enjoy running, especially when it comes to long distances or fast paces. Every once in a while, though, I have a run that reminds me just why I get up every morning. Usually, it's a medium to long run on the trails, not too fast or slow, that raises the spirit. This morning, I had a speed workout that left me feeling just as good.
Darkness. A crystal clear sky filled with stars. The temperature a bit chilly -- 48 degrees -- but perfect for keeping the body cool as it works. Fast repeats with short recoveries, with every lap feeling good, faster than usual. The sunrise lights the sky as I run, but it doesn't hide all of the stars. I finish in bright sunshine on a beautiful September day, a few degrees warmer, but still brisk. My legs feel spent but strong. I'm alive.
I'm busily preparing to leave town for PLoP, and I haven't had time to write lately. I also haven't had time to clarify a bunch of loose ideas in my head on the topics of aptitude for computing and the implications for introductory CS courses. So I'll write down my jumble as is.
A couple of months ago, I read E.L. Doctorow's memoir, Reporting The Universe. As I've noted before, I like to read writers, especially accomplished ones, writing about writing, especially how and why they write. One of Doctorow's comments stuck in my mind. He claimed that the aptitude for math and physics (and presumably computing) is rare, but all people are within reach of writing good narrative. For some reason, I didn't want to believe that. Not the part about writing, because I do believe that most folks can learn to write well. I didn't want to believe that most folks are predisposed away from doing computing. I know that it's technical, and that it seems difficult to many. But to me, it's about communicating, too, and I've always held out hope that most folks can learn to write programs, too.
Then on a mailing list last weekend, this topic came up in the form of what intro CS courses should be like.
These guys are among the best, if not the best, intro CS teachers in the country, and they make their claims from deep understanding and broad experience. The ideas aren't mutually exclusive, of course, as they talk some about what our courses should be like and about the aptitude that people may have for the discipline. I'm still trying to tease the two apart in my mind. But the aptitude issue is stickier for me right now.
I'm reminded of something Ralph Johnson said at a PLoP a few years ago. He offered piano teachers as a motivating example. Perhaps not everyone has the aptitude to be a concert pianist, but piano teachers have several pedagogies that enable them to teach nearly anyone to play piano serviceably. Perhaps we can aim for the same: not all of our students will become masters, but all can become competent programmers, if they show some interest and put in the work required to get better.
Perhaps aptitude is more of a controlling factor than I have thought. Certainly, I know of more scientists and mathematicians who have enjoyed writing (and well) than humanities folks who enjoy doing calculus or computer programming on the side. But I can't help but think that interest can trump aptitude for all but a few, so long as "merely" competence is the goal.
ASCII art seems to be going through a renaissance these days. This quintessentially '70s art form has attracted a cult following among today's youth. An example of this phenomenon is the fun little search-engine-and-art-generator Toogle. Toogle feeds your query to Google Images and then generates an ASCII art version of the #1 image it finds using the characters of the query as its alphabet. The images look pretty good; it even reproduces colors well.
Now I can tell everyone that I really am a man of letters.
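For the curious, the core trick is simple to sketch. This is my reconstruction of the idea, not Toogle's actual code, and the tiny "image" below is a hand-made brightness grid standing in for a fetched Google Images result: bright pixels become blanks, and dark pixels spend the next character of the query.

```python
# A sketch of the Toogle trick (my reconstruction, not its actual code):
# map pixel brightness to characters drawn from the query string.

query = "toogle"

# A tiny synthetic grayscale "image" (0 = black, 255 = white), invented
# for illustration; a real version would load the top image search hit.
image = [
    [250, 120,  30, 120, 250],
    [120,  30,  30,  30, 120],
    [250, 120,  30, 120, 250],
]

def to_ascii(image, alphabet):
    """Render dark pixels as the query's characters, bright ones as spaces."""
    lines = []
    k = 0  # index into the alphabet, cycling through the query
    for row in image:
        chars = []
        for px in row:
            if px > 200:          # bright pixel: leave blank
                chars.append(" ")
            else:                 # dark pixel: use the next query character
                chars.append(alphabet[k % len(alphabet)])
                k += 1
        lines.append("".join(chars))
    return "\n".join(lines)

print(to_ascii(image, query))  # prints a blocky pattern built from "toogle"
```

A full version would also sample colors per pixel, which is presumably how Toogle reproduces them so well.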
Conventional wisdom is that many software projects are completed late and over budget. I don't know whether this is true, or to what extent, but few dispute the claim. Within the software engineering community, this assertion is used as motivation to develop more rigorous methods for building software and managing the process. In the popular press, it is viewed with much head shaking and a bit of disdain. Yet people must continue to buy software.
We face a similar issue in academia. Many professors accept late work for a few days after an assignment is due. Sometimes they assess a penalty, such as 10% of the available points per day. But compassion usually dictates some degree of flexibility with our young charges.
My grading policy has always been not to accept late work. I tell students to submit their best available work at the time the assignment is due. I also place a lower bound on the acceptable quality of a submission: If a program doesn't compile, or compiles but blows up horribly when run, then the resulting grade will be quite low. I tell students that, all other things being equal, a compilable, runnable, yet incomplete program is more valuable than a program that doesn't compile or run. It's hard for me to have much confidence in what a student knows or has created when even the compiler balks.
I'm reasonable enough to make exceptions when events warrant them. Sometimes, extenuating circumstances interfere with a student's opportunity to do the assigned work in a timely fashion. Sometimes a reasonably good, nearly complete program causes an exception in an unusual situation that the student doesn't yet understand. But for the most part, the policy stands. Students long ago stopped questioning this rule of mine, perhaps accepting it as one of my personal quirks. But when deadlines approach, someone will usually ask for a break because with just a little more time...
Of course, I also encourage students to do short iterations and generate many "small releases" as they write programs. If they work systematically, then they can always be in the position of having a compilable, runnable -- if incomplete -- program to submit at every point in a project. I demonstrate this behavior in class, both in project retrospectives and in my own work at the computer. I don't know how many actually program this way themselves.
These thoughts came to mind earlier this week when I saw a message from Ron Jeffries to the XP mailing list, which appeared in expanded form in his blog as Good Day to Die. Ron considers the problem of late software delivery in industry and wonders,
What if the rule was this?
On the day and dollar specified in the plan, the project will terminate. Whatever it has most recently shipped, if it has shipped anything, will be assessed to decide whether the project was successful or not.
Does that sound familiar? His answer sounds familiar, too. If you program in the way that the so-called agile methods people suggest, this won't be a problem. You will always have a working deliverable to ship. And, because you will have worked with your client to choose which requirements to implement first, the system you ship should be the best you could offer the client on that day. That is fair value for the time spent on the project.
Maybe my grading policy can help students learn to produce software that achieves this much, whatever its lifespan turns out to be.