I never wrote a Running Year in Review for 2008, though I threatened to. What was there to say after my Running Half-Year in Review from July 2008? I had a lost summer and a year distinguished mostly by not running. I did manage to get in three miles on December 30, which got me over the 1100 mile mark for the year, reminiscent of the last day of 2007. Unfortunately, nearly 500 of those miles came in the first four months of the year.
2009 has been an unusual year. I had only one long stretch of not running, with but three miles for the year as late as February 9, but I had four other weeks of zeros due to health. I also had a difficult time doing more than 15-20 miles a week without having a tough week to follow. Finally, speed may not kill me, but it knocks me down. I have not run a truly fast interval work-out in over two years.
Yet I look to my running log and am happy to realize that I ran five races this year:
The three weeks after the marathon were normal recovery weeks, but then came six weeks of lower mileage and higher fatigue. I lost last week to a cold, but have been doing some short, slow miles this week, both indoors and out. After this morning's four miles, the log shows 1128.8 miles for the year. This isn't much more than last year's total, but it feels a lot different. I managed two sustained periods of training, with a faster pace at least one day a week. My marathon training contained fewer days and fewer miles than ever before, yet it was steady and progressive. I will use it as a template for a spring training session and see where it takes me.
All in all, a mixed year with plenty of ups to keep me hopeful and eager. Bring on 2010.
I've been enjoying time away from the office, classes, and even programming for the last week or so. After a long semester, spending time with my wife and daughters is just right.
It also gives me a chance to clean up my home office. What I am doing today is effectively refactoring: improving the structure of my stuff without adding any new capabilities. After this round of refactoring, I'll be ready to bring some new furniture in and do a couple of things I've been wanting to do since we moved in last December.
I won't strain the metaphor any farther, but I must say that my work day is a paradigm for this tweet by former student, musician, and software pro Chuck Hoffman:
"don't have time to refactor now" leads to "everything takes way more time because the code is confusing." The time gets spent either way.
True of code. True of papers piled high on a desktop or stacked in the corner of the room. In either world, you can pay now, or pay more later.
And now even the grading is done. I enjoyed reading my students' answers to exam questions about software engineering, especially agile approaches. Their views were shaped in part by things I said in class, in part by things I asked them to read, and in part by their own experiences writing code. The last of these included a small team project in class, for which two teams adopted many XP practices.
Many people in the software industry have come to think of agile development as implying an incomplete specification. A couple of students inferred this as well, and so came to view one of the weaknesses of agile approaches as a high risk that the team will go in circles or, worse yet, produce an incomplete or otherwise unacceptable system because they did not spend enough time analyzing the problem. Perhaps I can be more careful in how I introduce requirements in the context of agile development.
One exam question asked students to describe a key relationship between refactoring and testing. Several students responded with a variation of "Clean code is easier to test." I am not sure whether this was simply a guess or what they actually think. It's certainly true that clean code is easier to test, and for teams practicing more traditional software engineering techniques this may be an important reason to refactor. For teams that are writing tests first or even using tests to drive development, though, this reason is not quite as important. The answer I was hoping for: After you refactor, you need to be able to run the test suite to ensure that you have not broken any features.
Another person wrote an answer that was similar to the one in the preceding paragraph, but I read it as potentially more interesting: "Sometimes you need to refactor in order to test a feature well." Perhaps this answer was meant in the same way as "clean code is easier to test". It could mean something else, though, related to an idea I mentioned last week, design for testability. In XP, refactoring and test-first programming work together to generate the system's design. The tests drive additions to the design, and refactoring ensures that the additions become part of a coherent whole. Sometimes, you need to refactor in order to test well a feature that you want to add in the next iteration. If this is what the student meant, then I think he or she picked up on something subtle that we didn't discuss explicitly in class.
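One way to read that student's answer: a class hard-wired to an external dependency cannot be tested well until the dependency is refactored out. Here is a minimal sketch of the idea in Python. The class and its names are my own invention for illustration, not drawn from the students' projects; the refactoring shown is the common move of injecting a dependency so that a test can control it.

```python
import datetime

class Report:
    """Hypothetical report generator, used only to illustrate the idea."""

    def __init__(self, clock=datetime.date.today):
        # Before refactoring, the class called datetime.date.today()
        # directly, so a test could never predict the header's contents.
        # Injecting the clock lets a test substitute a fixed date.
        self.clock = clock

    def header(self):
        return f"Report for {self.clock().isoformat()}"

# In a test, we pass a fake clock instead of relying on today's date.
fixed = lambda: datetime.date(2009, 12, 15)
assert Report(clock=fixed).header() == "Report for 2009-12-15"
```

The refactoring adds no new feature; it only loosens the coupling so that the next date-related feature can be driven by a test.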
When asked what the hardest part of their project had been and what the team had done in response to the challenge, one student said, "We had difficulty writing code, so we looked for ways to break the story into parts." Hurray! I think this team got the idea.
A question near the end of the exam asked students about Fred Brooks's No Silver Bullet paper. They read this paper early in the semester to get some perspective on software engineering and wrote a short essay about their thoughts on it. After they had worked on a team project for ten weeks, I asked them to revisit the paper's themes in light of their experience. One student wrote, "A good programmer is the best solution to engineering software."
A lot of the teams seem to have come to a common understanding that their design and programming skills and those of their teammates were often the biggest impediment to writing the software they envisioned. If they take nothing from this course other than the desire and willingness to work hard to become better designers and programmers, then we will have achieved an outcome more important than anything we try to measure with a test.
I agree with Brooks and these students. Good programmers are the best solution to engineering software. The trick for us in computer science is how to grow, build, or find great designers and programmers.
The semester is over. All that remains for us professors is to grade final exams and turn in course grades. All that remains for some students is waiting anxiously for those grades to hit the record -- or asking for e-mail notice as soon as the grade is computed.
Writing the software engineering final exam reminded me how much my ideas about exam-giving have changed over my many years as a teacher. Consider this passage from Lewis Carroll's Alice:
'Give your evidence,' said the King; 'and don't be nervous, or I'll have you executed on the spot.'
Students often feel like the person hearing the king's command. I think back in the old days I secretly felt like the king, and thought that was good. I don't feel that way any more. I still demand a lot, especially in my technical courses, but my mindset is less the king's and more Charles Colton's:
Examinations are formidable even to the best prepared, for the greatest fool may ask more than the wisest man can answer.
I am much more careful about the questions I ask on exams now. If I have any doubts about a question -- what it means, what students might take it to mean, how it relates to what we have done in class -- I try to re-write it in some way. Students taking the exam are working under the constraints of time and nerves, so it's important that questions be as clear and as straightforward as possible.
Surely I fail in this at times, but at least now I am aware of the problem and try to solve it. In my early years as a professor, I was probably a bit too cavalier. I figured that the grades would all work themselves out in the end. They always did, but I was forgetting about something else: the way students experienced the exams. Those experiences color how students feel about the course, and even the course's topic, along the way.
I've also changed a bit in how I think about grades. I have never thought of myself as "giving" grades to students; I merely assigned the grades that students earned. But I was pretty well fixed in how I approached the earning of grades. Do the homework, do the assignments, take the tests -- earn the grade. I've always been willing to make course-level adjustments in a due date or in how I would grade an assignment, in response to what is happening with me and the students. I feel more flexible these days in making individual adjustments, too, though I can't think of many specific examples to serve as evidence that my feeling is warranted.
I do still have some quirks that set my grading apart from many of my colleagues'. Assignments are due when they are due. (I've written about that before.) I do not prepare study guides for the class. (That seems like the students' job.) And I don't create gratuitous extra-credit work at the end of the term for students who simply didn't do the regularly-assigned work earlier in the semester. (That hardly seems fair to the students who did the work.) But my mentality is different. I have always tried to encourage and reassure students. Now I try to pay as much attention to the signals I send implicitly as to my explicit behavior. Again, I know I don't always succeed in this, but my students are probably better off when I'm trying than when I'm oblivious.
Today I was thinking retrospectively about themes as I wrote the final exam for my software engineering course. It occurred to me that one essential theme of the course needs to be design for testability. We talked about issues such as coupling and cohesion, design heuristics and patterns for making loosely connected, flexible code, and Model-View-Controller as an architectural style. Yet much of the code they wrote was coupled too tightly to test conveniently and thoroughly. I need to help them connect the dots from the sometimes abstract notions of design to both maintenance and testing. This will help me bring some coherence to the design unit of the course.
I am beginning to think that much of the value in this course comes from helping students to see the relationships among the so-called stages of the software life cycle: design and testing, specification and testing, design and maintenance, and so on. Each stage is straightforward enough on its own. We can use our time to consider how they interact in practice, how each helps and hinders the others. Talking about relationships also provides a natural way to discuss feedback in the life cycle and to explore how the agile approaches capitalize on the relationships. (Test-driven development is the ultimate in design for testability, of course. Every bit of code is provoked by a test!)
I realize that these aren't heady revelations. Most of you probably already know this stuff. It's amazing that I can teach a course on writing software for an entire semester, after so many years of writing software myself, and only come to see such a basic idea clearly after having made a first pass. I guess I'm slow. Fortunately, I do seem eventually to learn.
Last night I read a few words that I needed to see. They come from Elizabeth Gilbert, on writing:
Quit your complaining. It's not the world's fault that you wanted to be an artist. It's not the world's job to enjoy the films you make, and it's certainly not the world's obligation to pay for your dreams. Nobody wants to hear it. Steal a camera if you have to, but stop whining and get back to work.
Plug 'programmer' or 'teacher' in for 'artist', and 'laptop' for 'camera', and this advice can help me out on most days. Not because I feel unappreciated, but because I feel pulled in so many directions away from what I really want to do: preparing better courses and -- on far, far too many days -- writing code. Like Gilbert, I need to repeat those words to myself whenever I start to feel resentful of my other duties. No one cares. I need to find ways to get back to work.
Gilbert closes her essay with more uplifting advice that also feels right:
My suggestion is that you start with the love and then work very hard and try to let go of the results.
Oh, and if you haven't seen Gilbert's TED talk, head over to her home page and watch. It's a good one.
We have entered finals week, which for me means grading team projects and writing and grading a final exam. As I think back over the term, a few things stand out.
Analysis. This element of software engineering was a challenge for me. The previous instructor was an expert in gathering requirements and writing specs, but I did not bring that expertise to the course. I need to gather better material for these topics and think about better ways to help students experience them.
Design and implementation. The middle part of my course disappointed me. These are my favorite parts of making software and the areas about which I know the most, both theoretically and practically. Unfortunately, I never found a coherent approach for introducing the key ideas or giving students deep experiences with them. In the end, my coverage felt too clinical: software architectures, design patterns, refactoring... just ideas. I need to design more convincing exercises to give students a feel for doing these; one big team project isn't enough. Too much of the standard software engineering material here boils down to "how to make UML diagrams". Blech.
Testing. Somewhat to my surprise, I enjoyed this material as much as anything in the course. I think now that I should invert the usual order of the course and teach testing first. This isn't all that crazy, given the relationship between specs and tests, and it would set us up to talk about test-driven design and refactoring in much different ways. The funny thing is that my recently-retired software engineering colleague, who has taught this course for years, said this idea out loud first, with no prompting from me!
More generally, I can think of two ways in which I could improve the course. First, I sublimated my desire to teach an agile-driven course far too much. This being my first time to teach the course, I didn't want to fall victim to my own biases too quickly. The result was a course that felt too artificial at times. With a semester under my belt, I'll be more comfortable next time weaving agile threads throughout the course more naturally.
Second, I really disappointed myself on the tool front. One of my personal big goals for the course was to be sure that students gained valuable experience with build tools, version control, automated testing tools, and a few other genres. Integrating tool usage into a course like this takes either a fair amount of preparation time up front, or a lot more time during the semester. I don't have as much in-semester time as I'd like, and in retrospect I don't think I banked enough up-front time to make up for that. I will do better next time.
One thing I think would make the course work better is to use an open-source software project or two as a running example in class throughout the semester. An existing project would provide a concrete way to introduce both tools and metrics, and a new project would provide a concrete way to talk about most of the abstract concepts and the creative phases of making software.
All this said, I do think that the current version of the course gave students a chance to see what software engineering is and what doing it entails. I hope we did a good enough job to have made their time well-spent.
Joe Haldeman is a writer of some renown in the science fiction community. I have enjoyed a novel or two of his myself. This month he wrote the Future Tense column that closes the latest issue of Communications of the ACM, titled Mightier Than the Pen. The subhead really grabbed my attention.
Haldeman still writes his novels longhand, in bound volumes. I scribble lots of notes to myself, but I rarely write anything of consequence longhand any more. In a delicious irony, I am writing this entry with pen and paper during stolen moments before a basketball game, which only reminds me how much my penmanship has atrophied from disuse! Writing longhand gives Haldeman the security of knowing that his first draft is actually his first draft, and not the result of the continuous rewriting in place that word processors enable. Even a new-generation word processor like WriteBoard, with automatic versioning of every edit, cannot ensure that we produce a first draft without constant editing quite as well as a fountain pen. We scientists might well think as much about the history and provenance of our writing and data.
Yet Haldeman admits that, if he had to choose, he would surrender his bound notebooks and bottles of ink:
... although I love my pens and blank books with hobbyist zeal, if I had to choose between them and the computer there would be no contest. The pens would have to go, even though they're so familiar they're like part of my hand. The computer is part of my brain. It has reconfigured me.
We talk a lot about how the digital computer changes how we work and live. This passage expresses that idea as well as any I've seen and goes one step more. The computer changes how we think. The computer is part of my brain. It has reconfigured me.
Unlike so many others, Haldeman -- who has tinkered with computers in order to support his writing since the Apple II -- is not worried about this new state of the writer's world. This reconfiguration is simply another stage in the ongoing development of how humans think and work.
Our town was hit with a blizzard over the last couple of days. Not only did it close the local schools, it even shut down my university -- a powerful storm, indeed.
I thought I might treat the day off as 'found time', and hack a little code I've been thinking about...
I feel a kinship with [Cormac McCarthy's] sense of a perfect day. To sit in a room, alone, with an open terminal. To write, whether prose or code, but especially code. (11/21/09)
... but I never wrote a line of code. Instead, I shoveled snow (a lot of snow). I wrote Christmas cards in the kitchen while my daughters baked cookies for their teachers. We listened to Christmas music and made chili and laughed.
Unlike McCarthy, I do not think that everything other than writing is a waste of time. Today was a perfect day.
Can there be two kinds of perfect day? Can there be different kinds of perfect? Indeed, there are multitudes. The sky is always a perfect sky, even as it changes from moment to moment.
We live in a world of partial order. There is no total ordering on experience.
An undefined problem has an infinite number of solutions.
-- Robert A. Humphrey
It's always seemed to me that one of the best motivations for writing tests first is to know when I'm done. I am prone to wandering in the wilderness and to overthinking what I do. Writing a test helps keep me close to task. When I first heard Kent Beck ask, "How do you know when you are done?", a little light went on for me. I felt something similar when I first saw the above quote on the old "Thinking Again!" blog. A test helps to define my task, circumscribing the problem, taking me from what is usually an incomplete statement in the specification or in a list of requirements to a concrete answer to the question, "Am I done?"
The idea behind test-driven development is that well-written tests can do more. They also evoke a particular design and implementation. This gives me not only an idea of where I am going, but also an idea of how to get there.
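A small sketch of that idea, using a hypothetical word_count function of my own invention: the test comes first, and it is the concrete answer to "Am I done?"

```python
# The test is written before the code. It circumscribes the problem:
# when every assertion passes, the task -- as the test defines it -- is done.
def test_word_count():
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("to be or not to be") == 6

# The simplest implementation that makes the test pass.
def word_count(text):
    return len(text.split())

test_word_count()  # runs without an assertion error, so this task is done
```

The test also hints at the design: word_count should be a plain function of a string, not a method buried in some larger class. That is the sense in which well-written tests evoke an implementation.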
That said, an undefined problem may have zero solutions, and executable code can help us avoid this circumstance, too. Adam Bosworth makes this connection in his blog entry on creating healthcare XML standards. He warns against writing standards in the abstract only:
5. Always have real implementations that are actually being used as part of design of any standard. It is hard to know whether something actually works or can be engineered in a practical sense until you actually do it.
Much like analysis and design done independent of any code, a standard written independent of a reference implementation may have big holes in it, not make sense to potential implementors, or not be implementable at all! This ought not be surprising. A standard is a specification for a set of systems that we envision, and our imaginations can outrun our ability to build clear solutions. The best way not to get ahead of ourselves is to build a clear solution as we go along.
The working system we build is a proof of concept for the standard, a concrete implementation that helps us to verify that we are on the right track. If those who are intimately familiar with the standard being written cannot implement it, then almost certainly others will struggle more.
This implementation is different than a test in TDD, a mirror image really. But it plays a similar role in helping us to know if we are done. If our reference implementation is unclear or hard to build, we can find weaknesses in the standard as it is written. If we want to write a good standard, a usable standard, then we are not done.
Bosworth's advice echoes the values that underlie continuous testing in the agile approaches: a preference for working code, a desire to test ideas against reality, and a desire for continuous feedback that we can use to make our product better.
Other advice in Bosworth's article embodies agile values, too, especially a preference for people and simplicity. Consider:
1. Keep the standard as simple and stupid as possible.
2. The data being exchanged should be human readable and easy to understand.
Simplicity and communication, intertwingled.
This talk ends with a passage that brought to mind discussion in recent months among agile software developers and consultants about the idea of certifying agile practitioners:
Everyone interested in licensing our field might note that the reason licensing has been invented is to protect the public not designers or clients. "Do no harm" is an admonition to doctors concerning their relationship to their patients, not to their fellow practitioners or the drug companies.
Much of the discussion in the agile community about certification seems more about protecting the label "agile" from desecration than about protecting our clients. It may well be that some clients are being harmed when unscrupulous practitioners do a lazy or poor job of introducing agile methods, because they are being denied the benefits of a more responsive development process grounded in evidence gathered from continuous feedback. A lot of the concern, though, seems to be with the chilling effect that poorly-executed agile efforts have on the ability of honest and hard-working agile consultants and developers to peddle our services under that banner.
I don't know what the right answer to any of this is, but I like the last sentence of Glaser's talk:
If we were licensed, telling the truth might become more central to what we do.
Whether we are licensed or not, I think the answer will ultimately come back to a culture of honesty and building trust in relationships with our clients. So we can all practice Glaser's tenth piece of advice: Tell the truth.
It has been over five weeks since my blog entry on running. Why? At first, I was recovering from running a marathon and didn't have much to say: a few extra days off, a few fewer miles, but mostly a natural recovery. Then, three weeks ago or so, I came down with a little cold that now seems to be more of the same of what I was feeling most of last year and earlier this year. That's been disheartening, both because it has cut into my running and because it raises the specter of a longer down time. On top of that, the last bout of this ended without resolution, so who knows how my health will be in the coming months.
At this point, I am on track to reach my 2008 mileage in 2009, as long as I can get back to even some low-mileage weeks to finish off December. That might make for a nice psychological boost.
I did smile last month when I read tweets and blog posts reporting on the unofficial RubyConf 5K Run, which I first saw mentioned a couple of months back. Awesome. I run solo at so many conferences; it would be great to have some company. A couple of my students have been following the Couch to 5K program that some of the Rubyists used to get ready for the fun run. Again, awesome. I'm glad to see folks making changes to live healthier lives, and running has given me a lot of happy hours -- and burned a lot of calories for me!
If the 5K becomes a regular event, it provides one more good reason for me to attend RubyConf -- if I can ever afford it...
My daughter sent me a link to Pranav Mistry's TED India talk, which has apparently been making the rounds among media-savvy high school students and teachers. In it, Mistry demonstrates some very cool technology that blurs the boundary between human experience in the world and human experience mediated by a computer. The kids and teachers turned on by the video are all media-savvy, many are tech-savvy, but few are what we would consider computer science-style techies. They are so excited by Mistry's devices because these devices amplify what humans can do and create qualitatively different kinds of experience.
I loved Mistry's own way of accounting for the excitement his technology causes in people who see and experience it:
We humans actually are not interested in computing. What we are interested in is information. We want to know about things.
Spot on. People want to use computers to compute something of consequence. This is true of most non-techies, but I think it's also true of people who are inclined to study computer science. This is one of the key insights behind Astrachan's Law and its corollary, the Pixar Effect. Students want to do something worth doing. Programming with data and algorithms that are interesting enough to challenge students' expectations can be enough to satisfy these laws, but I have to admit that when we hook our programs up to devices that mediate between the world and our human experience -- wow, amazing things can happen.
If nothing else, Mistry's video has raised the bar on what my daughter would like for a Christmas present. I'll have to send him a thank-you note...