I know all about the idea of "writing to learn". It is one of the most valuable aspects of this blog for me. When I first got into academia, though, I was surprised to find how many books in the software world are written by people who are far from experts on the topic. Over the years, I have met several serial authors who pick a topic in conjunction with their publishers and go. Some of these folks write books that are successful and useful to people. Still, the idea has always seemed odd to me.
In the last few months, I've seen several articles in which authors talk about how they set out to write a book on a topic they didn't know well, or even at all. Last summer, Alex Payne wrote this about writing the tapir book:
I took on the book in part to develop a mastery of Scala, and I've looked forward to learning something new every time I sit down to write, week after week. Though I understand more of the language than I did when I started, I still don't feel that I'm on the level of folks like David Pollak, Jorge Ortiz, Daniel Spiewak, and the rest of the Scala gurus who dove into the language well before Dean or I. Still, it's been an incredible learning experience ...
Then today I ran across Noel Rappin's essay about PragProWriMo:
I'm also completely confident in this statement -- if you are willing to learn new things, and learn them quickly, you don't need to be the lead maintainer and overlord to write a good technical book on a topic. (Though it does help tremendously to have a trusted super-expert as a technical reference.)
Pick something that you are genuinely curious about and that you want to understand really, really well. It's painful to write even a chapter about something that doesn't interest you.
This kind of writing to learn is still not a part of my mentality. I've certainly chosen to teach courses in order to learn -- to have to learn -- something I want to know, or know better. For example, I didn't know any PHP to speak of, so I gladly took on a 5-week course introducing PHP as a scripting language. But I have a respect for books, perhaps even a reverence, that makes the idea of publishing one on a subject in which I am not expert unpalatable. I have too much respect for the people who might read it to waste their time.
I'm coming to learn that this attitude probably places an unnecessary limit on me. Articles like Payne's and Rappin's remind me that I can study something and become expert enough to write a book that is useful to others. Maybe it's time to set out on that path.
Getting people to take this step is one good reason to heed the call of Pragmatic Programmers Writing Month (PragProWriMo), which is patterned after the more general National Novel Writing Month (NaNoWriMo). Writing is like anything else: we can develop a habit that helps us to produce material regularly, which is a first and necessary step to ever producing good material regularly. And if research results on forming habits are right, we probably need a couple of months of daily repetitions to form a habit we can rely on.
So, whether it's a book or blog you have in mind, get to writing.
(Oh, and you really should click through the link in Rappin's essay to Merlin Mann's Making the Clackity Noise for a provocative -- if salty -- essay on why you should write. From there, follow the link to Buffering, where you will find a video of drummer Sonny Payne playing an extended solo for Count Basie's orchestra. It is simply remarkable.)
I was happy to come across Greg Wilson's talk, Bits of Evidence, on empirical data in software engineering. When I started preparing to teach software engineering last summer, I looked for empirical data to support some of the claims that we make about building software. I wasn't all that successful. I figured that either I wasn't looking hard enough, or there wasn't much. The answer probably lies somewhere in the middle.
Someone could do the SE world a great service by gathering, organizing, and providing links to all the good work that has been done. Wilson is one of the people I turn to for pointers to empirical SE results.
I did have fun reading some classic old work in this area. One is McCabe's original article on cyclomatic complexity. This is very cool and has a nice tie to theory, but it simply describes a metric. It does not present data from real programs to which the metric has been applied, and it doesn't provide any baseline for comparison. When he speaks of 10 as a reasonable upper bound for a module's cyclomatic complexity, or of code with cyclomatic complexity in the 3-7 range as "quite well structured", I wonder "Why?" He drew these values from experience looking at production code available to him, but these numbers feel a bit unreliable.
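The metric itself is easy to compute mechanically. For a single-entry, single-exit flow graph, counting the decision points in a program and adding one gives the same value as McCabe's E - N + 2P. Here is a minimal sketch in Python using the standard ast module; the particular set of node types counted is my simplification for illustration, not McCabe's original formulation:

```python
import ast

# Node types that introduce a decision point (branch) in the flow graph.
# This list is a rough approximation; e.g., a chained BoolOp counts once.
DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler,
             ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: number of decision points + 1."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS)
                   for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # two decisions, so complexity 3
```

A tool like this tells us the number for a module; what it cannot tell us is whether 3 is "quite well structured" or 10 is too high. That is exactly the kind of claim that wants empirical data behind it.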
I'm not a stickler who needs statistically significant analysis of carefully collected data, though. I like to learn from experience that is gathered and presented qualitatively. I enjoyed reading about an informal experiment into the complexity of code developed test-first. We need to be careful to take the results with a grain of salt, given the informality of the definitions and methodology used, but the experiment seems to say something useful. I also liked Martin Fowler's reflective analysis of dynamic type checking in production Ruby code from ThoughtWorks. He also wrote an interesting reflection on Ruby at ThoughtWorks that I learned from.
Still, we need more of the sort of empirical evidence that Wilson offers in his talk. As a discipline, we can do a better job of paying attention to the validity of our claims, and of more frequently asking, "Data, please!"
I really enjoyed reading the text of William Cook's banquet speech at ECOOP 2009. When I served as tutorials chair for OOPSLA 2006, Cook was program chair, and that gave me a chance to meet him and learn a bit about his background. He has an interesting career story to tell. In this talk, he tells us this story as a way to compare and contrast computer science in academia and in industry. It's well worth a read.
As a doctoral student, I thought my career path might look similar to Cook's. I applied for research positions in industry, with two of the Big Six accounting firms and with an automobile manufacturer, at the same time I applied for academic positions. In the end, after years of leaning toward industry, I decided to accept a faculty position. As a result, my experience with industrial CS research and development is limited to summer positions and to interactions throughout grad school.
Cook's talk needs no summary; you should read it for yourself. Here are a few points that stood out to me as I read:
Venture capitalists talk about pain killers versus vitamins. A vitamin is a product that might make you healthier over the long run, but it's hard to tell. Pain killers are products where the customer screams "give it to me now" and asks what it costs later. Venture capitalists only want to fund pain killers.
This is true not only of venture capitalists, but of lots of people. As a professor, I recognize this in myself and in my students all of the time. Cook points out that most software development tools are vitamins. So are many of the best development practices. We need to learn tools and practices that will make us most productive and powerful in the long run, but without short-term pain we may not generate the resolve to do so.
We read all the standard references on OO for business applications. It didn't make sense to us. We started investigating model-driven development styles. We created modeling languages for data, user interfaces, security and workflow. These four aspects are the core of any enterprise application. We created interpreters to run the languages, and our interpreters did many powerful optimizations by weaving together the different aspects.
To me, this part of the talk exemplifies best how a computer scientist thinks differently than a non-computer scientist, whether experienced in software development or not. Languages are tools we create to help us solve problems, not merely someone else's solutions we pluck off the shelf. Language processors are tools we create to make our languages come to life in solving instances of actual problems.
The way I see it is that industry generally has more problems than they do solutions, but academia often has more solutions than problems.
Cook makes a great case for a bidirectional flow between industry, with its challenging problems in context, and academia, with its solutions built of theory, abstraction, and design. This transfer can be mutually beneficial, but it is complicated by context:
Industrial problems are often messy and tied to specific technical situations or domains. It is not easy to translate these complex problems into something that can be worked on in academia. This translation involves abstraction and selection.
The challenge is greatest when we then take solutions to problems abstracted from real-world details and selected for interestingness more than business value and try to re-inject them into the wild. Too often, these solutions fail to take hold, not because people in industry are "stupid or timid" but because the solution doesn't solve their problem. It solves an ideal problem, not a real one. The translation process from research to development requires a finishing step that people in the research lab often have little interest in doing and that people in the development studio have little time to implement. The result is a disconnect that can sour the relationship unnecessarily.
Finally, the talk is full of pithy lines that I hope to steal and use to good effect sometime soon. Here is my favorite:
Simplicity is not where things start. ... It is where they end.
Computer scientists seek simplicity, whether in academia or in industry. Cook gives us some clues in this talk about how people in these spheres can understand one another better and, perhaps, work better together toward their common goal.
I don't usually advertise much in this space, but I have to put in a positive word for the On the Road for Education marathon, which I ran this weekend. Actually, the event is a set of races: a full marathon, a half marathon, a 10K, and a 5K. They are organized as a fundraiser for Mason City's parochial school system.
I signed up for this race in large part because it let me keep my options open as late as possible. I started training late and wasn't sure I'd be ready for an October marathon. On the Road for Education was later than most midwest marathons, October 25, and had a late early registration date. It also was less expensive than other, better-known races. All of these added up to an attractive package for an unsure trainer. They also left me uncertain; I had no idea what to expect from the race or organization.
This was a great little marathon. It is super small by the standard of most well-known marathons these days. This year, 78 men and 25 women finished, with only one runner who started not finishing. Add in the 101 half-marathoners, 35 10K runners, and 78 5K runners, and the event is still super small. That creates an intimate setting, as well as an opportunity to run a lot of the race solo.
The trimmings of the race were all good and better than I hoped:
In addition, the race hotel was excellent and happy to serve the runners from out of town.
The one risk you face with this race is one the organizers cannot control: the weather. The last weekend of October in northern Iowa can be dicey. We were lucky, with temps in the 40s F. and clouds to keep the heat down at the end of the race.
This was the 11th year for the On the Road for Education marathon. These folks have experience putting on a race, and it showed. I give it two thumbs up and recommend it highly for an intimate, personal marathon experience. If you want the experience of a "spectacle marathon", look elsewhere. This one isn't about tens of thousands of spectators or bands playing at every mile. It's about the run.
One final warning: don't expect a course built for PRs. Eleven of the miles were net climbs, with steep rises during miles 6, 7, and 21. Eleven of the miles were net drops, and the rest of the course is flat. But even that is deceptive. Miles 15-19 show as flat on the course elevation map, but they were in a nature area. They might be net neutral, but they undulate throughout. This makes for interesting running! Just don't expect a flat Iowa course and an easy PR.
I made it. My sixth marathon is now in the books. Yesterday I ran the On the Road for Education marathon in Mason City, Iowa. No two marathons have been the same for me, and this one was unusual in many ways, from how the race went to the venue itself.
The weather was dreary but almost perfect for a race: in the low 40s, cloudy, with little wind.
After feeling good with an 8:30/mile pace in a half marathon six weeks ago, I decided to try that pace for the full. After nine miles, I was right on target, though by an unusual path. The first three miles were downhill and fast, the next three were uphill and slower, and the next three were flatter and done in a steady 25:30.
My time at the halfway mark was 1:51:51. After fifteen miles, I was at 2:07:36 -- still on an 8:30/mile pace. It seemed too good. I need to look up my times from previous races, but I think that these may be my best times ever at those two points in a marathon.
On top of it all, I felt good. At Mile 15, I thought to myself, "I can do this." And I could, but not at that pace.
By mile 15, we had entered a 5-mile loop on soft trail through the Lime Creek Nature Center. My pace slowed for at least three reasons. The course elevation map says this part of the race is flat, but that must be only the net change; these miles were hilly. Add to that the overnight rain, which had softened the trails even more and created a little bit of mud for us. Slippery footing on hills means a slower pace.
The third ingredient was a dose of reality. My legs weren't quite ready to maintain the ambitious pace for a full marathon. I did manage a couple of miles at near-goal pace after leaving the nature center, but in general I had slowed down. I took a second restroom break at the 20-mile mark, which added more than a minute to my time but gave my legs a brief respite.
Then came miles 23, 24, and 25. My legs were dead, and my pace slowed further. I had another short reprieve when a half dozen of us experienced our own Des Moines moment in Mile 23. For a while in the last two miles, I chanted to myself just keep running. For some reason I had the strongest of desires not to walk even as my body had the strongest desire to stop. Somehow I kept moving. I never reached what one of my friends calls "the dead man's shuffle", but my pace had slowed dramatically.
The last 0.2 of a mile snuck up on me. I felt a brief boost of energy as I crossed the Mile 26 marker and managed to run through the finish line with a smile on my face.
It was a tough finish. In retrospect, I should have gone out with 8:45 miles for 10, 15, or 20 miles and then taken stock of what I had left. That would have been the conservative approach, the wise strategy. But the siren call of 8:30 miles and a time near my PR was too much for my dreams, or my vanity. I had to know. Now I do! Going into the arena has a way of showing us reality.
This race was unusual for me in another respect. My wife and daughters made the trip with me. This was the first time I'd had family with me since my wife accompanied me to my first. They met me along the route a half dozen times or so, and seeing them boosted my spirits every time. It was easy for them to do this, which is one of the benefits of a race in a small town. I'll have more to say about a small town race in a coming entry.
So, I survived. Even with the rest stops, I ended up with my third-best time ever. After my experience the last two years, with dicey health, on-again, off-again training, and the mental doubts that came from those two, I really could not be happier with my time. Asking for more would be unrealistic and would devalue what turned out to be a good performance.
I could feel better physically, though. A few days' rest can give me that. Heck, I already feel like a short jog!
I know that you are talking about visual design, but I am struck by how this approach applies to many other domains.
I did not write that comment. But I could have.
I started university life intending to become an architect, and my interest in visual design has remained strong through the years. I was delighted when I learned of Christopher Alexander's influence on some in the software world, because it gave me more opportunities to read and think about architectural design -- and to think about how its ideas relate to how we design software. I am quite interested in the notion that there are universal truths about design, and even if not what we can learn from designers in other disciplines.
Garr Reynolds identifies seven principles of the Zen aesthetic of harmony. Like the commenter, my thoughts turned quickly from the visual world to another domain. For me, the domain is software. How well do these principles of harmony apply to software? Several are staples of software design. Others require more thought.
(1) Embrace economy of materials and means
(3) Keep things clean and clutter-free
These are no-brainers. Programmers want to keep their code clean, and most prefer an economical style, even when using a language that requires more overhead than they would like.
(6) Think not only of yourself, but of the other (e.g., the viewer).
When we develop software, we have several kinds of others to consider. The most obvious are our users. We have disciplines, such as human-computer interaction, and development styles, such as user-centered design, focused squarely on the people who will use our programs.
We also need to think of other programmers. These are the people who will read our code. Software usually spends much more time in maintenance than in creation, so readability pays off in a huge way over time. We can help our readers by writing good documentation, an essential complement to our programs. However, the best way to help our readers is to write readable code. In this we are more like Reynolds's presenters. We need to focus on the clarity and beauty of our primary product.
Finally, let's not forget our customers and our clients, the people who pay us to write software. To me, one of the most encouraging contributions of XP was its emphasis on delivering tangible value to our customers every day.
(7) Remain humble and modest.
This is not technical advice. It is human advice. And I think it is underrated in too many contexts.
I have worked with programmers who were not humble enough. Sadly, I have been that programmer, too.
A lack of humility almost always hurts the project and the team. Reynolds is right in saying that true confidence follows from humility and modesty. Without humility, a programmer is prone to drift into arrogance, and arrogance is more dangerous than incompetence.
A programmer needs to balance humility against gumption, the hubris that empowers us to tackle problems which seem insurmountable. I have always found that humility is a great asset when I have the gumption to tackle a big problem. Humility keeps me alert to things I don't understand or might not see otherwise, and it encourages me to take care at each step.
... Now come a couple of principles that cause me to think harder.
(2) Repeat design elements.
Duplication is a bane to software developers. We long ago recognized that repetition of the same code creates so many problems for writing and modifying software that we have coined maxims such as "Don't repeat yourself" and "Say it once and only once." We even create acronyms such as DRY to get the idea across in three spare letters.
However, at another level, repetition is unavoidable. A stack is a powerful way to organize and manipulate data, so we want to use one whenever it helps. Rather than copy and paste the code, we create an abstract data type or a class and reuse the component by instantiating it.
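The stack example can be made concrete. Rather than re-implementing push-and-pop logic wherever we need it, we write the class once and instantiate it in each new context. A toy sketch (the class and variable names here are mine, chosen for illustration):

```python
class Stack:
    """A minimal last-in, first-out container, written once and reused."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

# Two unrelated uses of the same design element -- no copy and paste.
undo_history = Stack()      # an editor's undo stack
undo_history.push("insert 'hello'")

call_frames = Stack()       # an interpreter's call stack
call_frames.push("main()")
call_frames.push("helper()")
print(call_frames.pop())    # prints "helper()"
```

The repetition has not disappeared; it has moved to a level where it costs us nothing to repeat the element, because the element is defined in exactly one place.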
Software reuse of this sort is how programmers repeat design elements. Indeed, one of the most basic ideas in all of programming is the procedure, an abstraction of a repeated behavioral element. It is fundamental to all programming, and one of the contributions that computer science made as we moved away from our roots in mathematics.
In the last two decades, programmers have begun to embrace repeatable design units at higher levels. Design patterns recur across contexts, and so now we do our best to document them and share them with others. Architectures and protocols and, yes, even our languages are ways to reify recurring patterns in a way that makes using them as convenient as possible.
(4) Avoid symmetry.
Some programmers may look at this principle and say, "Huh? How can this apply? I'm not even sure what it means in the context of software."
When linguistic structures and data structures repeat, they repeat just as they are, bringing a natural symmetry to the algorithms we use and the code we write. But at the level of design patterns and architectures, things are not so simple. Christopher Alexander, the building architect who is the intellectual forefather of the software patterns community, famously said that a pattern appears a million times, but never exactly the same. The pattern is molded to fit the peculiar forces at play in each system. This seems to me a form of breaking symmetry.
But we can take the idea of avoiding symmetry farther. In the mathematical and scientific communities, there has long been a technical definition of symmetry in groups, as well as a corresponding definition of breaking symmetry in patterns. Only a few people in the software community have taken this formal step with design patterns. Chief among them are Jim Coplien and Liping Zhao. Check out their book chapter, Symmetry Breaking in Software Patterns, if you'd like to learn more.
A few years ago I was able to spend some time looking at this paper and at some of the scientific literature on patterns and symmetry breaking. Unfortunately, I have not been able to return to it since. I don't yet fully understand these ideas, but I think I understand enough to see that there is something important here. This glimmer convinces me that avoiding symmetry is perhaps an important principle for us software designers, one worthy of deeper investigation.
... This leaves us with one more principle from the Presentation Zen article:
(5) Avoid the obvious in favor of the subtle
This is the one principle out of the seven that I think does not apply to writing software. All other things being equal, we should prefer the obvious to the subtle. Doing something that isn't obvious is the single best reason to write a comment in our code. When we must do something unexpected by our readers, we must tell them what we have done and why. Subtlety is an impediment to understanding code.
Perhaps this is a way in which we who work in software differ from creative artists. Subtlety can enhance a work of art, by letting -- even requiring -- the mind to sense, explore, and discover something beyond the surface. As much art as there is in good code, code is at its core a functional document. Requiring maintenance programmers to mull over a procedure and to explore its hidden treasures only slows them down and increases the chances that they will make errors while changing it.
I love subtlety in algorithms and designs, and I think I've learned a lot from reading code that engages me in a way I've not experienced before. But there is something dangerous about code in which subtlety becomes more important than what the program does.
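A classic small example of this tension is testing whether an integer is a power of two. The bit trick is subtle and satisfying; the loop is obvious. Both are correct. (The code below is my illustration of the general point, not an example drawn from the discussion above.)

```python
def is_power_of_two_clever(n: int) -> bool:
    # Subtle: a positive power of two has exactly one bit set,
    # so n & (n - 1) clears that bit and leaves zero.
    return n > 0 and n & (n - 1) == 0

def is_power_of_two_obvious(n: int) -> bool:
    # Obvious: repeatedly halve n; a power of two never leaves
    # a remainder until it reaches 1.
    if n < 1:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1

# The two versions agree on every input we try.
for n in range(1, 33):
    assert is_power_of_two_clever(n) == is_power_of_two_obvious(n)
```

Notice that the clever version demands a comment, while the obvious one barely needs its own. That is the point: when subtlety sneaks in, we owe the reader an explanation.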
Blaine Buxton recently wrote a nice entry on the idea of devilishly clever code:
But, it got me thinking about clever and production code. In my opinion, clever is never good or wanted in production code. It's great to learn and understand clever code, though. It's a great mental workout to keep you sharp.
Maybe I am missing something subtle here; I've been accused of not seeing nuance before. This may be like the principle of avoiding symmetry, but I haven't reached the glimmer of understanding yet. Certainly, many people speak of Apple's great success with subtle design that engages and appeals to users in a way that other design companies do not. Others, though, attribute its success to creating products that are intuitive to use. To me, intuitiveness points more to obviousness and clarity than to subtlety. And besides, Apple's user experience is at the level of design Reynolds is talking about, not at the level of code.
I would love to hear examples, pro and con, of subtlety in code. I'd love to learn something new!
I went last night to see a talk by Aaron Schurman, co-founder and CEO of Phantom EFX. Phantom is a homegrown local company that makes video games. The talk told the story of their latest and most ambitious release, Darkest of Days, a first-person shooter game built around historic narratives and a time-travel hook.
Phantom got its start with casino games. They started from scratch, with no training in software development. Part of the team did have background in graphic design, which gave them a foundation to build on. In the last decade, they have become serious players in the market, with several top-selling titles.
I am not a "computer gamer" and rarely ever play the sort of games that are so popular with students these days. But as a computer scientist, I am interested in them as programs. Nearly every game these days requires artificial intelligence, both to play the game and, in character-based games, to provide realistic agents in the simulated world. My background in AI made me a natural local resource to the company when they were getting started. As a result, I have had the good fortune to be a long-time friend of the company.
Aaron's talk was like the game; it had something for almost everyone: history, creative writing, art, animation, media studies, and computer science. The CS is not just AI, of course. A game at this level of scale is a serious piece of software. The developers faced a number of computational constraints in filling a screen with a large number of realistic humans while maintaining the frame rate required for an acceptable video experience. There were also software development challenges, such as building for multiple platforms in sync and working with contractors distributed across the globe. There is a lot to be learned by conducting a retrospective of this project.
Aaron spoke a lot about the challenges they faced. His response was the sort you expect from people who succeed: Don't be dismayed. Do you think you are too small or too poor to compete with the big boys? Don't be dismayed. You can find a way, even if it means rolling your own gaming engine because the commercial alternatives are too expensive. Don't know how to do something? Don't be dismayed. You simply don't know yet. Work hard to learn. Everyone can do that.
The practical side of me is glad that we are so close to a company like this and have connections. We've recently begun exploring ways to place our students at Phantom EFX for internships. I love the idea of running an iPhone development class to port some of the company's games to that market. This is a great opportunity for the students, but also for professors!
The dreamer in me was inspired by this talk. I am always impressed when I meet people, especially former students, who have a vision to build something big. This sort of person accepts risks and works hard. The return on that investment can be huge, both monetarily and spiritually. I hope more of our students take stories like this to heart and realize that entrepreneurship offers an alternative career path when they have ideas and are willing to put their work hours toward something that they really care about.
At its bottom, this is the story of small-town Iowa guys staying in small-town Iowa and building a new tech company. Now they have Hollywood producers knocking on their doors, bidding to option their script and concept for a major motion picture. Not a bad way to make a living.
A while back -- a year? two? -- the folks at the College Board announced some changes to the way they would offer the AP exams in computer science. I think the plan was to eliminate the B exam (advanced) and redesign the A exam (basic). At the time, there was much discussion among CS educators, at conferences, on the SIGCSE mailing list, and in a few blogs. In 2008 sometime, I read a CACM article by a CS educator on the issue. Her comments were interesting enough that I made some notes in the margins and set the article aside. I also collected a few of my thoughts about the discussions I had read and heard into a text file. I would write a blog article!
But I never did.
I went looking for that text file today. I found it in a folder named recent/, but it is not recent. The last time I touched the file was Tuesday, December 9, 2008.
I guess it wasn't all that urgent after all.
Actually, this isn't all that uncommon for blog article ideas. Many come to mind, but few make it to the screen. Yet this seems different. When the original news was announced, the topic seemed so urgent to many of my close friends and colleagues, and that made it seem urgent to me. The Future of Computer Science in the High Schools was at stake. Yet I could never bring myself to write about the article.
To be honest, it is hard for me to care much about AP. I have been teaching at my university for over seventeen years, and I cannot recall a single student who asked us about AP CS credit. We simply never see it.
Computer programming courses long ago disappeared from most high schools in my state. I am willing to wager that no Iowa schools ever taught computer science qua computer science; if any did, the number was nearly zero. Even back in the early to mid-1990s when dedicated CS courses existed, they were always about learning to program, usually in Basic or Pascal. That made sense, because the best way to help high school students get ready for the first college CS course is to introduce them to programming. Whatever you think about programming as the first course, that is the way most universities work, as well as nearly every college in Iowa. Those programming courses could have been AP courses, but most were not.
Unfortunately, falling budgets, increasing demands in core high school subjects, and a lack of certified CS teachers led many schools to cut their programming courses. If students in my state see a "computer course" in high school these days, it is almost always a course on applications, usually productivity tools or web design.
Maybe I am being self-centered in finding it hard to care about the AP CS exams. We do not see students with AP CS credit or receive inquiries about its availability here. AP CS matters a lot to other people, and they are better equipped to deal with the College Board and the nature and content of the exams.
Then again, maybe I am being short-sighted. Many argue that AP CS is the face of computer science in the high schools, and for better or worse it defines what most people in the K-12 world think CS is. I am less bothered with programming as the focus of that course than many of my friends and colleagues. I'm even sympathetic to Stuart Reges's ardent defense of the current exam structure at his site to preserve it in the penumbra of the University of Washington. But I do think that the course and exam could do a better job of teaching and testing programming than it has over the last decade or so.
Should the course be more than programming, or different altogether? I am open to that, too; CS certainly is more than "just programming". Alas, I am not sure that the academic CS world can design a non-programming high school CS course that satisfies enough of the university CS departments to garner widespread adoption.
But for someone at a university like mine, and in a state like mine, all of the money and mindshare spent on AP Computer Science seems to go for naught. It may benefit the so-called top CS programs, the wealthier school districts, and the students in states where computing already has more of a presence in the high school classroom. In my context? It's a no-op.
Why did I dig a ten-month-old text file out for blogging now? There is much ado again about AP CS in light of the Georgia Department of Education announcing that AP Computer Science would no longer count towards high school graduation requirements. This has led to a wide-ranging discussion about whether CS should count as science or math (the TeachScheme! folks have a suggestion for this), the content of the course, and state curriculum standards. Ultimately, the issue comes down to two things: politics, both educational and governmental, and the finite number of hours available in the school day.
So, I will likely return to matters of greater urgency to my university and my state. Perhaps I am being short-sighted, but the simple fact is this. The AP CS curriculum has been around for a long time, and its existence has been of no help in getting my state to require or endorse high school CS courses, certify high school CS teachers, or even acknowledge the existence of computer science as a subject or discipline essential to the high school curriculum. We will continue to work on ways to introduce K-12 students to computer science and to help willing and interested schools to do more and better CS-related material. The AP CS curriculum is likely to have little or no effect on our success or failure.
If you come here to read only about computer science, software development, or teaching, then this entry probably isn't for you.
On Saturday, I attended the wedding of a family friend, the son of my closest friend from college. Some weddings inspire me, and this one did. I've been feeling a little jaded lately, and it was refreshing to see two wonderful young people, well-adjusted and good-hearted, starting a new chapter of life together.
During the minister's remarks to the bride and groom, I found myself thinking about love, and about big moments and little moments.
We often speak of one person loving another so much that he would lay down his life for her. That is a grand sort of love indeed. Many of our most romantic ideas about love come back to this kind of great personal sacrifice. It occurred to me that this is love in the big moment.
But how many of us are ever in a position where we must or even can demonstrate love in this way?
Nearly all of our chances to demonstrate love come in the nondescript moments that bathe us every day. These are not the big moments we dream of. We dream about big challenges, but the biggest challenge is to make small choices that demonstrate our love all the time. It is so easy for me to be selfish in the little desires that I act to satisfy daily. The real sacrifice is to surrender ourselves in those moments, to act in a way that puts another person, the one we love, at the front, to place her needs and wants ahead of our own.
When relationships falter, it is rarely because one person missed an opportunity to lay his life down -- literally. Much more often, it is a result of small choices we make, of small sacrifices we could have made but didn't. I think that is one of the great sources of confusion for people at the end of relationships. They may well still be willing to lay down their lives for the loved one; what more could the other person want? It's hard to recognize all those little opportunities to sacrifice as they come by. How important they are.
The minister closed his remarks Saturday with a wish for the new couple that, at the end of long, happy lives together, they will be able to say, "I would choose you again." I make this wish for them, too. But I think one of the best ways to prepare for that distant moment is to wake up each day, say "I choose you" in the present tense, and then strive to live the little moments of that day well.
I've been working on a jumble of administrative duties all week long, with an eye toward the weekend. While cleaning up some old files, I ran across three items that struck me as somehow related, at least in the context of the last few days.
Listen to Your Conscience
Here's a great quote from an old article by John Gruber:
If you think your users would be turned off by an accurate description of something, that doesn't mean you should do it without telling them. It means you shouldn't be doing whatever it is you don't want to tell them about.
This advice applies to so many different circumstances. It's not bulletproof, but it's worth being the first line of thought whenever your conscience starts to gnaw at you.
Listen With Your Heart
And here's a passage on writing from the great Joni Mitchell:
You could write a song about some kind of emotional problem you are having, but it would not be a good song, in my eyes, until it went through a period of sensitivity to a moment of clarity. Without that moment of clarity to contribute to the song, it's just complaining.
This captures quite nicely one of the difficulties I have with blogging about being a department head: I rarely seem to have that moment of clarity. And I need them, even if I don't intend to blog about the experience.
Somebody Must Be Listening
One piece of nice news... I recently received a message saying that Knowing and Doing has been included in a list of the top 100 blogs by professors on an on-line learning web site. There are a lot of great blogs on that list, and it's an honor to be included among them. I follow a dozen or so of those blogs closely. One that some of my readers might not be familiar with is Marginal Revolution, which looks at the world through the lens of an economist.
If I could add only one blog to that list, right now it would be The Endeavour, John Cook's blog on software, math, and science. I learn a lot from the connections he makes.
In any case, it's good to know that readers find some measure of value here, too. I'll keep watching for the moments of clarity about CS, software development, teaching, running, and life that signal a worthwhile blog entry.
After my long run yesterday, I was both sorer and more tired ('tireder'?) than after last Sunday's big week and fast long run. Why? I cut my mileage last week from 48 miles to 38, and my long run from 22 miles to 14. I pushed hard only during Wednesday's track workout. Shouldn't last week have felt easy, and shouldn't I be feeling relatively rested after an easy long run yesterday?
No, I shouldn't. The expectation that I should is a mental illusion that running long ago taught me was an impostor. It's hard to predict how I will feel on any day, especially during training, but the best predictor isn't what I did this week, but last; not today, but yesterday.
Intellectually, this should not surprise us. The whole reason we train today is to be better -- faster, stronger, more durable -- tomorrow. My reading of the running literature says that it takes seven to ten days for the body to integrate the effects of a specific workout. It makes sense that the workout can be affecting our body in all sorts of ways during that period.
This is a good example of how running teaches us a lesson that is true in all parts of life:
We are what and who we are today because of what we did yesterday.
This is true of athletic training. It is true of learning and practice more generally. What we practice is what we become.
More remarkable than that this is true in my running is that I can know and write about a habit of mind as an intellectual idea without making an immediate connection to my running. I often find in writing this blog that I come back around to the same ideas, sometimes in a slightly different form and sometimes in much the same form as before. My mind seems to need that repetition before it can internalize these truths as universal.
When I say that I am living with yesterday, I am not saying that I can live anywhere but in this moment. That is all I have, really. But it is wise to be mindful that tomorrow will find me a product of what I do today.
Once again, quick hits, for different values of x and different values of day.
Blog Title of the Day
I love the title of this one:
If you're too old to remember DOS, this may not mean much to you. Ironically, the article is about a vestige of old Mac OS that is disappearing. It's an interesting read, even for non-Mac guys, as an example of software evolution.
Tweet of the Day
The simplest advice is often the best:
i've said it before, @jbogard : never take software advice from a bug tracking system salesman
Duct tape that.
Alfred Thompson says that we should beware boring the smart kids in our classes, as part of a wider discussion about losing potential CS majors because of what they experience in CS1. I agree with many people that we need to work hard to bore neither our brightest students nor our average students. And it is hard work, indeed. It's hard enough sometimes even when we have only one of these groups of students in class, say, when we teach an honors section.
However, what struck me most as I read through several blog entries and comments in this conversation was this:
It is sad that we have created an educational system in which it's possible to think that this is a legitimate choice: risk losing great students OR risk losing average students.
We have created a system of schools and universities that are organized more for administrative convenience and high throughput than for learning. Actually, our system dates to a time when schools and classes were smaller, and it just doesn't scale very well as either grows.
Our regimented curricula and university systems are the real problem, not the teachers or the students.
Reader and occasional writer that I am, Michael Nielsen's Six Rules for Rewriting seemed familiar in an instant. I recognize their results in good writing, and even when I don't practice them successfully in my own writing I know they would often make it better.
Occasional programmer that I am, they immediately had me thinking... How well do they apply to refactoring? Programming is writing, and refactoring is one of our common forms of rewriting... So let's see.
Let's acknowledge up front that a writer's rewriting is not identical to a programmer's refactoring. First of all, the writer does not have automated tests to help her ensure that the rewrite doesn't break anything. It's not clear to me exactly what not breaking anything means for a writer, though I have a vague sense that it is meaningful for most writing.
Also, the term "refactoring" does not refer to any old rewrite of a code base. It has a technical meaning: to modify code without changing its essential functionality. There are rewrites of a code base that are not refactoring. I think that's true of writing in general, though, and I also think that Nielsen is clearly talking about rewrites that do not change the essential content or purpose of a text. His rules are about how to say the same things more effectively. That seems close enough to our technical sense of refactoring to make this exercise worth an effort.
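To make the technical sense concrete, here is a minimal sketch of a behavior-preserving refactoring -- an extract-function step -- in Python. The function and variable names are my own invention for illustration; the point is only that the program's shape changes while what it computes does not:

```python
# Before: one function mixes parsing input lines with computing a total.
def total_price(lines):
    total = 0.0
    for line in lines:
        name, qty, price = line.split(",")
        total += int(qty) * float(price)
    return total

# After: the same behavior, with the parsing extracted into a helper.
# A test suite that passed before this change should still pass after it.
def line_cost(line):
    name, qty, price = line.split(",")
    return int(qty) * float(price)

def total_price_refactored(lines):
    return sum(line_cost(line) for line in lines)
```

Both versions return the same answer for the same input, which is exactly what makes the second a refactoring of the first rather than merely a rewrite.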
Every sentence should grab the reader and propel them forward.
Can we say that every line of code should grab the reader and propel her forward?! I certainly prefer to read programs in which every statement or expression tells me something important about what the program is and does. Some programming languages make this harder to do, with boilerplate and often more noise than signal.
Perhaps we could say that every line of code should propel the program forward, not get in the way of its functionality? This says more about the conciseness with which the programmer writes, and fits the spirit of Nielsen's rule nicely.
Every paragraph should contain a striking idea, originally expressed.
Can we say that every function or class should contain a striking idea, originally expressed? Functions and classes that do not contain one usually get in the reader's way. In programming, though, we often write "helpers", auxiliary functions or classes that assist another in expressing an essential, even striking, idea. The best helpers capture an idea of deep value, but it may be the nature of decomposition that we sometimes create ones that are striking only in the context of the larger system.
The most significant ideas should be distilled into the most potent sentences possible.
Yes! The most significant ideas in our programs should be distilled into the most potent code possible: expressions, statements, functions, classes, whatever the abstractions our language and style provide.
Use the strongest appropriate verb.
Of course. Names matter. Use the strongest, most clearly named primitives and library functions possible. When we create new functions, give them strong, clear names. This rule applies to our nouns, too. Our variables and classes should carry strong names that clearly name their concept. No more "manager" or "process" nouns. They avoid naming the concept. What do those objects do?
This rule also applies more broadly to coding style. It seems to me that Tell, Don't Ask is about strength in our function calls.
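A small sketch of the contrast, with hypothetical names (`Account`, `withdraw`) chosen only for illustration. The first style asks the object about its state and decides on the caller's side; the second tells the object what to do, behind a strong verb:

```python
class AskedAccount:
    """An object whose callers interrogate its state and decide for it."""
    def __init__(self, balance):
        self.balance = balance

def pay_from(account, amount):
    # "Ask" style: the decision logic lives outside the object.
    if account.balance >= amount:
        account.balance -= amount
        return True
    return False

class Account:
    """An object that owns its own decisions."""
    def __init__(self, balance):
        self._balance = balance

    def withdraw(self, amount):
        # "Tell" style: the strong verb names the action, and the
        # object enforces its own invariant.
        if self._balance < amount:
            return False
        self._balance -= amount
        return True
```

The second version gives the behavior a name that carries its meaning, and every caller benefits from the invariant being enforced in one place.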
Beware of nominalization.
In code, this guideline prescribes a straightforward idea: Don't make a class when a function will do. You Aren't Gonna Need It.
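As a minimal sketch of the idea (the names here are made up for the example): a class with no state worth keeping is a nominalization, and a plain function says the same thing more directly:

```python
# Nominalized: a class invented to hold one behavior and no real state.
class Greeter:
    def __init__(self, greeting):
        self.greeting = greeting

    def greet(self, name):
        return f"{self.greeting}, {name}!"

# De-nominalized: a function will do until we actually need more.
def greet(name, greeting="Hello"):
    return f"{greeting}, {name}!"
```

If a genuine need for state or polymorphism shows up later, we can refactor the function into a class then; until then, the simpler form reads better.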
None of the above rules should be consciously applied while drafting material.
Anyone who writes a lot knows how paralyzing it can be to worry about writing good prose before getting words down onto paper, or into an emacs buffer. Often we don't know what to write until we write it; why try to write that something perfect before we know what it is?
This rule fits nicely with most lightweight approaches to programming. I even encourage novice programmers to write code this way, much to the chagrin of my more engineering-oriented colleagues. Don't be paralyzed by the blank screen. Write freely. Make something work, anything on the path to a solution, and only then worry about making it right and fast. Do the simplest thing that will work. Only after your code works do you rewrite to make it better.
Not all rewriting is refactoring, but all refactoring is rewriting. Write. Pass the test. Refactor.
Many people find that refactoring provides the most valuable use of design patterns, as a target toward which one moves the code. This is perhaps a more important use of patterns than initial design, at which time many of us tend to overdesign our programs. Joshua Kerievsky's Refactoring to Patterns book shows programmers how to do this safely and reliably. I wonder if there is any analogue to this book in the writing world, or if there even could be such a book?
I once wrote a post on writing in an agile style, and rewriting played a key role in that idea. Some authors like rewriting more than writing, and I think you can say the same thing of many, many programmers. Refactoring brings a different kind of joy, at getting something right that was before almost right -- which is, of course, another way of saying not yet right.
I recall once talking with a novelist over lunch about tools for writers. Even the most humble word processor has done so much to change how authors write and rewrite. One of the comments on Nielsen's entry asks whether new tools for writing have changed the way writers think. We might also ask whether new tools -- the ability to edit and rewrite so much more easily and with so much less technical effort -- have changed the product created by most writers. If not, could they?
New tools also change how we rewrite code. The refactoring browser has escaped the confines of the Smalltalk image and now graces IDEs for Java, C++, and C# programmers; indeed, refactoring tools exist for many languages these days. Is that good or bad? Many of my colleagues lament that the ease of rewriting has led to an endemic sloppiness, to a rash of programming in which students keep making seemingly random changes to their code until something compiles. Back in the good old days, we had to think hard about our code before we carved it into clay tablets... It seems clear to me that making rewriting and refactoring easier is a huge win, even as it changes how we need to teach and practice writing.
In retrospect, a lot of Nielsen's rules generalize to dicta we programmers will accept eagerly. Eliminate boilerplate. Write concise, focused code. Use strong, direct, and clear language. Certainly when we abstract the tasks to a certain level, writing and rewriting really are much the same in text and code.
I just finished my biggest week of this marathon training season (48 miles) and the longest long run (22 miles). I feel good. My time this morning surprised me, about 10 minutes faster than I had planned. The week went well, and I'm ready to taper.
I'm on the road in Muncie, Indiana, this weekend, which means I did my 22 miles on the Cardinal Greenway. It also means that I face eight hours in the car driving home today. That may be a bigger challenge for my legs than the run itself!