May 30, 2009 11:15 PM

How To Be Invincible

Everyone is trying to accomplish something big,
not realizing that life is made up of little things.
-- Frank A. Clark

Instead of trying to be the best, simply do your best.

Trying to be the best can turn into an ego trap: "I am better than you." In fact, the goal of being the best is often driven by ego. If it doesn't work out, this goal can become a source of finding fault and tearing oneself down. "I am not good enough." I should probably say "when", rather than "if". When your goal is to be the best, there always seems to be someone out there who does some task better. The result is like a cruel joke: trying to be the best may make you feel like you are never good enough.

In a more prosaic sense, trying to be the best can provide a convenient excuse for being mediocre. When you realize that you'll never be as good as a particular someone, it's easy to say, "Well, why bother trying to be the best? I can spend my time doing something else." This is a big problem when we decide to compare ourselves to the best of the best -- LeBron James, Haile Gebreselassie, or Ward Cunningham. Who among us can measure up to those masters? But it's also a problem when we compare ourselves to that one person in the office who seems to get and do everything right. Another cruel joke: trying to be the best ultimately gives us an excuse not to try to get better.

Doing your best is something that you can do any time or any place. You can succeed, no matter who else is involved. As time goes by, you are likely going to get better, as you develop your instincts. This means that every time you do your best you'll be in a different state, which adds a freshness to every new task you take on. Even more, I think that there is something about doing our best that causes us to want to get better; we are energized by the moment and realize that what we are doing now isn't the best we could do.

I've never met LeBron James or Haile Gebreselassie, but I've had the good fortune to meet and work with Ward Cunningham. He is a very bright guy, but he seems mostly to be a person who cares about other people and who has a strong drive to do interesting work -- and to get better. It's good to see that the folks we consider the best are... human. I've met enough runners, programmers, computer scientists, and chess players who are a lot better than I, and most of them are simply trying to do their best. That's how they got to be so good.

Some of you may say this is a distinction without a difference, but I have found that the subtle change in mindset that occurs when I shift my sights from trying to be the best to trying to do my best can have a huge effect on my attitude and my happiness. That is worth a lot. Again, though, there's more. The change in mindset also affects how I approach my work, and ultimately my effectiveness. Perhaps that's the final lesson, not a cruel joke at all: Doing your best is a better path to being better -- and maybe even the best -- than trying to be the best.

(This entry is a riff on a passage from David Allen's Ready for Anything, from which I take the entry's title. Allen's approach to getting things done really does sync well with agile approaches to software development.)


Posted by Eugene Wallingford | Permalink | Categories: General

May 28, 2009 10:02 PM

Developing Instinct

One of the challenges every beginner faces is learning to make the subtle judgments their work requires. How much time will it take for us to implement this story? Should I create a new kind of object here? Estimation and other judgments are a challenge because the beginner lacks the "instinct" for making them, and the environment often does not provide enough data to make a clear-cut decision.

I've been living with such a beginner's mind this week on my morning runs. Tuesday morning I started with a feeling of torpor and was sure I'd end with a slow time. When I finished, I was surprised to have run an average time. On Wednesday morning, I felt good yet came in with one of my slowest times for the distance ever. This morning, my legs were stiff, making steps seem a chore. I finished in one of my better times at this distance since working my mileage back up.

My inaccurate judgments flow out of bad instincts. Sometimes, legs feel slow and steps a challenge because I am pushing. Sometimes, I run with ease because I'm not running very hard at all!

At this stage in my running, bad instincts are not a major problem. I'm mostly just trying to run enough miles to build my aerobic base. Guessing my pace wrong has little tangible effect. It's mostly just frustrating not to know. Occasionally, though, the result is as bad as the judgment. Last week, I ran too fast on Wednesday after running faster than planned on Tuesday. I ended up sick for the rest of the week and missed out on 8-10 miles I need to build my base. Other times, the result goes the other way, as when I turned in a best-case scenario half-marathon in Indianapolis. Who knew? Certainly not me.

So, inaccurate instincts can give good or bad results. The common factor is unpredictability. That may be okay when running, or not; in any case, it can be a part of the continuous change I seek. But unpredictability in process is not so okay when I am developing software. Continuous learning is still good, but being wrong can wreak havoc on a timeline, and it can cause problems for your customer.

Bad instincts when estimating my pace weren't a problem two years ago, though they were in my deeper past. When I started running, I often felt like an outsider. Runners knew things that I didn't, which made me feel like a pretender. They had instincts about training, eating, racing, and resting that I lacked. But over time I began to feel like I knew more, and soon -- imperceptibly -- I began to feel like a runner after all. A lot -- all? -- of what we call "instinct" is developed, not inborn. Practice, repetition, time -- they added up to my instincts as a runner.

Time can also erode instinct. A lack of practice, a lack of repetition, and now I am back to where I was several years ago, instinct-wise. This is, I think, a big part of what makes learning to run again uncomfortable, much as beginners are uncomfortable learning the first time.

One of the things I like about agile approaches to software development is their emphasis on the conscious attention to practice. They encourage us to reflect about our practice and look for ways to improve that are supported by experience. The practices we focus on help us to develop good instincts: how much time it will take for us to implement a story, when to write -- and not write -- tests, how far to refactor a system to prepare for the next story. Developing accurate and effective instinct is one way we get better, and that is more important than being agile.

The traditional software engineering community thinks about this challenge, too. Watts Humphrey created the Personal Software Process to help developers get a better sense of how they use their time and to use this sense to get better. But, typically, the result feels so heavy, so onerous on the developers it aims to help, that few people are likely to stick with it when they get into the trenches with their code.

An aside: This reminds me of conversations I had with students in my AI courses back in the day. I asked them to read Alan Turing's classic Computing Machinery and Intelligence, and in class we discussed the Turing Test and the many objections Turing rebuts. Many students clung to the notion that a computer program could never exhibit human-like intelligence because programs lack the "gut feelings" -- the instincts -- that humans have. Many of them played right into Turing's rebuttal yet remained firm; they felt deeply that to be human was different. Now, I am not at ease with scientific materialism's claim that humans are purely deterministic beings, but the scientist in me tells me to strive for natural explanations of as much of every phenomenon as possible. Why couldn't a program develop a "gut feeling"? To the extent that at least some of our instincts are learned responses, developed through repetition and time, why couldn't a program learn the same instincts? I had fun playing devil's advocate, as I always do, even when I was certain that I was making little progress in opening some students' minds.

In your work and in your play, be aware of the role that practice, repetition, and time play in developing your instincts. Do not despair that you don't have good instincts. Work to develop them. The word missing from your thought is "yet". A little attention to your work, and a lot of practice, will go a long way. Once you have good instincts, cherish them. They give us comfort and confidence. They make us feel powerful. But don't settle. The same attention and practice will help you get better, to grow as a developer or runner or whatever your task.

As for my running, I am certainly glad to be getting stronger and to be able to run faster than I expect. Still, I look forward to the feeling of control I have when my instincts are more reliable. Unpredictable effort leads to unpredictable days.


Posted by Eugene Wallingford | Permalink | Categories: Running, Software Development, Teaching and Learning

May 26, 2009 7:18 PM

The Why of Lambda

The Lambda Chair

For the programming languages geek who has everything, try the Lambda Chair. The company took the time to name it right, then sadly took the promotional photo from the wrong side. Still, an attractive complement to any office -- and only $2,000. Perfect for my university budget!

I found out about this chair on the PLT mailing list today. The initial frivolity led to an interesting excursion into history when someone asked:

Does anyone know if Church had anything in mind for lambda to stand for, or was it just an arbitrary choice?

In response, Matthias Felleisen shared a story that is similar to one I'd heard in the languages community before. At the beginning of the last century, mathematicians used the hat to indicate class abstractions, such as î: i is prime. Church used a primed version of the hat to indicate function abstraction, because a function is a special kind of set. Church's secretary read this notation as λ, and Church let it stand.

Later in the thread, Dave Herman offered pointers to a couple of technical references that shed further light on the origin of lambda. On Page 7 of History of Lambda-Calculus and Combinatory Logic, Cardone and Hindley cite Church himself:

(By the way, why did Church choose the notation "λ"? In [Church, 1964, §2] he stated clearly that it came from the notation "î" used for class-abstraction by Whitehead and Russell, by first modifying "î" to "∧i" to distinguish function-abstraction from class-abstraction, and then changing "∧" to "λ" for ease of printing. This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and "λ" just happened to be chosen.)

The two internal references are to an unpublished letter from Church to Harald Dickson, dated July 7, 1964, and to J. B. Rosser's 1984 paper Highlights of the History of the Lambda Calculus from the Annals of the History of Computing.

Herman also pointed to Page 182 of The Impact of the Lambda Calculus in Logic and Computer Science:

We end this introduction by telling what seems to be the story how the letter 'λ' was chosen to denote function abstraction. In [100] Principia Mathematica the notation for the function f with f(i) = 2i + 1 is 2 î + 1. Church originally intended to use the notation î.2i + 1. The typesetter could not position the hat on top of the i and placed it in front of it, resulting in ∧i.2i + 1. Then another typesetter changed it into λi.2i + 1.

(I changed the variable x to an i in the preceding paragraph, because, much like the alleged trendsetting typesetter, I don't know how to position the circumflex on top of an x in HTML!)

Even in technical disciplines, history can be an imprecise endeavor. Still, it's fun when we go from anecdote to a more reliable source. I don't know that I'll ever need to tell the story of the lambda, but I like knowing it anyway.


Posted by Eugene Wallingford | Permalink | Categories: Computing

May 25, 2009 9:48 PM

Is There a Statute of Limitations for Blogging?

I had a few free minutes tonight with no big project at the front of my mind, so I decided to clean up my blog-ideas folder. Maybe one of the ideas would grab my imagination and I would write. But this is what grabbed my attention instead, a line in my main ideas.txt file:

(leftovers from last year's SIGCSE -- do them!?)

You have heard of code smells. This is a blog smell.

I have two entries still in the hopper from SIGCSE 2008 listed in my conference report table of contents: "Rediscovering the Passion and Beauty", on ways to share our passion and awe with others, and "Recreating the Passion and Beauty", because maybe it's just not that much fun any more. Both come from a panel discussion on the afternoon of Day 2, and both still seem worth writing, even after fourteen months.

The question in the note to myself in the ideas file lets a little reality under the curtain... Will I ever write them? As conference report, they probably don't offer much, and the second entry has been preempted a bit by Eric Roberts giving a similar talk in other venues, and posting his slides on the web. But timeliness of the conference report isn't the only reason I write; the primary reason is to think about the ideas. The writing both creates the thinking and records it for later consideration. In this regard, they still hold my interest. Not all old ideas do.

When I first started this blog, I never realized how much my blogging would exhibit the phenomenon I call the Stack of Ideas. Sometimes an entry is a planned work, but more often I write what needs to be written based on where I am in my work. Hot ideas will push ideas that recently seemed hot onto the back burner. Going to a conference only makes the problem worse. The sessions follow one after another, and each one tends to stir me up so much as to push even the previous session way back in my mind. I have subfolders for hot ideas and merely recent ideas, and I do pull topics from them -- "hot" serving up ideas more reliably than "recent".

This is one risk of having more ideas than time. Of course, ideas are like most everything else: a lot of them are bunk. I suspect that many of my ideas are bunk and that the Stack of Ideas does me and my readers the Darwinian service of pushing the worst down, down, down out of consciousness. When I look back at most of the ideas that haven't made the cut yet, they feel stale. Are they just old, or were they not good enough? It's hard to say. Like other Darwinian processes, this one probably isn't optimal. Occasionally a good idea may lose out only because it wasn't fit for the particular mental environment in which it found itself. But all in all, the process seems to get things mostly right. I just hope the good ideas come back around sometime later. I think the best ones do.

This is one of the reasons that academics can benefit from keeping a blog. A lot of ideas are bunk. Maybe the ones that don't get written shouldn't be written. For the ideas that make the cut, writing this sort of short essay is a great way to think them through, make them come to life in words that anyone can read, and then let them loose into the world. Blog readers are great reviewers, and they help with the good and bad ideas in equal measure. What a wonderful opportunity blogging offers: an anytime, anyplace community of ideas. Most of us had little access to such a community even ten years ago.

I must say this, though. Blogging is of more value to me than just as a technical device. It can also offer an ego boost. There is nothing quite like having someone I met several years ago at SIGCSE or OOPSLA tell me how much they enjoy reading my blog. Or to have someone I've never met come up to me and say that they stumbled across my blog and find it useful. Or to receive e-mail saying, "I am a regular reader and thought you might enjoy this..." What a joy!

Will those old SIGCSE 2008 entries ever see the light of day? I think so, but the Stack of Ideas will have its say.


Posted by Eugene Wallingford | Permalink | Categories: General

May 22, 2009 4:05 PM

Parsing Expression Grammars in the Compiler Course

Yesterday, a student told me about the Ruby gem Treetop, a DSL for writing language grammars. This language uses parsing expression grammars, which turn our usual idea of grammar inside out. Most compiler theory is built atop the context-free and regular grammars of Chomsky. These grammars are generative: they describe rules that allow us to create strings which are part of the language. Parsing expression grammars describe rules that allow us to recognize strings which are part of the language.

This new kind of grammar offers a lot of advantages for working with programming languages, such as unifying lexical and syntactic descriptions and supporting the construction of linear-time parsers. I remember seeing Bryan Ford talk about packrat parsing at ICFP 2002, but at that point I wasn't thinking as much about language grammars and so didn't pay close attention to the type of grammar that underlay his parsing ideas.

While generative grammars are a fundamental part of computing theory, they don't map directly onto the primary task for which many software people use them: building scanners and parsers for programming languages. Our programs recognize strings, not generate them. So we have developed mechanisms for building and even generating scanners and parsers, given grammars that we have written under specific constraints and then massaged to fit our programming mechanisms. Sometimes the modified grammars aren't as straightforward as we might like. This can be a problem for anyone who comes to the grammar later, as well as a problem for the creators of the grammar when they want to change it in response to changes requested by users.

A recognition-based grammar matches our goals as compiler writers more closely, which could be a nice advantage. A parsing expression grammar is, in effect, an explicit specification of the recognizer we write against it.
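To make that concrete, here is a minimal sketch of how PEG rules map onto a hand-written recognizer. The toy grammar, the class name, and the methods are my own invention, not anything from Ford's papers or from Treetop; the point is only that each rule becomes a method, and ordered choice becomes "try the first alternative, backtrack, then try the next".

# A toy PEG, written directly as a recursive-descent recognizer in Ruby.
# Each rule is a method that either consumes input and returns true, or
# restores the position and returns false.
#
#   expr   <- number '+' expr / number
#   number <- [0-9]+
class TinyRecognizer
  def initialize(input)
    @input = input
    @pos = 0
  end

  def recognize?
    expr && @pos == @input.length
  end

  private

  # expr <- number '+' expr / number
  def expr
    start = @pos
    return true if number && literal('+') && expr
    @pos = start                 # ordered choice: backtrack, try the next alternative
    number
  end

  # number <- [0-9]+
  def number
    start = @pos
    @pos += 1 while @pos < @input.length && @input[@pos] =~ /[0-9]/
    @pos > start
  end

  def literal(str)
    return false unless @input[@pos, str.length] == str
    @pos += str.length
    true
  end
end

puts TinyRecognizer.new("1+23+4").recognize?   # => true
puts TinyRecognizer.new("1++4").recognize?     # => false

A packrat parser adds memoization of rule results at each input position to this basic scheme, which is how Ford gets linear-time parsing despite the backtracking.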

For those of us who teach compiler courses, something like a parsing expression grammar raises another question. Oftentimes, we hope that the compiler course can do double duty: teach students how to build a compiler, and help them to understand the theory, history, and big issues of language processors. I think of this as a battle between two forces, "learning to" versus "learning about", a manifestation of epistemology's distinction between "knowing that" and "knowing how".

Using recognition-based grammars as the foundation for a compiler course introduces a trade-off: students may be empowered more quickly to create language grammars and parsers but perhaps not learn as much about the standard terminology and techniques of the discipline. These standard ways are, of course, our historical ways of doing things. There is much value in learning history, but at what point do we take the step forward to techniques that are more practical than reminiscent?

This is a choice that we have to make all the time in a compiler course: top-down versus bottom-up parsing, table-driven parsers versus recursive-descent parsers, writing parsers by hand versus using parser generators... As I've discussed here before, I still ask students to write their parser by hand because I think the experience of writing this code teaches them about more than just compilers.

Now that I have been re-introduced to this notion of recognition-based grammars, I'm wondering whether they might help me to balance some of the forces at play more satisfactorily. Students would have the experience of writing a non-trivial parser by hand, but against a grammar that is more transparent and easier to work with. I will play with parsing expression grammars a bit in the next year or so and consider making a change the next time I teach the course. (If you have taught a compiler course using this approach, or know someone who has, please let me know.)

Going this way would not commit me to having students write their parsers by hand. The link that started this thread of thought points to a tool for automating the manipulation of parsing expression grammars. Whatever I do, I'll add that tool to the list of tools I share with students.

Oh, and a little Ruby Love to close. Take a look at Treetop. Its syntax is beautiful. A Treetop grammar reads cleanly, crisply -- and is executable Ruby code. This is the sort of beauty that Ruby allows, even encourages, and is one of the reasons I remain enamored of the language.
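For a taste, here is roughly what a small Treetop grammar looks like. This is my own toy example rather than one from the Treetop documentation, so treat the file name and rule names as illustrative:

# arithmetic.treetop -- a toy grammar; '/' is PEG ordered choice
grammar Arithmetic
  rule additive
    multitive '+' additive / multitive
  end

  rule multitive
    primary '*' multitive / primary
  end

  rule primary
    '(' additive ')' / number
  end

  rule number
    [0-9]+
  end
end

Loading it from plain Ruby is only a few lines, assuming the usual Treetop.load convention that generates an ArithmeticParser class from the grammar:

require 'treetop'

Treetop.load 'arithmetic'            # compiles arithmetic.treetop into a parser class
parser = ArithmeticParser.new
result = parser.parse('2*(3+4)')     # returns a syntax tree node, or nil on failure
puts result ? 'parsed' : 'no parse'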


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 21, 2009 5:58 AM

The Body Adapts Slowly, Especially Now

I mentioned last time that I successfully completed five AM runs last week. That is my standard schedule, except when I stick in a sixth morning run during the heaviest stage of marathon training. But after low mileage for so many months, five runs in a week feels like an accomplishment.

To make the accomplishment even greater, I added a mid-week 5-miler. What's the big deal? My standard five-day schedule is 5-8-5-8 on Tuesday through Friday, with speed workouts on Friday and maybe Wednesday, and ≥ 12 miles on Sunday. But after those down months, I started back in February by doing mostly 3s, with an occasional 5 thrown in for an afternoon or evening run just before running the 500 Festival half.

I had planned to add a second 5-miler this week. I figured I'd do them on Tuesday and Thursday, because I have important meetings on Wednesdays and Fridays for the next couple of weeks. It seemed like a good idea to take it easier on those days, so that if the extra miles led to a down day, I'd be down on days that mattered less.

Indeed, it was a very good idea. But Tuesday's run felt so good... Impatience rose up. I ran a second 5-miler on Wednesday and again felt good, so I ran a little faster. Afterwards, I still felt good -- until about 8:00 AM. (Remember, I start early.) I felt progressively worse as time passed, and in the end the run killed my day.

You would think that, after health problems had affected my running for over a year, I would be more patient. But it's hard not to get excited, and I gave in to temptation. Impatience exacted its price.

I'll do better from now on, at least until the next time I succumb. I will script my runs for the next few weeks -- and stick to the plan. Another bright line "Thou shalt not..." is what I need right now.

All that said, I am hopeful that with patience I'll be able to get back to my standard schedule within a couple of months. If I do, I am entertaining the idea of taking on a race in the fall.


Posted by Eugene Wallingford | Permalink | Categories: Running

May 20, 2009 4:26 PM

Bright Lines in Learning and Doing

Sometimes it pays to keep reading. Last time, I commented on breaking rules and mentioned a thread on the XP mailing list. I figured that I had seen all I needed there and was on the verge of skipping the rest. Then I saw a message from Laurent Bossavit and decided to read. I'm not surprised to learn something from Laurent; I have learned from him before.

Laurent's note introduced me to the legal term bright line. In the law, a bright-line rule is...

... a clearly defined rule or standard, composed of objective factors, which leaves little or no room for varying interpretation. The purpose of a bright-line rule is to produce predictable and consistent results in its application.

As Laurent says, "Bright lines are important in situations where temptations are strong and the slope particularly steep; a well-known example is alcoholics' high vulnerability to even small exceptions." Test-driven development, or even writing tests soon after code and thus maintaining a complete suite of automated tests, requires a bright line for many developers. It's too easy to slide back into old habits, which for most developers are much older and stronger. Staying on the right side of the line may be the only practical way to Live Right.

This provides a useful name for what teachers often do in class: create bright lines for students. When students are first learning a new concept, they need to develop a new habit. A bright-line rule -- "Thou shalt always write a test first." or "Thou shalt write no line of code outside of a pair." -- removes from the students' minds the need to make a judgment that they are not yet prepared to make: "Is this case an exception?" While learning, it's often better to play Three Bears and overdo it. This gives your mind a chance to develop good judgment through experience.

(For some reason, I am reminded of one way that I used to learn to play a new chess opening. I'd play a bazillion games of speed chess using it. This didn't train my mind to think deeply about the positions the opening created, but it gave me a bazillion repetitions. I soon learned a lot of patterns that allowed me to dismiss many bad alternatives and focus my attention on the more interesting positions.)

I often ask students to start with a bright line, and only later take on the challenge of a balancing test. It's better to evolve toward such complexity, not try to start there.

The psychological benefits of a bright-line test are not limited to beginners. Just as alcoholics have to hold a hard line and consider every choice consciously every day, some of us need a good "Thou shalt..." or "Thou shalt not..." in certain cases. As much as I like to run, I sometimes have to force myself out of bed at 5:00 AM or earlier to do my morning work-out. Why not just skip one? I am a creature of habit, and skipping even one day makes it even harder to get up the next, and the difficulty grows until I have a new habit.

(This has been one of the most challenging parts of trying to get back up to my old mileage after several extended breaks last year. I am proud finally to have done all five of my morning runs last week -- no days off, no PM make-ups. A new habit is in formation.)

If you know you have a particular weakness, draw a bright line for yourself. There is no shame in that; indeed, I'd say that it shows professional maturity to recognize the need and address it. If you need a bright line for everything, that may be a problem...

Sometimes, I adopt a bright line for myself because I want everyone on the team to follow a practice. I may feel comfortable exercising judgment in the gray area but not feel the rest of the team is ready. So we all play by the rules rather than discuss every possible judgment call. As the team develops, we can begin having those discussions. This is similar to how I teach many practices.

This may sound too controlling to you, and occasionally a student will say as much. But nearly everyone in class benefits from taking the more patient road to expertise. Again, from Laurent:

Rules which are more ambiguous and subtle leave more room for various fudge factors, and that of course can turn into an encouragement to fudge, the top of a slippery slope.

Once learners have formed their judgment, they are ready to balance forces. Until then, most are more likely to backslide out of habit than to make an appropriate choice to break the rule. And time spent arguing every case before they are ready is time not spent learning.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 18, 2009 8:58 PM

Practice and Dogma in Testing

Shh.

I have a secret.

When I am writing a program, I will on occasion add a new piece of functionality without writing a test.

I am whispering because I have seen the reaction that Kent Beck received, on the XP mailing list and on a number of blogs, to his recent article, To Test or Not to Test? That's a Good Question. In this short piece, Kent describes his current thinking that, like golf, software development may have a "long game" and a "short game", which call for different tools and especially different mentalities. One of the differences might be whether one is willing to trade automated testing for some other value, such as delivering a piece of software sooner.

Note that Kent did not say that in the long game he chooses not to test his code; he simply tested manually. He also didn't say that he plans never to write the automated tests he needs later; he said he would write them later, either when he has more time or, perhaps, when he has learned enough to turn 8 hours of writing a test into something much shorter.

Many people's public reactions to Kent's admission have been along these lines: "We trust you to make this decision, Kent, but we don't trust everyone else. And by saying this is okay, you will contribute to the delinquency of many programmers." Now you know why I need to whisper... I am certainly not in the handful of programmers so good that these folks would be willing to excuse my apostasy. Kent himself is taking a lot of abuse for it.

I have to admit that Kent's argument doesn't seem that big a deal to me. I may not agree with everything he says in his article, but at its core he is claiming only that there is a particular context in which programmers might choose to use their judgment and not write tests before or immediately after writing some code. Shocking: A programmer should use his or her judgment in the course of acting professionally. Where is the surprise?

One of the things I like about Kent's piece is that he helps us to think about when it might be useful to break a particular rule. I know that I'll be breaking rules occasionally, but I often worry that I am surrendering to laziness or sloppiness. Kent is describing a candidate pattern: In this context, with these goals, you are justified in breaking this rule consciously. We are balancing forces, as we do all the time when building anything. We might disagree with the pattern he proposes, but I don't understand why developers would attack the very notion of making a trade-off that results in breaking a rule.

In practice, I often play a little loose with the rules of XP. There are a variety of reasons that lead me to do so. Sometimes I pay for not writing a test, and when I do I reflect on what about the situation made the omission so dangerous. If the only answer I can offer is "You must write the test, always.", then I worry that I have moved from behaving like a professional to behaving like a zealot. I suspect that a lot of developers make similar trade-offs.

I do appreciate the difficulty this raises for those of us who teach XP, whether at universities or in industry. If we teach a set of principles as valuable, what happens to our students' confidence in the principles when we admit that we don't follow the rules slavishly? Well, I hope that my students are learning to think, and that they realize any principle or rule is subject to our professional judgment in any given circumstance.

Of course, in the context of a course, I often ask students to follow the rules "slavishly", especially when the principles in question require a substantial change in how they think and behave. TDD is an example, as is pair programming. More broadly, this idea applies when we teach OOP or functional programming or any other new practice. (No assignment statements or sequences until Week 10 of Programming Languages!) Often, the best way to learn a new practice is to live it for a while. You understand it better then than you can from any description, especially how it can transform the way you think. You can use this understanding later when it comes to apply your judgment about potential trade-offs.

Even still, I know that, no matter how much an instructor encourages a new practice and strives to get students to live inside it for a while, some students simply won't do it. Some want to but struggle to change their habits. I feel for them. Others willfully choose not to try something new and deny themselves the opportunity to grow. I feel for them, too, but in a different way.

Once students have had a chance to learn a set of principles and to practice them for a while, I love to talk with them about choices, judgment, and trade-offs. They are capable of having a meaningful discussion then.

It's important to remember that Kent is not teaching novices. His primary audience is professional programmers, with whom he ought to be able to have a coherent conversation about choices, judgment, and trade-offs. Fortunately, a few folks on the XP list have entertained the "long game versus short game" claim and related their own experiences making these kinds of decisions on a daily basis.

If we in the agile world rely on unthinking adherence to rules, then we are guilty of proselytizing, not educating. Lots of folks who don't buy the agile approaches love it when they see examples of this rigidity. It gives them evidence to support their tenuous position about the whole community. From all of my full-time years in the classroom, I have learned that perhaps the most valuable asset I can possess is my students' trust in my goals and attitudes. Without that, little I do is likely to have any positive effect on them.

Kent's article has brought to the surface another choice agilistas face most every day: the choice between dogma and judgment. We tend to lose people when we opt for unthinking adherence to a rule or a practice. Besides, dogmatic adherence is rarely the best path to getting better every day at what we do, which is, I think, one of the principles that motivate the agile methods.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 15, 2009 8:30 PM

Robert's Rules of Order and Agile Forces

I am coming to a newfound respect for Robert's Rules of Order these days. I've usually shied away from that level of formality whenever chairing a committee, but I've experienced the forces that can drive a group in that direction.

For the last year, I have been chairing a campus-wide task force. Our topic is one on which there are many views on campus and for which there is not currently a shared vision. As a result, we all realized that our first priority was communication: discussing key issues, sharing ideas, and learning what others thought. I'll also say that I have learned a lot about what I think from these discussions. I've learned a lot about the world that lies outside of my corner of campus.

With sharing ideas and building trust as our first goals, I kept our meetings as unstructured as possible, even allowing conversations to drift off topic at times. That turned out well sometimes, when we came to a new question or a new answer unexpectedly.

We are nearing the end of our work, trying to reach closure on our descriptions and recommendations. This is when I see forces pushing us toward more structure. It is easy to keep talking, to talk around a decision so much that we find ourselves doubting a well-considered result, or even contradicting it. At this point, we are usually covering well-trod ground. A little formality -- motion, second, discussion, vote, repeat -- may help. At least I now have some first-hand experience of what might have led Mr. Robert to define his formal set of rules.

It occurs to me that Robert's Rules are a little like the heavyweight methodologies we often see in the software development world. We agile types are sometimes prone to look down on big formal methodologies as obviously wrong: too rigid, too limiting, too unrealistic. But, like the Big Ball of Mud, these methodologies came into being for a reason. Most large organizations would like to ensure some level of consistency and repeatability in their development process over time. That's hard to do when you have 100 or 1,000 architects, designers, programmers, and testers. A natural tendency is to formalize the process in order more closely to control it. If you think you value safety more than discovery, or if you think you can control the rate of change in requirements, then a big process looks pretty attractive.

Robert's Rules looks like a solution to a similar problem. In a large group, the growth in communication overhead can outpace the value gained by lots of free-form discussion. As a group grows larger, the likelihood of contention grows as well, and that can derail any value the group might gain from free-form discussion. As a group reaches the end of its time together, free-form discussion can diverge from consensus. Robert's Rules seek to ensure that everyone has a chance to talk, but that the discussion more reliably reach a closing point. They opt for safety and lowering the risk of unpredictability, in lieu of discovery.

Smaller teams can manage communication overhead better than large ones. This is one of the key ideas behind agile approaches to software development: keep teams small so that they can learn at the same time they are making steady progress toward a goal. Agile approaches can work in large organizations, too, but developers need to take into account the forces at play in larger and perhaps more risk-averse groups. That's where the sort of expertise we find in Jutta Eckstein's Agile Software Development in the Large comes in so handy.

While I sense the value of running a more structured meeting now, I don't intend to run any of my task force or faculty meetings using Robert's Rules any time soon. But I will keep in mind the motivation behind them and try to act in the spirit of a more directed discussion when necessary. I would rather still value people and communication over rules and formalisms, to the greatest extent possible.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Patterns, Software Development

May 14, 2009 9:10 PM

Computer as Medium

While waiting for a school convocation to start last night, I was digging through my bag looking for something to read. I came across a print-out of Personal Dynamic Media, which I cited in my entry about Adele Goldberg. I gladly re-re-read it. This extended passage is a good explanation of the idea that the digital computer is more than a tool, that it is a medium, and a more powerful medium than any other we humans have created and used:

"Devices" which variously store, retrieve, or manipulate information in the form of messages embedded in a medium have been in existence for thousands of years. People use them to communicate ideas and feelings both to others and back to themselves. Although thinking goes on in one's head, external media serve to materialize thoughts and, through feedback, to augment the actual paths the thinking follows. Methods discovered in one medium provide metaphors which contribute new ways to think about notions in other media.

For most of recorded history, the interactions of humans with their media have been primarily nonconversational and passive in the sense that marks on paper, paint on walls, even "motion" pictures and television, do not change in response to the viewer's wishes. A mathematical formulation -- which may symbolize the essence of an entire universe -- once put down on paper, remains static and requires the reader to expand its possibilities.

Every message is, in one sense or another, a simulation of some idea. It may be representational or abstract. The essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. Moreover, this new "metamedium" is active -- it can respond to queries and experiments -- so that the messages may involve the learner in a two-way conversation. This property has never been available before except through the medium of an individual teacher. We think the implications are vast and compelling.

I agree. But after reading this paper again, all I can think is: No wonder Kay is so disappointed by what we are doing in the world of computing in 2009. Look at what he, Goldberg, and their team were doing back in the 1970s, with technology that looks so very primitive to us these days -- not only the interactivity of the medium they were creating, but also what the people working in that medium, even elementary school students, were able to create. Even if I think only in terms of how they viewed and created language... We have not done a good job living up to the promise of that work.

If you are eager to embrace this promise, perhaps you will be inspired by this passage from the Education in the Digital Age interview I mentioned in Making Language:

Music is in the person.
An instrument amplifies it.
The computer is like that.

How are you using the computer to amplify the music inside of you? What can you do to help the computer amplify what is inside others?


Posted by Eugene Wallingford | Permalink | Categories: Computing

May 12, 2009 4:22 PM

Surprises, Problems, and Small Aircraft

Earlier today I listened to a TED talk by Tony Robbins. One snippet stood out. Here is a paraphrase, in part to clean up the language (because that's how I roll):

If I ask you whether you like variety, you'll say yes. Baloney. You like surprises you want. The others, you call problems.

Some people are better than others at accepting the surprises that they don't want. Perhaps that is why Robbins's anecdote reminded me of a story I read last summer in a book by John G. Miller called QBQ! The Question Behind the Question. The story, perhaps fictional, tells of a father and young daughter out for a fun plane ride one day, with dad behind the controls. When the plane's engine dies unexpectedly, dad turns to his daughter and says, calmly, "I'm going to need to fly the plane differently."

I don't generally make New Year's resolutions, but when I am next tempted, I'll probably think again of this story. I want to be that guy, and I'm not.

----

(Quick book review: QBQ! is pretty standard for this genre of business self-help lit. It says a lot of things we all should already know, and probably do. But there are days when some of us need a reminder or a little pep talk. This book is full of short pep talks. It's a quick read and good enough at its task, as long as you remember that unless you change your behavior books like these are nothing but empty calories. A bit like software design methodologies.)


Posted by Eugene Wallingford | Permalink | Categories: Personal, Software Development

May 12, 2009 11:27 AM

Lessons from Compilers Course Experiment

Though final grades are not all yet submitted, the semester is over. We made adjustments to the specification in my compilers course, and the students were able to produce a compiler that generated compilable, executable Java code for a variety of source programs. For the most part, the issues we discussed most at their finals week demo dealt with quality control. We found some programs that confounded their parser or code generator, which were evidence of bits of complexity they had not yet mastered. There is a lesson to be learned: theory and testing often take a back seat to team dynamics and practices. Given the complexity of their source language, I was not too disappointed with their software, though I think this team fell short of its promise. I have been part of teams that have fallen similarly short and so can empathize.

So, what is the verdict on a couple of new ideas we tried this semester: letting the team design its own source language and working in a large team of six? After their demo, we debriefed the project as a group, and then I asked them to evaluate the project and course in writing. So I have some student data on which to draw as well as my own thoughts.

On designing their own language: yes, but. Most everyone enjoyed that part of the project, and for some it was their favorite activity. But the students found themselves still churning on syntax and semantics relatively late into the project, which affected the quality and stability of their parser. We left open the possibility of small changes to the grammar as they learned more about the language by implementing it, but this element of reality complicated their jobs. I did not lock down the language soon enough and left them making decisions too late in the process.

One thing I can do the next time we try this is to put a firmer deadline on language design. One thing that the students and I both found helpful was writing programs in the proposed language and discussing syntactic and semantic language issues grounded in real code. I think I'll build a session or two of this into the course early, before the drop-dead date for the grammar, so that we can all learn as much as we can about the language before we proceed on to implementing it.

We also discussed the idea of developing the compiler in a more agile way, implementing beginning-to-end programs for increasing subsets of the language features until we are done. This may well help us get better feedback about language design earlier, but I'm not sure that it addresses the big risks inherent in letting the students design their own language. I'll have to think more on this.

On working in a team of size six: no. The team members and I were unanimous that a team of size six created more problems than it solved. My original thinking was that a larger team would be better equipped to do the extra work introduced by designing their own language, which almost necessarily delayed the start of the compiler implementation. But I think we were bitten by a preemptive variation of Brooks's Law -- more manpower slowed them down. Communication overhead goes up pretty quickly when you move from a team of three to a team of six, and it was much harder for the team to handle all of its members' ideas effectively. This might well be true for a team of experienced developers, but for a team of undergrads working on their first collaborative project of this scale, it was an occasional show-stopper. I'll know better next time.

As an aside, one feature the students included in the language they designed was first-class functions. This clearly complicated their syntax and their implementation. I was pleased that they took the shot. Even after the project was over and they realized just how much extra work first-class functions turned out to be, the team was nearly unanimous in saying that, if they could start over, they would retain that feature. I admire their spunk and their understanding of the programming power this feature gave to their language.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 08, 2009 6:31 AM

The Annual Book March

There is a scene in the television show "Two and a Half Men" in which neurotic Alan has a major meltdown in a bookstore. He has decided to use some newly-found free time to better himself through reading the classics. He grabs some books from one shelf, say, Greek drama, and then from another, and another, picking up speed as he realizes that there aren't enough hours in a day or a lifetime to read all that is available. This pushes him over the edge; he makes a huge scene, and his brother is embarrassed in front of the whole store.

I know that feeling this time of year. When I check books out from the university library, the due date is always next May, at the end of finals week for spring semester. Over the year, I run into books I'd like to read, new and old, in every conceivable place: e-mail, blogs, tweets, newspapers, ... With no particular constraint other than a finite amount of shelf space -- and floor space, and space at home -- I check them out.

Now is the season of returning. I gather up all the books on my shelves, and on my floors, and in my home. For most of my years here, I have renewed them. Surely I will read them this summer, when time is less rare, or next year, on a trip or a break. At the beginning of the last couple of Mays, though, I have been trying to be more honest with myself and return books that have fallen so far down the list as to be unlikely reads. Some are far enough from my main areas of interest or work that they are crowded out by more relevant books. Others are in my area of interest but trumped by something newer or more on-point.

Now, as I walk to the library, arms full, to return one or two or six, I often feel like poor, neurotic Alan. So many books, so little time! How can I do anything but fall farther and farther behind with each passing day? Every book I return is like a little surrender.

I am not quite as neurotic as Alan; at least I've never melted down in front of the book drop for all my students to see. I recognize reality. Still, it is hard to return almost any book unread.

I've had better habits this year, enforcing on myself first a strict policy of returning two books for every new one I checked out, then backsliding to an even one-for-one swap. As a result, I have far fewer books to return or renew. Still, this week I have surrendered Knuth's Selected Papers on Analysis of Algorithms, David Berlinski's The Advent of the Algorithm, and Jerry Weissman's Presenting to Win. Worry not; others will take their place, both old (Northcote Parkinson, Parkinson's Law) and new: The Passionate Programmer and Practical Programming. The last of these promises an intro to programming for the 21st century, and I am eager to see how well they carry off the idea.

So, in the end, even if something changed radically to make the life of a professor less attractive, I agree with Learning Curves on the real reason I will never give up my job: the library.


Posted by Eugene Wallingford | Permalink | Categories: General

May 06, 2009 4:16 PM

Making Language

I've been catching up on some reading while not making progress on other work. I enjoyed this interview with Barbara Liskov, which discusses some of the work that earned her the 2008 Turing Award. I liked this passage:

I then developed a programming language that included this idea [of how to abstract away from the details of data representation in programs]. I did that for two reasons: one was to make sure that I had defined everything precisely because a programming language eventually turns into code that runs on a machine so it has to be very well-defined; and then additionally because programmers write programs in programming languages and so I thought it would be a good vehicle for communicating the idea so they would really understand it.

Liskov had two needs, and she designed a language to meet them. First, she needed to know that her idea for how to organize programs was sound. She wanted to hold herself accountable. A program is an effective way to implement an idea and show that it works as described. In her case, her idea was about _writing_ programs, so she created a new language that embodied the idea and wrote a processor for programs written in that language.

Second, she needed to share her idea with others. She wanted to teach programmers to use her idea effectively. To do that, she created a language. It embodied her ideas about encapsulation and abstraction in language primitives that programmers could use directly. This made it possible for them to learn how to think in those terms and thus produce a new kind of program.
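To make the idea concrete, here is a minimal sketch, in Ruby rather than CLU, of the kind of abstraction barrier Liskov's work gave programmers: a data abstraction whose representation is hidden behind a small set of operations. The IntSet class and its operations are my own illustration, not an example from Liskov or from CLU itself.

# A tiny data abstraction: an integer set with a hidden representation.
# Clients see only the operations; the internal Hash could be replaced by
# a sorted array or a bit vector without touching any client code.
class IntSet
  def initialize
    @elements = {}          # representation detail, invisible to clients
  end

  def insert(n)
    @elements[n] = true
    self
  end

  def member?(n)
    @elements.key?(n)
  end

  def size
    @elements.size
  end
end

s = IntSet.new
s.insert(3).insert(5).insert(3)
puts s.member?(5)   # => true
puts s.size         # => 2

CLU's clusters enforced that barrier in the language itself; this sketch only approximates it, since Ruby will happily let a determined client peek at the representation.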

This is a great example of what language can do, and why having the power to create new languages makes computer science different. A program is an idea and a language is a vehicle for expressing ideas. We are only beginning to understand what this means for how we can learn and communicate. In the video Education in the Digital Age, Alan Kay talks about how creating a new language changes how we learn:

The computer allows us to put what we are thinking into a dynamic language and probe it in a way we never could before.

We need to find a way to help CS students see this early on so that they become comfortable with the idea of creating languages to help themselves learn. Mark Guzdial recently said much the same thing: we must help students see that languages are things you build, not just use. Can we introduce students to this idea in their introductory courses? Certainly, under the right conditions. One of my colleagues loves to use small BASIC-like interpreters in his intro course or his assembly language courses. This used to be a common idea, but as curricula and introductory programming languages have changed over time, it seems to have fallen out of favor. Some folks persist, perhaps with a simple command language. But we need to reinforce the idea throughout the curriculum. This is less a matter of course content than of the mindset of the instructor.
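To give a sense of how small such an exercise can be, here is a sketch of an interpreter for a made-up three-command language. The commands and their semantics are invented purely for illustration, but even a toy like this lets students experience a language as something they built.

# An interpreter for a made-up command language:
#   set x 5      -- bind a variable to a number
#   add x 3      -- add a number to a variable
#   print x      -- print a variable's value
def run(program)
  env = Hash.new(0)                         # variables default to 0
  program.each_line do |line|
    cmd, name, arg = line.split
    case cmd
    when 'set'   then env[name] = arg.to_i
    when 'add'   then env[name] += arg.to_i
    when 'print' then puts env[name]
    when nil     then next                  # skip blank lines
    else raise "unknown command: #{cmd}"
    end
  end
end

run("set x 5\nadd x 3\nprint x\n")          # prints 8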

After reading so much recently about Liskov, I am eager to spend some time studying CLU. I heard of CLU as an undergraduate but never had a chance for in-depth study. Even with so many new languages to dive into, I still have an affinity for older languages and for the original literature on many CS topics. (If I were in the humanities, I would probably be a classicist, not a scholar of modern lit or pop culture...)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 05, 2009 3:37 PM

Problem-Based Universities and Other Radical Changes

Last week, every administrator at my university seemed to be talking about Mark Taylor's End the University as We Know It. Like many other universities, we have been examining pretty much everything we do in light of significant changes in the economy (including, for public universities, drastic reductions in state funding) and demographics. Taylor, the chair of a humanities department at Columbia University, starts with a critique of graduate programs, which he contends produce a product few people need yet persist because they provide an essential commodity for the modern university: cheap teaching labor that frees faculty to do research. From there, he asserts that American higher education should undergo a radical transformation and proposes six steps in the direction he thinks best.

The second proposal caught my attention:

Abolish permanent departments, even for undergraduate education, and create problem-focused programs.

In recent years I have developed a strong belief in the value of project-based education, especially in CS courses. I also have a fondness for the idea of problem-based learning, which Owen Astrachan has been touting for some time now. I think of these as having at least one valuable attribute in common: students do something real in a context where they have to make real decisions.

Taylor proposes that we build the university of today not around permanent discipline-specific departments but around evolving programs centered on "zones of inquiry", Big Problems that matter across disciplines. When I discussed this idea with my provost, I told him that I was fortunate: my discipline, Computer Science, will have a role to play in most every problem-focused program for the foreseeable future. Talk about job security!

This idea may sound wonderful, but there are risks. As Michael Mitzenmacher points out, you still need discipline-specific expertise. Taylor offers as an example a program built around pressing issues related to water, which would ...

... bring together people in the humanities, arts, social and natural sciences with representatives from professional schools like medicine, law, business, engineering, social work, theology and architecture.

To do that, you need people with expertise in the humanities, arts, social and natural sciences, medicine, law, business, engineering, social work, theology and architecture. Where will these experts come from? Taylor doesn't say as much, but there is a hint in his article, and in others like it, that collaborative work on big problems is all that we need. But I wonder how such a university would prepare students who would become the next generation of experts in the arts, sciences, and professions, let alone the next generation of researchers who will discover the ideas and create the tools we need to solve the big problems. I don't suppose there is any reason in principle that a university organized this way could not prepare experts, and my Inner Devil's Advocate is already working on ways that it might succeed swimmingly. But I get a little worried whenever I hear people talking about making radical changes to complex systems without having explicitly considered the whole story.

This particular risk is at the front of my mind because we face the same risk, at a different scale, when we create problem- and project-focused curricula. There is a natural tension between depth in the discipline and working in the context of a specific application or other domain. If I build an intro programming course around, say, media computation or biology, will students learn all of the CS they should learn in that course? Some time will be spent on images and sounds, or on DNA and amino acid base pairs, and that is time not spent on procedures, arrays, pointers, and big-O analysis. I am well aware that adding more content to a course does not mean that everyone will get it all. But some do, and maybe those are the people who will be the experts of the future?
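
To make the trade-off concrete, here is a minimal sketch of the kind of exercise a media computation course might assign: turning a color photo into a grayscale one. I've written it with the Pillow imaging library rather than any particular course's toolkit, so the names and file paths are illustrative, not what students in our sections actually see. The point is that the loops, the grid of pixels, and the arithmetic we'd otherwise teach with bare arrays are all still there; they just wear a different costume.

    # A media-computation-flavored exercise: convert an image to grayscale.
    # Uses the Pillow library (pip install Pillow); any imaging toolkit would do.
    from PIL import Image

    def grayscale(in_path, out_path):
        img = Image.open(in_path).convert("RGB")   # ensure each pixel is an (r, g, b) triple
        pixels = img.load()                        # 2-D pixel access object
        width, height = img.size

        # The same nested loops and index arithmetic we teach with plain arrays,
        # applied to pixels instead of abstract numbers.
        for x in range(width):
            for y in range(height):
                r, g, b = pixels[x, y]
                gray = (r + g + b) // 3            # simple average; a fuller course might weight the channels
                pixels[x, y] = (gray, gray, gray)

        img.save(out_path)

    grayscale("photo.jpg", "photo-gray.jpg")       # hypothetical file names

Whether that costume helps or distracts depends on the student, which is exactly the tension we have seen in our own sections.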

We have used media computation as a theme in a few sections of our intro course over the last four years, and we observe this tension in the results. Some students get just what we want them to get out of the theme: motivation to dig deep, experiment, and discuss important ideas. Others don't connect with the theme, and they end up knowing less CS-specific content. The prof who has taught this course most often is beginning to see ways to trade back some of the media context for opportunities to program in other contexts, which may reach a broader variety of students than a pure media comp course does.

When I think about how this trade-off would scale to the level of an entire university in programs that bring together eight, twelve, or twenty disciplines, I realize that we would need to think carefully before proceeding too far. Perhaps Taylor hopes that his article will cause faculty and administration to begin the process of thinking carefully. Some people have been thinking about these issues for a while and even put some of those thoughts into writing (PDF).

The law of unintended consequences lurks in the darkness behind many suggestions of radical changes to complex systems. For example, Taylor suggests that universities abolish tenure. Many would agree. After being a department head for many years, I appreciate many of the advantages of this idea. But consider what might happen in disciplines whose faculty are in great demand in industry. Hmm, such as computer science. Without tenure and its concomitant security, I suspect that a fair number of CS faculty would find their way into industry. Right now, the allure of bigger paydays in industry is balanced against all sorts of risk. Universities offer a level of security in exchange for much lower salaries. Without that security, I might be better off out there in the real world writing programs, hoping that one turns out to be the next Twitter or the IDE that revolutionizes how we program in dynamic languages.

I am not suggesting that we not think radical thoughts or consider how we might do things differently. In fact, I spend a large part of my administrative and academic lives doing just that. I do suggest that we not rush headlong into ideas before thinking them through, even when they seem tantalizingly right at first blush.

As with so many articles of this sort, Taylor may hope simply to cause people to consider new ideas, not adopt the specific prescriptions he offers. Certainly, many schools are already experimenting with ideas such as greater collaboration across disciplines and institutions. As in the case of courses that are focused on problems or projects, the rub is in balancing the forces at play so that we achieve our goal of better helping students to learn.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 04, 2009 7:59 AM

Best Case Scenario

My expectations for the 500 Festival Mini-Marathon were rather low. I've been battling subpar health for a year, so my mileage has been down. I've gone through a few dry stretches of six to eight weeks without running much or at all. I've been running again for the last ten weeks or so, but I've managed to reach only the mid-twenties of miles in any given week. My body just isn't ready for running many miles, let alone racing them.

My running buddy, Greg, and I arrived in downtown Indianapolis half an hour before the start time of the race. It was overcast and cool -- around 47 degrees -- with the slightest of breezes. I cast my lot with the possibility that we'd not run in the rain and left my cap in my checked bag, but I did throw on my thinnest pair of gloves. A good choice.

When you run with 35,000 other runners, the start of a race is always a little crowded. After the official start of the race, Greg and I shuffled along for six and a half minutes before we reached the starting line. From that point, we ran in tight traffic for only a third of a mile or so before we could move unencumbered. I was surprised by how quickly that moment came. I was also surprised at the pace of our first couple of miles. Even with the shuffling start we clocked a 9:08 for Mile 1, and then we did Mile 2 in 8:37. "I won't be able to keep this up much longer," I said, "so don't feel bad about leaving me behind." But I didn't feel as if I were pressing, so I hung steady.

Talking as we ran helped me stay steady. I have gone to races with Greg and other friends before, but I have never actually run with them. We spend time together before and after the race, but during it we find our own strides and run our own races. This time, we actually ran together. The miles clicked off. 8:33. 8:32. Can this be? 8:42. Ah, a little slower.

Then we reached the famed Indianapolis Motor Speedway, home to the 500-mile race that gives its name to the race I was running. Race cars navigate this brick and asphalt oval in 40 seconds, but thousands of runners staked their claims in anywhere from twelve minutes to over an hour. We saw the 6-, 7-, and 8-mile markers inside the track, along with the 10K split and the halfway point. 8:46. 8:49. 8:44. Slower, yet hanging steady.

I felt a slight tug in my left calf just before the 9-mile marker. I did not mention it out loud, because I did not want to make it real. We kept talking, and I kept moving. 8:46. 8:29. What? 8:29?? The tenth mile was our fastest yet. I felt good -- not "just getting started" strong, but "I can keep doing this" strong. I thought of Barney Stinson's advice and just kept running.

I took a last sip of fuel just past the 10-mile mark. 8:22. Greg and I decided that we would let ourselves really run the last mile if we still felt good. We must have. We clipped off miles 12 and 13 in 16:16. Then came that last mad rush to the finish line. 1:52:25. I have never been so happy to run my second worst time ever. This was 8-10 minutes faster than I imagined I could run, and I finished strong, thinking I could do a little more if I had to. (Not another half, of course -- I am nowhere near marathon shape!)

Talking throughout the race definitely helped me. It provided a distraction from the fact that we were running hard, that the miles were piling up behind us. I never had a chance for my mind to tell me I couldn't do what I was doing, because it didn't have a chance to focus on the distance. Our focus was on the running, on the moment. We took stock of each mile as a single mile and then took on the current mile. In an odd way, it was a most conscious race.

The only ill effects I have this morning are a barely sore left hamstring that gave its all over those last two miles and a minor headache. In all other ways I feel good and look forward to hitting the trails tomorrow morning with another challenge in mind.

The weekend itself was not an uninterrupted sequence of best case scenarios... As I pulled out of the parking garage after picking up my race packet in downtown Indianapolis, my car began gushing coolant. Was there any irony in the fact that I was at that moment listening to Zen and the Art of Motorcycle Maintenance? I am not one of those guys who tinkers with his own engine, but I know enough to know that you can't go far without coolant.

Still, I did not face a worst case scenario. I called the friend with whom I was to dine that night, and he came to get me. He arranged for a tow, and while I ran on Saturday morning a professional who knows his way around under the hood fixed the problem -- a faulty reservoir -- for only a couple of hundred dollars. Given the circumstances, I could hardly have asked for a better resolution.

Race day, May 2, was one year to the day since my last 100% healthy workout... I do not think I am yet 100% healthy again, and I did not finish the half marathon with Ernie Banks's immortal words on my lips ("Let's play two!"). But I have to say: Great day.


Posted by Eugene Wallingford | Permalink | Categories: Personal, Running