After being informed repeatedly that I will need either a Windows box or Office for the Mac in order to be a department head, due to the sheer volume of Word and Excel documents that flow through the university hierarchy, I found hope in this piece of news. If Norway can do it, why not the State of Iowa or my university?
I once read a letter to the editor of our local paper calling for the state to adopt an open-formats-only policy, as a matter of fairness to competing businesses and to citizens. Alas, for most people the immediate benefits of standardization outweigh the long-term economic effects, and philosophical issues rarely enter the discussion at all. I suspect that it's easier for this sort of edict to originate at the top of the heap, because then groups lower in the hierarchy face a consideration that trumps standardization.
In any case, I think that I will get by just fine, at least for now. I could use OpenOffice, if I feel like running the X Window System. But my preferred tool these days is NeoOffice/J, which is getting more solid with every release. (It's finally out of beta.) While it's not a native OS X app -- it looks and feels like a Windows port -- it does all I need right now for handling files in Office formats.
I have spent many, many years gently encouraging department heads not to send me Word files, offering alternatives that were as effortless as possible. I'm not sure if I'm ready to take on higher levels of university administration. But I do feel some obligation to lead by teaching on the issue of open standards in computing.
This spring I was asked to participate on a panel at XP2005, which recently wrapped up in the UK. This panel was on agile practices in education, and as you may guess I would have enjoyed sharing some of my ideas and learning from the other panelists and from the audience. Besides, I've not yet been to any of the agile software development conferences, and this seemed like a great opportunity. Unfortunately, work and family duties kept me home for what is turning out to be a mostly at-home summer.
In lieu of attending XP2005, I've enjoyed reading blog reports of the goings-on. One of the highlights seems to have been Laurent Bossavit's Coding Dojo workshop. I can't say that I'm surprised. I've been reading Laurent's blog, Incipient(thoughts), for a while and exchanging occasional e-mail messages with him about software development, teaching, and learning. He has some neat ideas about learning how to develop software through communal practice and reflection, and he is putting those ideas into practice with his dojo.
The Coding Dojo workshop inspired Uncle Bob to write about the notion of repeated practice of simple exercises. Practice has long been a theme of my blog, going back to one of my earliest posts. In particular, I have written several times about relatively small exercises that Joe Bergin and I call etudes, after the compositions that musicians practice for their own sake, to develop technical skills. The same idea shows up in an even more obviously physical metaphor in Pragmatic Dave's programming katas.
The kata metaphor reminds us of the importance of repetition. As Dave wrote in another essay, students of the martial arts repeat basic sequences of moves for hours on end. After mastering these artificial sequences, the students move on to "kumite", or organized sparring under the supervision of a master. Kumite gives the student an opportunity to assemble sequences of basic moves into sequences that are meaningful in combat.
Repeating programming etudes can offer a similar experience to the student programmer. My re-reading of Dave's article has me thinking about the value of creating programming etudes at two levels, one that exercises "basic moves" and one that gives the student an opportunity to assemble sequences of basic moves in the context of a more open-ended problem.
But the pearl in my post-XP2005 reading hasn't been so much the katas or etudes themselves, but one of the ideas embedded in their practice: the act of emulating a master. The martial arts student imitates a master in the kata sequences; the piano student imitates a master in playing Chopin's etudes. The practice of emulating a master as a means to developing technical proficiency is ubiquitous in the art world. Renaissance painters learned their skills by emulating the masters to whom they were apprenticed. Writers often advise novices to imitate the voice or style of a writer they admire as a way to internalize what it means to have a voice or follow a style. Rather than creating a mindless copycat, this practice allows the student to develop her own voice, to find or develop a style that suits her unique talents. Emulating the master constrains the student, which frees her to focus on the elements of the craft without the burden of speaking in her own voice or being labeled as "derivative".
Uncle Bob writes of how this idea means just as much in the abstract world of software design:
Michael Feathers has long pondered the concept of "Design Sense". Good designers have a "sense" for design. They can convert a set of requirements into a design with little or no effort. It's as though their minds were wired to translate requirements to design. They can "feel" when a design is good or bad. They somehow intrinsically know which design alternatives to take at which point.
Perhaps the best way to acquire "Design Sense" is to find someone who has it, put your fingers on top of theirs, put your eyeballs right behind theirs, and follow along as they design something. Learning a kata may be one way of accomplishing this.
Watching someone solve a kata in a workshop can give you this sense. Participating in a workshop with a master, perhaps as programming partner, perhaps as supervisor, can, too.
The idea isn't limited to software design. Emulating a master is a great way to learn a new programming language. About a month ago, someone on the domain-driven design mailing list asked about learning a new language:
So assuming someone did want to learn to think differently, what would you go with? Ruby, Python, Smalltalk?
Ralph Johnson's answer echoed the importance of working with a master:
I prefer Smalltalk. But it doesn't matter what I prefer. You should choose a language based on who is around you. Do you know somebody who is a fan of one of these languages? Could you talk regularly with this person? Better yet, could you do a project with this person?
By far the best way to learn a language is to work with an expert in it. You should pick a language based on people who you know. One expert is all it takes, but you need one.
The best situation is where you work regularly with the expert on a project using the language, even if it is only every Thursday night. It would be almost as good if you would work on the project on your own but bring code samples to the expert when you have lunch twice a week.
It is possible to learn a language on your own, but it takes a long time to learn the spirit of a language unless you interact with experts.
Smalltalk or Scheme may be the best in some objective (or entirely subjective!) sense, but unless you can work with an expert... it may not be the right language for you, at least right now.
As a student programmer -- and aren't we all? -- find a person to whom you can "apprentice" yourself. Work on projects with your "master", and emulate his style. Imitate not only high-level design style but also those little habits that seem idiosyncratic and unimportant: name your files and variables in the same way; start your programming sessions with the same rituals. You don't have to retain all of these habits forever, and you almost certainly won't. But in emulating the master you will learn and internalize patterns of practice, patterns of thinking, and, yes, patterns of design and programming. You'll internalize them through repetition in the context of real problems and real programs, which give the patterns the richness and connectedness that make them valuable.
After lots of practice, you can begin to reflect on what you've learned and to create your own style and habits. In emulating a master first, though, you will have a chance to see deeper into the mind and actions of someone who understands, and to use what you see to begin to understand yourself better, without the pressure of needing to have a style of your own yet.
If you are a computer scientist rather than a programmer, you can do much the same thing. Grad students have been doing this as long as there have been grad students. But in these days of the open-source software revolution, any programmer with a desire to learn has ample opportunity to go beyond the Powerbook on his desk. Join an open-source project and interact with a community of experts and learners -- and their code.
And we still have open to us a more traditional avenue, in even greater abundance: literature. Seek out a writer whose books and articles can serve in an expert's stead. Knuth, Floyd, Beck, Fowler... the list goes on and on. All can teach you through their prose and their code.
Knowing and doing go hand in hand. Emulating the masters is an essential part of the path.
No race ever seems to go quite the way I expect it to. This year's Sturgis Falls Half Marathon was no different.
I went to sleep Saturday night to rain. Little did I know that the rain would continue all night, or that a major thunderstorm would roll in at about 2:00 AM and continue for five hours. With the race scheduled to start at 7:00 AM, runners and organizers alike were left to wait and wonder when the race would start.
The thunderstorm ended right around 7:00 AM, though the rain continued for another 50 minutes or so. But once the storm had passed, the organizers and course volunteers did a marvelous job setting things up for a 7:45 AM start.
Like the race details, my run did not go as planned. Last year, I ran a personal-best 1:40, which over 13.1 miles averages to 7:37 minutes per mile. This year, I was aiming for a 7:00 minute pace. With the rain delay, I didn't do a very good job of warming up. But the biggest effect of the rain was, well, the water. With water on the course, we all had to run differently. I also made the tactical decision to wear an older pair of shoes, because I didn't want to ruin a relatively new pair with an hour and a half of pounding through puddles. (That's the surest way to end the useful life of running shoes: run for a long time in them soaking wet.)
You can see the effects of my legs tiring in my mile splits. Here are the first eight:
6:56 - 6:49 - 7:02 - 7:02 -
7:03 - 7:08 - 7:08 - 7:13
I was slowing down a bit. I noticed that those last three miles had fallen off pace, so I tried to pick up the pace to get back on target. Here are my next four miles:
7:10 - 7:40 - 7:22 - 7:40
Ouch -- and I'm not talking about my legs. "Picking up the pace" only got me a 7:10, and then I really slowed down. I hadn't anticipated running any miles as slow as 7:40, and I became a little dispirited when I realized that I didn't have what I needed for the day.
With 1.1 miles left, I gave it all I had and ran a 7:07 mile. With the crowd cheering all finishers on in the home stretch, I sprinted the last 1/10 of a mile in 0:40.
The final result: 1:34:11, nearly six minutes faster than my old PR.
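For the curious, the pace arithmetic in this entry can be checked with a few lines of Python. The times are the ones reported above; the helper names are mine:

```python
def pace_per_mile(total_seconds, miles):
    """Average pace in seconds per mile."""
    return total_seconds / miles

def fmt(seconds):
    """Format a number of seconds as M:SS."""
    return f"{int(seconds // 60)}:{int(round(seconds % 60)):02d}"

# Last year's PR, taken as a flat 1:40:00 over 13.1 miles.
# (This gives 7:38; the 7:37 figure above suggests the actual
# PR was just under 1:40.)
print(fmt(pace_per_mile(100 * 60, 13.1)))       # 7:38

# This year's result: 1:34:11 over the same distance.
print(fmt(pace_per_mile(94 * 60 + 11, 13.1)))   # 7:11
```
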
Once the race was over, my dispiritedness turned to cheerfulness. My time was good, given the conditions. The last half of the run was difficult, but I stuck with it and finished strong. Life is good. Just because it doesn't always follow the script in my mind doesn't make it otherwise.
I can still do a better job of preparing for my long races during the last week. I didn't eat properly this time; too much "stressful" food in the last 72 hours. Worse, I ran too fast for a few miles each on Tuesday, Wednesday, and Thursday, which I know is a recipe for tired legs. I didn't think I was running too fast at the time, but next time I'll know to run slower, or at least not so far.
The other place I can improve is to slow down at the beginning of the race. Somehow, I let myself go way too fast for the first two miles, especially the second one. I'd be better off running the first two miles in 7:03 and saving some energy and leg strength for later in the race. Now that I'm able to run faster, it's even more important to pace myself early.
While I was visiting with friends and cheering other runners on after the race, one of them gave me one last bit of good race news -- I finished 5th in the 40-44 age group! This is my first medal in a race longer than 5K. Check me out in the unofficial race results. I'm 32nd among the men, which is a big improvement over even last year, when I came in 73rd.
Progress past, progress future. A good place to be: today.
Update (08/22/08): Situational Leadership ® is a registered trademark of the Center for Leadership Studies, Inc. This accounts for use of the ® symbol and the capitalization of the phrase throughout the entry.
The more I think, read, and learn about management and leadership, the more I believe that managers will succeed best if they invert the traditional pyramid that has the president of a company at the top, managers in the middle, and employees on the bottom. I wrote about my credo of leader-as-servant in a recent essay. Now consider this quote:
... managers should work for their people, ... and not the reverse. ... If you think that your people are responsible and that your job is to be responsive, you really work hard to provide them with the resources and working conditions they need to accomplish the goals you've agreed to. You realize your job is not to do all the work yourself or sit back and wait to 'catch them doing something wrong', but to roll up your sleeves and help them win. If they win, you win.
This is from page 18 of "Leadership and the One-Minute Manager", by Kenneth Blanchard, Patricia Zigarmi, and Drea Zigarmi. I've not read Blanchard's "One-Minute Manager" yet, but I ran across this thin volume in the stacks while checking out some other books on leadership and figured it would be worth a read. I've read some of Blanchard's stuff, and I find his staged storytelling approach an enjoyable way to learn material that might otherwise be pretty dry.
This book extends the principles of the one-minute manager to "situational leadership", the idea that managers should adapt how they work with employees according to the context, specifically the task to be done and the employee's level of competence and confidence with the task. This approach emphasizes communication between manager and the employee, which allows the two to agree on a leadership style most suitable for the situation.
On this approach, leadership style is "how a manager behaves, over time, when trying to influence the performance of others". The authors are careful to remind me that my style is not what I say my style is, or how I mean to behave, but how others say I behave. This reminds me of a basic principle of object-oriented programming: a message is interpreted in the context of the receiver, not the sender. The same principle applies to human communication.
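That OO principle can be made concrete with a small sketch. The classes and the message here are my own invention, purely for illustration:

```python
# The sender dispatches the same message ("give_feedback") to every
# receiver; each receiver supplies its own interpretation.  The meaning
# lives with the listener, not the speaker.
class NewHire:
    def give_feedback(self, note):
        return f"Detailed walkthrough: {note}"

class Veteran:
    def give_feedback(self, note):
        return f"Quick heads-up: {note}"

for employee in (NewHire(), Veteran()):
    # Identical call site, different behavior per receiver.
    print(employee.give_feedback("the report needs revision"))
```
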
In order to be a good situational leader, a manager must have three skills: diagnosis (reading an employee's development level on a given task), flexibility (being able to use a variety of leadership styles), and contracting (agreeing with the employee on which style to use).
An employee's development level and performance are a product of two variables, which Blanchard calls 'competence' and 'commitment'. I tend to think of these variables in terms of skill and confidence. Whatever the terms, managers would do well to think about these variables when facing inadequate performance in the workplace. Performance problems are usually competence problems, commitment problems, or both.
Blanchard asserts that most people follow a typical path through this two-dimensional space on the way from being a novice to being an expert: from low competence and high commitment (Development Level 1), to some competence but low commitment (Level 2), to high competence with variable commitment (Level 3), and finally to high competence and high commitment (Level 4).
The odd change in commitment level follows the psychology of learners. Novices begin with a high level of commitment, eager to learn and willing to be instructed. But, "as people's skills grow, their confidence and motivation drop". Why? It's easy to become discouraged when you realize just how much you have left to learn, how much work it will take to become skilled. After commitment bottoms out, it can recover as the learner attains a higher level of competence and begins to see the proverbial light at the end of the tunnel. Blanchard's whole approach to Situational Leadership ® is for the leader to adapt her style to the changes of the developing learner in a way that maximizes the chance that the learner's commitment level recovers and grows stronger over time. That is essential if the manager hopes for the learner to stick with the task long enough to achieve a high level of competence.
I accept this model as useful because I have observed the same progression in my students, especially when it comes to problem solving and programming. They begin a course eager to do almost anything I ask. As they learn enough skills to challenge harder problems, they begin to realize how much more they have to learn in order to be able to do a really good job. Without the right sort of guidance, university students can lose heart, give up, change their majors. With the right sort of guidance, they can reverse emotional course and become powerful problem solvers and programmers. How can an instructor provide the right sort of guidance?
As an aside, I have a feeling I'll be approaching Development Level 2 soon with my new administrative tasks. At times, I have a glimpse of how hard it will be to manage all the information coming at me and to balance the number of different activities and egos and goals that I encounter. Maybe a little self-awareness will help me combat any sudden desire to cry out like Job. :-)
The way a manager or instructor can provide the right sort of guidance to a particular employee at a particular point in time is to choose a leadership style that fits the context. There are four basic styles of leadership available to the situational leader: directing, coaching, supporting, and delegating.
Like development levels, the leadership styles are a combination of two variables: directive behavior, by which the manager carefully structures the environment and closely supervises the employee, and supportive behavior, by which the manager praises, listens, and helps the employee. The idea is to choose the right combination to meet the needs of the employee on a particular task at a particular moment in time. As the authors say, "Leaders need to do what the people they supervise can't do for themselves at the present moment."
We can think of the four leadership styles as occupying quadrants in a 2x2 grid, with supportive behavior increasing from bottom to top and directive behavior decreasing from left to right:

    high supportive:   coaching  | supporting
    low supportive:    directing | delegating
To match the unusual path that most people follow through the levels of development, a situational leader needs to follow a particular path through the two-dimensional space of leadership styles, in the order listed above: directing for Level 1, coaching for Level 2, supporting for Level 3, and delegating for Level 4. While the actual progression will depend on the specific employee, one can imagine the stereotypical path to be a bell-like curve from the lower left of the grid, up through the top quadrants, down through the lower right of the grid:
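A programmer might summarize the level-to-style mapping described above in a few lines. This is only a sketch; the dictionary and function names are mine:

```python
# Situational leadership: each development level calls for one style,
# per the progression described in this entry.
STYLE_FOR_LEVEL = {
    1: "directing",    # high directive, low supportive
    2: "coaching",     # high directive, high supportive
    3: "supporting",   # low directive, high supportive
    4: "delegating",   # low directive, low supportive
}

def leadership_style(development_level):
    """Return the style suited to a given development level (1-4)."""
    return STYLE_FOR_LEVEL[development_level]

print(leadership_style(2))  # coaching
```
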
This progression can help students, too. They usually need a level of coaching and support that increases throughout the first year, because they can become discouraged when their skills don't develop as quickly as they had hoped. At the same time, the instructor continues to structure the course carefully and direct their actions, though as the year progresses the students can begin to shoulder more of the burden for deciding what to do when.
Communication remains important. Managers have to tell employees what they are doing, or the employees are likely to misinterpret the intention behind the leadership style. An employee who is being directed will often interpret the manager's constant attention as "I don't trust you", while an employee who is being delegated to may interpret the manager's lack of attention as "I don't care about you". When people lack information, they will fill in the blanks for themselves.
That's where contracting comes in. The situational leader communicates with the people he supervises throughout the entire process of diagnosis and style selection. The person being supervised should be in agreement with the supervisor on the development level and thus the leadership style. When they disagree on development level, the supervisor should generally defer to the employee, on the notion that the employee knows himself best. The manager may then contract for a bit closer supervision over the short-term in order to confirm the diagnosis. When they disagree on leadership style, the employee should generally defer to the manager, who has developed experience in working with employees flexibly. Again, though, the manager may then agree to revisit the choice in a short time and make adjustments based on the employee's performance and comfort.
Communication. Feedback. Adaptation. This all sounds 'agile' to me.
Blanchard relates the Situational Leadership ® approach to "teaching to the exam":
Once your people are clear on their goals..., it's your job to do everything you can to help them accomplish those goals ... so that when it comes to performance evaluation ..., they get high ratings....
For Blanchard, teaching to the exam is a good idea, the right path to organizational and personal success. I tend to agree, as long as the leader's ultimate goal is to help people to develop into high competence, high commitment performers. The goal isn't to pass the exam; the goal is to learn. The leader's job is to help the learner succeed.
Like the agile methods, this process is low in ceremony but high in individual discipline, on the part of both the leader and the learner.
What should the situational leader do when a novice's performance is way out of bounds?
You go back to goal setting. You say, 'I made a mistake. I must have given you something to do that you didn't understand. Let's backtrack and start again.'
Trainers have to be able to praise, and they have to be able to admit to mistakes. We teachers should keep this in mind -- we aren't often inclined to admit that we are the reason students aren't succeeding. Instructors do face a harder task than Blanchard's managers in this regard, though. Situational Leadership ® is based on one-on-one interactions, but an instructor may have a class of 15 or 30 or 100, with students at various levels of development and performance and confidence. In an ideal world, all teacher-student interactions might be one-on-one or in small groups, but that's not the world we live in right now.
As I read books like this one, I have to keep in mind that my context as teacher and future department head is somewhat different from the authors'. A department head is not in the same relationship to the faculty as a manager is to employees. The relationship is more "first among equals" (my next leadership book to read...). Still, I find a lot of value in learning about how to be more self-aware and flexible in my interactions with others.
One thing I have noticed in my last few weeks preparing to move into the Big Office Downstairs: I view the actions of the administrators around me in a different light. Where I might have reacted immediately to some behavior, often negatively, I now am a bit more circumspect. What could make that seem the right thing to do? If nothing else, I am aware that I will soon be in the position of having to make such decisions, and it probably looks easier to do than it is. Kind of like playing Jeopardy! from the comfort of your own home... even Ken Jennings is an easy mark when you're sitting on your sofa.
Swapping roles is a great way to develop empathy for others. This is certainly true for students and teachers. I don't know how many students have told me, after having to teach a short course at work or to lecture in place of a traveling advisor, "I never knew how hard your job was!" Those students tend to treat their own instructors differently thereafter.
Playing many different roles on a software team can serve a similar purpose. Developers who have tested or documented software often appreciate the difficulties of those jobs more than "pure" developers. Of course, playing different roles can help software people do more than develop empathy for their teammates; it can help them build skills that help them do all the jobs better. Writing and testing code come to my mind first in this regard.
Empathy is a good trait to have. I hope to have more of it -- and put it to good use -- as a result of my new experience.
The August 2005 issue of Communications of the ACM will contain an opinion piece entitled "The Thrill Is Gone?" (pdf) by Sanjeev Arora and Bernard Chazelle, who are computer scientists at Princeton University. This article suggests that we in computing have done ourselves and the world a disservice in failing to communicate effectively the thrill of computer science -- not technology -- to the general public. On their view, this failure probably plays a role in our declining undergraduate enrollments, declining research funding, and a general lack of appreciation by the public for the importance of computer science. People, especially young people, should be excited by computing, but instead they are indifferent.
As I read Arora and Chazelle's piece, I recalled some of the recent themes in my writing here, including the irony that this is the most exciting time to study CS ever, the love of computing planted by Gödel, Escher, Bach, and the need for a CS administrator to be a teacher to a broader audience. But this article makes a larger point.
Arora and Chazelle do a very nice job of pointing out that the story of how computer science has shaped the technologies we all use can and should be told to the outside world. Fundamental ideas such as P-versus-NP, cryptography, and quantum computing are accessible to folks with a high school education or less when described in a way that strips away unnecessary complexity. The effect of computing on other disciplines such as physics, biology, and economics relies more on computer science ideas than on advanced engineering of hardware, but most people have no clue. How could they? We've never told them.
Reading this piece now has heightened my resolve to do something this fall that I've been saying I would do for at least two years: put together a short course on computing for the middle-school students at my daughters' school. I am thinking of using the Computer Science Unplugged materials created by a group of New Zealand computer scientists who evidently believe what I am preaching here -- but who have done me one better in writing instructional material that is accessible to elementary school students.
Another step I will take this fall is to look for opportunities to write an op-ed piece or two for the local paper. A few weeks back, our paper ran this story on software bugs in automobiles. At the time, I thought that this was a great opportunity for someone in my department to write a piece that might help the paper's readers better understand the issues involved, and maybe even garner a little good publicity for our academic programs. But I was busy, so... I considered asking a colleague to write the piece instead, but shied away. From now on, though, I plan to make the time or make the request. It has always been a responsibility of my academic position, but now perhaps more so.
Regular readers of this blog can probably guess my favorite line from the Arora and Chazelle piece. It's a theme that appears here often, often with a link back to Alan Kay:
Computer science is a new way of thinking.
But I also like the final line of the article, enough to quote it here:
We think it is high time that the computer science community should reveal to the public our best kept secret: our work is exciting science--and indispensable to the nation.
Friday was my wife's and my 16th wedding anniversary. To celebrate, we went out for lunch prepared by a well-known local chef, put on at the Waterloo Center for the Arts. We had the pleasure of dining with Cedar Falls author Nancy Price, her daughter, and her son-in-law. Ms. Price is most famous for her novel Sleeping with the Enemy, which was made into a major motion picture starring Julia Roberts. Her father, Malcolm Price, was president of my university back in the 1940s.
As is often the case, when these folks found out that I am a computer scientist, they eagerly shared their views on how programs help and hinder them in their jobs. All three have plenty of experience with computers, though as relatively non-technical users. The daughter and son-in-law hold Ph.D.s and write as a part of their jobs. The son-in-law, a botanist, claims to have been the first person in his department at Cal-Berkeley to produce his dissertation using a word processor.
Ms. Price herself writes, and so her computer is a big part of her professional life. She wasn't shy in telling me that, in her experience, software still doesn't support her work very well. "Available programs just don't do a very good job helping an author work with a four- or five-hundred page document."
The ensuing conversation led me to several questions.
What programs do authors use these days? Microsoft Word is the de facto standard, I suppose, as it seems to be the de facto word-processing standard everywhere these days. As a faculty member, I have long tilted at windmills in an attempt to eliminate the Word .doc as the standard interchange format. I have heard from colleagues who've written books using Word, and they tell stories of annoying little quirks. But these days many publishers seem to be using Word, at least as the interface with their authors.
I wasn't much help to my lunch partners in this regard, as I hang with a technical crowd that likes to get their hands dirty when writing. I wrote my dissertation using WordPerfect, and I did have to fight the system on issues such as pagination and figure placement. Some still use FrameMaker, though it seems to be losing mindshare. The academic standard has long been LaTeX, which has the author write in plain text at a rather low level. These days, software folks are as likely to roll their own authoring systems, writing in XML and creating custom solutions to their own problems, such as embedding source code in text. But that isn't an option for most writers, who just want a program that lets them do what comes naturally.
What should a novelist's workbench look like? What should it do? Googling "novelist software" brings up lots and lots of tools, mostly for Windows. I don't have any good sense for what, if anything, these programs offer an author that a standard word processor doesn't have. When I examine my own writing needs, I usually end up thinking about technical problems, such as embedding source code in text, that have no meaning to a poet or novelist. I guess I should find a project working with a writer to produce such a program -- that's always the best way for me to learn about the needs of a domain, by writing programs for the experts.
Who would produce such a product? Ms. Price offered an answer, based only on an anecdote from a writing colleague. She said that he had spent some time working with a software company in an effort to find out why there weren't better programs for writers out there. He had reported back, she related with a playful smile, that the programs were only mediocre because there was no money to be made -- authors simply weren't a big enough or rich enough market to drive a software niche.
This is the sort of cynical attitude that we software folks often encounter when we talk to users out in the world. I think it should bother us more than it sometimes does. Why are we willing, if not content, to let people think that we are unwilling to meet, or incapable of meeting, the needs of users?
Actually, a program for writers seems like the perfect sort of task for a small, specialty software house. My Google link earlier certainly indicates that a lot of small developers are in play. I doubt that the market could support an expensive program, but the right product might be able to attract enough volume to be lucrative. I don't imagine that this program would be a huge technical challenge to a software developer, which would leave plenty of energy for adapting the program and creating a wonderful interface.
One last note from lunch. Our dining partners expressed great surprise that I, a computer scientist, am a Mac user. "I didn't figure they'd let you use a Mac in a CS department," the botanist said. I explained that I've been a Mac man since graduate school in the 1980s, though I've also been a Unix user for just as long. Now that Mac OS is a Unix, I explained, my tools are quite in vogue. "Even the Linux geeks will talk to me now!" If I'd had more time and an inclination to ramble on, I'd've told them how so many high profile folks in the software world use Macs these days. But they didn't seem to be sold on the virtues of a Mac, and lunch time was winding down.
I enjoyed our lunch and conversation and was reminded that we computer scientists and software developers can find interesting questions almost everywhere we go, just by talking to users of the programs we write.
Now that I am taking on more and different sorts of administrative tasks, I'm beginning to feel the load of managing a large number of to-do items of various urgency, complexity, and duration. I know that there are "productivity apps" and "personal information managers" out there aimed at just this sort of problem, but I tend to be a low-overhead, plain-text kind of guy. So I'm exploring some lightweight tools that I can use to document, organize, and use all the information that is rushing over me these days. Right now, I'm looking at some simple wiki-like tools.
One tool I like a lot after a little experimentation is VoodooPad, a notepad that acts like a wiki. As a text editor, it feels just like TextEdit or NotePad, except that wiki names automatically create new pages and link to them. But it also supports lots of other operations, such as export to HTML (to publish pages to a server) and scripting (to add functionality for common actions). VoodooPad costs $24.95, though I've been exploring both free options: using full VoodooPad with a limit of 15 pages per document, and using VoodooPad Lite with unlimited pages but no scripting and limited export.
Oh, sorry for you non-Mac folks. VoodooPad is an OS X-only app.
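The core wiki trick, recognizing CamelCase words and turning them into links, is simple enough to sketch. Here is a rough Python illustration (the link formats and names are my own invention, not how VoodooPad actually works):

```python
import re

# Classic CamelCase wiki-name pattern: two or more capitalized word-runs.
WIKI_NAME = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

def link_wiki_names(text, known_pages):
    """Turn wiki names into links; unknown names become 'create page' links."""
    def repl(match):
        name = match.group(1)
        if name in known_pages:
            return f'<a href="{name}.html">{name}</a>'
        # Page doesn't exist yet: offer to create it, wiki-style.
        return f'<a href="edit?page={name}">{name}?</a>'
    return WIKI_NAME.sub(repl, text)
```

The pleasant surprise is how far this one regular expression gets you: recognition, linking, and page creation all fall out of a single substitution pass.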
At this point, I am perhaps more enamored of the idea of TiddlyWiki than of its usefulness to me. I really want my productivity app to be as simple as a text editor and support plain-text operations as much as possible (if only via export). But what a neat idea this is!
Finally, and lowest tech of all, I am finding many different ways to grow and use a Hipster PDA to meet my information management needs. However much I love and use my laptop, there are times when pen and paper are my preferred solution. With custom products like the do-it-yourself Hipster PDA planner, I can be up and running in minutes -- with no power adapter required.
This isn't (just) a running post, though it starts with a running story.
This morning, I did my last speed workout before the half marathon I will run in 11 days. I have not yet started back to real interval training since my marathon last October. Instead, I have been focusing on sustaining speed over longer distances. I would ordinarily have run 8 miles today at something just under 7:00/mile. With the half-marathon looming, I decided to test myself a bit and try to run 10 miles at 7:00/mile. In an ideal world, I would run that pace in my race. For my workout, I'd be slowing down a bit to target race pace but sustaining the pace for a longer stretch. It's good to train the body and mind to recognize and hold your target pace, so I was hoping to run all of my miles this morning in about the same time, between 6:50-6:54/mile.
Going from 8 miles to 10 at a challenging pace may not seem like all that much, but it is a challenge. Usually, I finish my 8-mile workout pretty well spent, often having to work hard over the last three miles to keep the pace up. Where would I get the extra energy, both physical and mental, to go as fast for longer?
In some ways, the physical part is the easier of the two. You can usually do more physically than you think. When a person tells me, "I can't even jog a block", I usually think they could. It might well hurt a bit, but they could do it. Their muscles are more powerful than they realize. Since getting into better shape, I have often been surprised that my body could do more than I expected of it.
The mental energy is another story. Even if my body can handle 10 miles at race pace, it will begin feeling the stress much sooner. I knew that my body would be talking back to me this morning -- "Why? Why?" -- by the sixth or seventh mile. Being able to keep the faith and keep the pace becomes a mental challenge, answering the calls of fatigue and pain with energy found elsewhere.
How did I manage? I think that the key was having a fixed and realistic target for the run. 10 miles isn't that much more than 8, so I know that my body can do it. Knowing that I only had to put together two more miles allowed my mind to adjust to the small increment before the run began. When I started to feel the workout in its seventh mile, my mind was ready... "Just a couple of more miles. Focus on the pace of individual laps. It's only two miles beyond what you usually do." Then, when I reached the 8-mile mark and my body mostly felt like stopping, I could say to myself, "Just a couple of more miles. You just did two tough ones. Will these really be any harder?" They weren't. My body could do it.
I don't actually conduct this internal dialogue all that much as I run, only in the moments when my focus shifts away from the lap I'm running to the seemingly vast number of laps that remain. I can't run those laps yet; all I have is this one.
I think it's a game of expectations. With reasonable expectations, the mind can help you do more than otherwise would be comfortable. An important part of reasonableness is duration--this only works well in the short term. Had I tried to run a full 13 miles this morning at race pace, my mind might have failed me. My body is still recovering from a recent 5K and a 12-mile weekend run, and so it would begin to complain with increasing fervor as the miles added up. And I doubt that my spirit would have been strong enough to win the battle, because doing 13 miles at race pace isn't reasonable for this day. But with a reasonable short-term expectation, I was able to handle crunch time with that short-term horizon in mind.
I've written about sustainable pace before, including about what happens when I try to go faster than a sustainable pace for too long and how software developers might train for speed. (I've even referenced sustainable pace in the context of a Bill Murray film.) But the idea has been on my mind again lately in a different way. The pace that is sustainable is closely tied to one's expectations about how long it must be sustained. This mental expectations game applies in running, but it also applies in other human endeavors, including software development.
A recent thread on the XP mailing list considered the proposition that "crunch mode" doesn't work. There didn't seem to be many folks defending the side that crunch mode can work. That's because most people were thinking about sustainable pace over the long run, which is what agile folks tend to mean when they talk about pace. They do so for good reason, because the danger in software development has usually been for companies to try to overdo it, to push developers too far and too fast, with the result being a decrease in software quality and ultimately the burn-out of the developers.
At least one person, though, argued that crunch mode can work. The gist of SirGilligan's claim is that a software team can go faster and still do quality work -- but only for a well-defined short term. He even used a running metaphor in defining such a crunch time: We are not 1/3 along the way, we are in the straight-a-way and heading for the finish line. How can developers win the expectations game in such a scenario? The end is clearly in sight:
Pending features are well defined. Order of feature implementation is defined. Team is excited for the chance to deliver. It is the team's choice and idea to crunch, not some manager's idea. We enter crunch mode! After we deliver everyone gets the following Friday and Monday off for a four-day weekend!
That sounds an awful lot like what a runner does when she races a marathon or half-marathon faster than she thinks otherwise possible. The pending goal is well-defined. The runner is excited to deliver. She chooses to push faster. And, after the race, she takes a few days off for rest. A party, of course, is always in order!
I think the great ones are able to manage their expectations in a way that allows them to do more than usual for a short while. The good news for the rest of us is that we can learn to do the same, which gives us the ability to succeed in a wider variety of circumstances. Just always keep in mind: You can't keep going faster indefinitely.
UPDATE 08/03/06: Near the end of this entry, I had originally used a cartoon image for comic effect, with a link to the author's web site. However, I had not obtained legal permission to use the image. The author asked me to remove the cartoon from the entry. I have replaced it with a description of the gag.
Before I applied to be department head, I was discussing the idea with some colleagues over dinner at the spring planning meeting for OOPSLA 2005. During that conversation, Brian Marick asked me, "What kind of leadership can a department head provide?"
The question shouldn't have caught me off-guard, but it did. I'd been so busy thinking about the details of administration, the politics of my own department, and the mechanics of applying that I hadn't spent enough time thinking about the big picture, about the universe of what a head can and should accomplish.
I gave some mumble-mumble answer to Brian at the time, but afterward that question consumed a lot of my energy over the next week or so, as I wrote up application materials. How might I answer the question now? Here are some of my thoughts. As always, I'd love to hear yours.
What kind of leadership can a department head provide?
Within the department itself, the head's primary jobs are to remove friction and facilitate discussion. Removing friction means making sure that the department runs in such a way that faculty can do their jobs without interference. The head must take care of paperwork, routine interactions with students and the university, and any other issue that distracts the faculty. The head should facilitate discussion so that the faculty can set the course for the department. In order to accomplish this, the head must work to create an environment in which all feel comfortable participating in and contributing to the department's welfare. In this regard, the head's job is to help the faculty do its job, only better.
At the boundary of the department and the world, the head's job is one of agency. First, the head advocates for the department at the college, university, and regents levels, ensuring that the faculty's concerns are heard and arguing its case for the resources it needs to achieve its goals. Second, the head represents the department to students, parents, the university community, the civic community, and the computer science community. (As I wrote recently, I realize now that a non-trivial component of this representative role can be thought of in terms of the department head as teacher.) To be an effective advocate and representative, the head must care deeply for the department and its purpose. I do.
Finally, I believe that a department head can provide a form of personal leadership, by setting an example of open communication, transparent decision making, and respect for others. A good head does not settle for a passive stewardship of duties but instead seeks actively to help the faculty define and achieve its goals.
As I thought about applying for the position, and then went through the process, I sometimes wondered if I were putting myself in the same position as the cavemen in a classic comic I once saw. Two cavemen have set a trap for a saber-toothed tiger: a box sitting over a hunk of meat and propped up by a stick. When the tiger comes to partake in the treat, the cavemen will pull a rope tied to the stick and drop the box down on top of the unsuspecting diner. The visual punch line: the cavemen are standing under the box, too.
What was I getting myself into? But ultimately I acted with confidence, because deep down I believe that leading my department is a worthwhile task, an opportunity to help my colleagues achieve something honorable.
For my assignment as department head, I have adopted the following quote as my credo:
The first responsibility of a leader is to define reality. The last is to say thank you. In between the two, the leader must become a servant and a debtor.
-- Max DePree, Leadership Is an Art
(I found it in a wonderful little book I first learned about at Ward's wiki, on a page about wonderful little books.)
And, besides, if I do find myself trapped under a box with the tiger that is my faculty, I can take solace that my current appointment as Keeper of the Box runs for only three years...
As I think I've mentioned here before, I am a big fan of Electron Blue, a blog written by a professional artist who has taken up the study of physics in middle age. Her essays offer insights into the mind of an artist as it comes into contact with the abstractions of physics and math. Sometimes, the two mindsets are quite different, and at other times they are surprisingly in tune.
I just loved this line, taken from an essay on some of the similarities between a creator's mind and a theoretical scientist's:
When I read stuff about dark energy and string theory and other theoretical explorations, I sometimes have to laugh, and then I say, "And you scientists think that we artists make things up!"
Anyone who has done graduate research in the sciences knows how much of theory making is really story telling. We in computer science, despite working with ethereal computation as our reality, are perhaps not quite so make-believe as our friends in physics, whose efforts to explain the workings of the physical world long ago escaped the range of our senses.
Then again, I'm sure some people look at XP and think, "It's better to program in pairs? It's better to write code with duplication and then 'refactor'? Ri-i-i-i-ght." What a story!
My first race of the season was a success. Last night I ran the Loop the Lakes 5K here in Cedar Falls. The conditions weren't ideal at the start -- 80+ degrees Fahrenheit and muggy, with a few raindrops -- but I knew that I had a chance to improve on my personal 5K best. The last few months I've been doing speed workouts of 8 miles at 6:40-7:00 minutes per mile, and my best 5K time was 21:26. But with the weather and the hills of an outdoor run, I knew that things might not go as smoothly as a track workout.
Any worries were quickly erased. I ran Mile 1 in 6:28, my fastest recorded mile ever. Then Mile 2 passed in 6:34, and I was well on my way to a PR.
Then came Mile 3. The raindrops had turned to a heavier sprinkle, but worse was the stiff head wind that greeted us at the beginning of the mile, next to the big lake. After 2/3 of a mile or so, we turned out of the wind... right onto a steep incline to finish the race. Everyone slowed down a bit, and I completed the mile in 7:08. After a sprint over the last 1/10 of a mile, I reached the finish line in 20:50. A fine time, which I am most happy with!
I even won a prize. I finished 2nd in the 40-49 age group. The guy who came in first beat me by only 4 seconds, and we had run within reach of each other the whole race. I've won two age group prizes before, but they were flukes -- my slower times were good enough only because my age group didn't have many participants. This prize was legitimate; the times we ran in 40-49 were in reasonable range of what the other age group winners ran.
My speed goals now turn to long, sustained pace. To run a 3:30-hour marathon, I need to run an 8:00 minute/mile pace. I'd like my next few months of training to turn the pace I now find comfortable for 8 miles into a comfortable but somewhat slower 25 miles. The mindset for this sort of training is much different for me. I can't push myself to my limits too soon, but instead must start slower with the idea that this will become my limit in a few miles -- and then keep at it when it gets hard. That's the challenge in training for a marathon, at least for me. I'm beyond the point where long miles bother me much, but setting the right pace for the long runs is still tough.
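For the record, the arithmetic works out. A quick throwaway sketch in Python, purely for illustration:

```python
def pace_per_mile(hours, minutes, miles):
    """Average pace, formatted 'M:SS' per mile, needed to cover `miles` in the given time."""
    total_seconds = (hours * 60 + minutes) * 60
    sec_per_mile = round(total_seconds / miles)
    m, s = divmod(sec_per_mile, 60)
    return f"{m}:{s:02d}"
```

A 3:30 finish over the marathon's 26.2 miles comes out to just a shade over 8:00 per mile.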
But for a day or so I'll enjoy my new PR.
Last week, I posted a note on a cool consequence of Smalltalk being written in Smalltalk, the ability to change the compiler to handle new kinds of numeric literals. Here is another neat little feature of Smalltalk: You can quit the interpreter in the middle of executing code, and when you start back up the interpreter will finish executing the code!
Consider this snippet of code:
Smalltalk snapshot: true andQuit: true.
... code to execute at next start-up, such as:
PWS serveOnPort: 80 loggingTo: 'log.txt'
If you execute this piece of code, whether as a method body or as stand-alone code in a workspace, it will execute the first statement -- which saves the image and quits. But in saving the image, you save the state of the execution, which is precisely this point in the code being executed. When you next start up the system, it will resume right where it left off, in this case with the message to start up a web server.
How I wish my Java environments did this...
This is another one of those things that you want Java, Perl, Python, and Ruby programmers to know about Smalltalk: it isn't just a language; it is a living system of objects. When you program in Smalltalk, you don't think of building a new program from scratch; you think of molding and growing the initial system to meet your needs.
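For contrast, here is roughly what it takes to fake even a small slice of this in a conventional language. This Python sketch (the checkpoint file name and state layout are my own invention) records coarse progress between runs; Smalltalk's image snapshot, by contrast, preserves the entire execution state for free:

```python
import json
import os

CHECKPOINT = "state.json"  # hypothetical checkpoint file

def next_step():
    """Where to resume: read the checkpoint if one exists, else start from 0."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next"]
    return 0

def run_steps(steps, start=0):
    """Run the remaining steps, recording progress after each one."""
    results = []
    for i in range(start, len(steps)):
        results.append(steps[i]())
        # Persist which step we finished, so a restart can pick up here.
        with open(CHECKPOINT, "w") as f:
            json.dump({"next": i + 1}, f)
    return results
```

Note how little this buys compared to the Smalltalk version: only a step counter survives the restart, not the live objects, the call stack, or the point of execution.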
This example is from Squeak, the open-source Smalltalk I use when I have the chance to use Smalltalk. I ran across the idea at Blaine Buxton's blog, and he found the idea in a Squeak Swiki entry for running a Squeak image headless. (A "headless image" is one that doesn't come up interactively with a user. That is especially useful for running the system in the background to drive some application, say a web server.)
Some folks have expressed concern or even dismay that my becoming department head will pull me away from teaching. Administration, with its paperwork and meetings and bureaucracy, can't be as much fun as the buzz of teaching. And there's no doubt that teaching one course instead of three will create a different focus to my days and weeks.
But the more I prepare for my move to the Big Office Downstairs, the more I realize that -- done well -- a head's job involves a fair amount of teaching, too, only in a different form and to broader audiences.
To address the problem of declining enrollments in our majors, we as a department need to educate students, parents, and high school counselors that this is the best time ever to major in computing. To ensure that the department has access to the resources it needs to do its job effectively, we as a department must educate deans, provosts, presidents, and state legislatures about the nature of the discipline and its needs. And that's just the beginning. We need to help high schools know how to better prepare students to study computer science at the university. We need to educate the general public on issues where computing intersects the public interest, such as privacy, computer security, and intellectual property.
These opportunities to teach are all about what computing is, does, and can be. They aren't one of those narrow and somewhat artificial slices of the discipline that we carve off for our courses, such as "algorithms" or "operating systems". They are about computing itself.
The "we"s in the paragraph above refer to the department as a whole, which ultimately means the faculty. But I think that an important part of the department head's job is to be the "royal we", to lead the department's efforts to educate the many constituencies that come into contact with the department's mission -- suppliers, consumers, and everyone in between.
So, I'm learning more about the mindset of my new appointment, and seeing that there will be a fair bit of education involved after all. I'm under no illusion that it will be all A-ha! moments, but approaching the job with an educator's mind should prepare me to be a more effective leader for my department. The chance to educate a broader audience about computer science and its magic should be a lot of fun. And, like teaching anything else, the teaching itself should help me to learn a lot -- in this case, about my discipline and its role in the world. Whether I seek to remain in administration or not, in the long run that should make me a better computer scientist.
How's that for an unlikely title?
In the Bible, Jesus takes some heat for befriending sinners and tax collectors. I'm not sure that the modern view of the tax collector is much different from the common one in Jesus's time. We don't have to deal with individuals overcollecting and skimming the difference, but we do have to deal with the IRS, known in the public mind mostly for a bloated, byzantine tax code and bad phone advice.
But I have a good story to tell about the IRS.
It seems that I made a mistake on my federal tax return this year, one that cost me $2000. The IRS noticed, corrected the error, and increased my refund accordingly.
You see, I am one of those dinosaurs who still does his own taxes by hand -- pencil and paper, with folders of documents. I'm well organized and have some training in accounting, and I still enjoy the annual ritual of filling out the tax forms.
In over twenty years, I do not think I have made any but the most trivial errors on a tax return. I check and double-check my work to be sure it's right before I submit. There may have been cases where I was not aggressive enough in claiming a deduction, or maybe too aggressive (though that's less likely). But the numbers I submitted were pretty much the right ones.
This year, though, I forgot to claim my Child Tax Credit, on Line 51 of Form 1040. I correctly recorded my daughters' information on Page 1, but somehow wrote in $0 for credit, when it should have been $2000. That reduced my refund by, you guessed it, $2000. I don't honestly remember how I made this mistake, whether on my work or in transcribing the final answers. But it was there in black and white.
I am a bit embarrassed to admit this to readers who may now question my reliability on matters of great professional import. But in the interest of fairness, I want to give credit where credit is due. Most of us take the time to complain when the world treats us ill, but we usually forget to take the time to rejoice, at least publicly, when the world treats us well. The result is that an organization like the IRS, which has the thankless job of collecting our hard-earned money to support the workings of the republic, ends up with a thankless reputation, too. But today I can say "thank you" for a job well done.
I'm glad that the IRS did its job well and rescued me from a costly oversight. Apple is probably happy, too, because the larger refund means that I can now afford a higher-end PowerBook than I was originally planning to buy!
Modern invention has been a great leveler.
A machine may operate far more quickly
than a political or economic measure to
abolish privilege and wipe out the distinctions
of class or finance.
-- Ivor Brown, The Heart of England
I finally read Paul Graham's newest essay, Hiring is Obsolete, this weekend. I've been thinking about the place of entrepreneurship in my students' plans recently myself. When I gave my interview presentation to our faculty when applying to be department head, I talked about how -- contrary to enrollment trends and popular perception -- this is the best time ever to go into computer science. One of the biggest reasons is the opportunity for ambitious students to start their own companies.
Philip Greenspun related an illustrative story in his blog:
The engineering staff at Google threw a big party for Silicon Valley nerds last Thursday night [May 5], ...
Larry Page, one of the founders, gave an inspiring talk about what a great time this is to be an engineer. He recalled how at one point Google had five employees and two million customers. Outside of Internet applications it is tough to imagine where that would be possible. Page also talked about the enjoyment of launching something, getting feedback from users, and refining the service on the fly. ...
This sounds like the same sort of experience that Graham touts from ViaWeb.
Admittedly, not every start-up will be a ViaWeb or a Google, but that's not the point. The ecosphere is rife with opportunities for small companies to fill niches not currently being served by larger companies. Not all such companies are doing work specifically for the web, but the web makes it possible for them to make their products visible and available. The web reduces a company's need for some of the traditional trappings of a business, such as a large, dedicated sales staff.
The sort of entrepreneurship Graham touts is more common in Silicon Valley and the Route 128 corridor, and more generally in large metropolitan areas, but Graham's advice applies even here in the Great Midwest -- no, not "even here", but especially here. The whole point of the web's effect on business is that almost anyone almost anywhere can now create a business that does something no one else is doing, or does something others are doing but better, and make a lot of money. Ideas and hard work are now more important than location or who you know.
UNI students have had a few successes in this regard. I keep in close touch with one successful entrepreneur who is a former student of ours. When he was a student here, he already exhibited the ambition that would lead to his business success. He read broadly on the web and software and technology. He asked questions all the time. By the time he left UNI, he had already started a web hosting company with a vision to do things differently and better. I love to visit his company, give whatever advice I can still give, and learn from him and what he is doing.
Back in the old days, he might well have moved to New York or San Francisco to start his first company -- because that's "where the action was". I'm sure that some people told him that he should move to Chicago or at least Minneapolis to have a chance to succeed. But he started his company right here in little ol' Cedar Falls, Iowa, and did just fine. He can enjoy the life available in a small city in a relatively rural part of America. His company's building is ten feet from a paved bike trail that circles a small lake and connects into a system of miles and miles of trails. His margins can be lower because his costs of doing business are lower. And working with the growing tech community here he can dream as big as he likes and is willing to work.
This guy hasn't made it big like Graham or Page or Gates, but he is one example of the bountiful opportunities available to students studying at schools like UNI throughout the world. And he could never have learned as much or done as much if he had followed the steady flow of our students to the big-box insurance companies and service businesses that hire most of our students.
How can we -- instructors and the world at large -- help students appreciate that the "cage is open", as Graham describes the Brave New World of business? The tendency of most university professors is to offer another course :-). When I was a grad student at Michigan State, I sat in on a course during my last quarter that was being offered jointly by the Colleges of Engineering and Business to teach some essential skills of the entrepreneurial engineer. I wish that it had come earlier in my studies because by then my mind was set on either going corporate (AI research with a big company like Ford or Price Waterhouse) or going academic.
There is certainly some value in incorporating this kind of material into our curricula and maybe even offering stand-alone courses with an entrepreneurial bent. But this transition in how the world works is more about attitude and awareness than the classroom. Students have to think of starting a company in the same way they think of going to work for IBM or going to grad school, as a natural option open to everyone. Universities will serve students better by making starting their own companies a standard part of how we talk about their futures and of the futures we expose them to.
There are some particular skills that universities need to help students develop, beyond what we teach now. First and foremost is the ability to identify problems with economic potential. We are pretty good at helping students learn to identify cool problems with academic potential, because that's what we do when we do our own research. But a problem of basic academic interest rarely results in a program or service that someone would pay for, at least not enough someones to make it reasonable as the basis for a commercial venture. Graham gives some advice in his various essays on this topic, and the key almost always comes down to "Know users." Only by observing carefully people who are doing real work are we likely to stumble upon those inefficiencies that they would be willing to pay to make go away. Eric Sink has also written some articles useful to folks who are thinking about starting their own software house.
The other things we teach students are still important, too. A start-up company needs programmers, people who know how to develop software well and who have strong analytic skills. "The basics" such as data structures, algorithms, and programming languages are still the basics. Students just need to have a mindset in which they look for ways to use these skills to solve real problems that real users have -- problems that no one else is solving for them yet.
Hiring is Obsolete has more to say than just that students should consider being entrepreneurs. In particular, Graham talks about the opportunities available to large companies in an ecosphere in which start-ups do initial R&D and identify the most capable software developers. But I think that these big companies will take care of themselves. My interest is more in what we can do better in the university, including what we can do to get folks to see what a wonderful time this is to study computer science.
I think I should take my next sabbatical working for one of my former students...
I recently made a bittersweet decision: I am not going to renew my membership in AAAI. The AAAI is the American Association for Artificial Intelligence, and I have been a member since 1987, when I joined as a graduate student.
Like many computer scientists who grew up in the '70s and '80s, I was lured to computing by the siren song of AI. Programs that could play chess, speak and understand English sentences, create a piece of music; programs that could learn from experience... so many tantalizing ideas that all lay in the sphere of AI. I wanted to understand how the mind works, and how I could make one, if only a pretend one in the silicon of the rather inelegant machines of the day.
I remember when I first discovered Gödel, Escher, Bach and was enchanted even further by the idea of self-reference, by the intertwining worlds of music, art, mathematics, and computers that bespoke a truth much deeper than I had ever understood before. The book took me a whole summer to read, because every few pages set my mind whirling with possibilities that I had to think through before moving on.
I did my doctoral work in AI, at the intersection of knowledge-based systems and memory-based systems, and reveled in my years as a graduate student, during which the Turing Test and Herb Simon's sciences of the artificial and cognitive science were constant topics of discussion and debate. These ideas and their implications for the world mattered so much to us. Even more, AI led me to study psychology and philosophy, where I encountered worlds of new and challenging ideas that made me a better and more well-rounded thinker.
My AI research continued in my early years as an assistant professor, but soon my interests and the needs of my institution pulled me in other directions. These days, I think more about programming support tools and programming languages than I do AI. I still love the AI enterprise but find myself on the outside looking in more often than not. I still love the idea of machine learning, but the details of modern machine learning research no longer enthrall me. Maybe the field matured, or I changed. The AI that most interests me now is whatever technique I need to build a better tool to support programmers in their task. Still, a good game-playing program draws my attention, at least for a little while...
In any case, the idea of paying $95 a year to receive another set of printed magazines that I don't have time to study in depth seems wasteful of paper and money both. I read some AI stuff on the web when I need or want to, and I keep up with what my students are doing with AI. But I have to admit that I'm not an AI scientist anymore.
For some reason, that is easier to be than to say.
I have had this link and quote in my "to blog" folder for a long time:
... the one thing that a Ruby (or Python) programmer should know about Smalltalk is, it's all written in Smalltalk.
But I wanted to have a good reason to write about it. Why does it matter to a Smalltalker that his language and environment are implemented in Smalltalk itself?
Today, I ran across a dynamite example that brings the point home. David Buck
... was working for a company once that did a lot of work with large numbers. It's hard, though, to write 45 billion as 45000000000. It's very hard to read. Let's change the compiler to accept the syntax 45b as 45 billion.
And he did it -- by adding six lines of code to his standard working environment and saving the changes. This is the sort of openness that makes working in Java or most any other ordinary language feel like pushing rocks up a mountain.
Lisp and Scheme read macros give you a similar sort of power, and you can use regular macros to create many kinds of new syntax. But for me, Smalltalk stands above the crowd in its pliability. If you want to make the language you want to use, start with Smalltalk as your raw material.
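The spirit of Buck's six-line change can be sketched outside Smalltalk, too. Here is a rough approximation in Python; since we can't patch Python's compiler from inside a running image the way a Smalltalker can, this sketch preprocesses the source text instead, and the particular suffix letters are my own assumption for illustration:

```python
import re

# Rewrite numeric literals like 45b into 45000000000 before the code
# is executed. The k/m/b suffixes are hypothetical, chosen to mimic
# David Buck's "45 billion" example.
SUFFIXES = {'k': 10**3, 'm': 10**6, 'b': 10**9}

def expand_literals(source):
    """Expand suffixed literals: 45b -> 45000000000, 2m -> 2000000."""
    def repl(match):
        digits, suffix = match.group(1), match.group(2)
        return str(int(digits) * SUFFIXES[suffix])
    return re.sub(r'\b(\d+)([kmb])\b', repl, source)

program = "total = 45b + 2m"
namespace = {}
exec(expand_literals(program), namespace)
print(namespace['total'])  # 45002000000
```

The contrast with Smalltalk is the point: there, the change lives in the same environment as everything else and applies everywhere at once; here, it only works on source that happens to pass through this one function.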
I ran into this quote over at Ben Hyde's blog:
Customers have a tendency to become like the kind of customers you treat them.
Ben related the quote as a commentary on trust in commerce. (Trust and social relationships are ongoing themes of his blog.) He notes that he has observed this truth in many situations. I have, too. Indeed, I think this truth applies in almost all human relationships.
(Like all generalizations, this one isn't foolproof, so feel free to prefix each of the following claims with "by and large" or your favorite waffle words.)
Parents and Children
Children grow into the people you expect them to be. The best sort of discipline for most children is to create an environment in which children know what your expectations are, and then live consistently in that way. Nagging youngsters doesn't work; they come to expect you to nag before they know you care about something. Yelling and screaming don't work, either; they come to think that you don't believe they can behave without being verbally assaulted. If you simply set a standard and then live as if you expect them to meet the standard, they will. When they don't, don't resort to needy negative reinforcement. Usually they know they've fallen short and strive to do better the next time.
Teachers and Students
Students become what their teachers expect of them, too. If you act as if they are not trustworthy, say, by creating elaborate rules for late work, cheating, and grading, they will soon look for ways to game the system. If you act as if they don't respect class time, say, by wasting it yourself through lack of preparation or rambling digression, they will soon come not to value their time in class.
If you set a high standard and expect them to learn and achieve, they usually will. If you trust them with masterpieces, they will come to value masterpieces.
Developers and Users
The quote applies to all sorts of developer/user relationships. If software developers don't trust their clients, then their clients will start to look out for themselves at the developer's expense. If an API designer acts as if programmers are not smart or reasonable enough to use the API wisely, and so creates elaborate rituals to be followed to ensure that programmers are doing the right thing, then programmers will look for ways to game the API. The result is hacks that confirm the API designer's expectations.
Agile methods place a high premium on developing a mutually beneficial relationship between the client and the programmer. The result is that programmers and clients feel free to be what they should be to one another: partners in creating something valuable.
Managers and Team Members
This truth is a good thing to keep in mind for someone embarking on an administrative or managerial position. When "bosses" treat their "employees" as adversaries in a competition, the employees soon become adversaries. They do so almost out of necessity, given the power imbalance that exists between the parties. But if a manager approaches the rest of the team with openness, transparency, and respect, I think that most members of the team will also respond in kind.
Husbands and Wives
All of the relationships considered above are hierarchical or otherwise imbalanced. What about peer relationships? I think the assertion still holds. In my many years of marriage, I've noticed that my wife and I often come to behave in the way we think our spouse expects. When one of us acts as if the other is untrustworthy, the other comes to protect his or her own interest. When one of us acts as if the other is incapable of contributing to a particular part of our lives together, the other stops caring to contribute. But when we act as if we are both intelligent, trustworthy, caring, and respectful, we receive that back from each other.
Given its wide range of applicability, I think that the truism needs to be restated more generally. Perhaps:
People tend to become like the kind of people you treat them to be.
Or maybe we can restate it as a new sort of Golden Rule:
Treat people like the kind of people you want -- or expect -- them to be.
Or perhaps "Do unto others as you expect them to be."
Leave it to the guys from Google to offer the Summer of Code program for students. Complete an open-source project through one of Google's collaborators, and Google will give you a $4500 award. The collaborators range from relatively large groups such as Apache and FreeBSD, through medium-sized projects such as Subversion and Mono, down to specific software tools such as Jabber and Blender. Of course, the Perl Foundation, the Python Software Foundation, and Google itself are supporting projects. You can even work on open-source projects in Lisp for LispNYC, a Lisp advocacy group!
The program bears a strong resemblance to the Paul Graham-led Summer Founders Program. But the Summer of Code is much less ambitious -- you don't need to launch a tech start-up; you only have to hack some code -- and so is likely to have a broader and more immediate effect on the tech world. Of course, if one of the SFP start-ups takes off like Google or even ViaWeb, then the effects of the SFP could be much deeper and longer lasting.
This is another slick move from the Google image machine. A bunch of $4500 awards are pocket change to Google, and in exchange they generate great PR and establish hefty goodwill with the open-source organizations participating.
From my perspective, the best part of the Summer of Code is stated right on its web page: "This Summer, don't let your programming skills lie fallow...". I give this advice to students all the time, though they don't often appreciate its importance until the fall semester starts and they feel the rust in their programming joints. "Use it, or lose it" is trite but true, especially for nascent skills that are not yet fully developed or internalized. Practice, practice, practice.
The Summer of Code is a great chance for ambitious and relatively advanced students to use this summer for their own good, by digging deep into a real project and becoming better programmers. If you feel up to it, give it a try. But even if you don't, find some project to work on, even if it's just one for your amusement. Perhaps I should say especially if it's just one for your amusement -- most of the great software in this world was originally written by people who wanted the end result for themselves. Choose a project that will stretch your skills a bit, one that will force you to improve in the process. Don't worry about getting stuck... This isn't for class credit, so you can take the time you need to solve problems. And, if you really get stuck, you can always e-mail your favorite professor with a question. :-)
Oh, if you do want to take Google up on its offer, you will want to hurry. Applications are due on June 14.
A couple of days ago, blog reader Mike McMillan sent me a link to Stanley Fish's New York Times op-ed piece, Devoid of Content. Since then, several of my CS colleagues have recommended this article. Why all the interest in an opinion piece written by an English professor?
The composition courses that most university students take these days emphasize writing about something: current events, everyday life, or literature. But Fish's freshman composition class does something much different. He asks students to create a new language, "complete with a syntax, a lexicon, a text, rules for translating the text and strategies for teaching your language to fellow students". He argues that the best way to learn how to write is to have a deep understanding of "a single proposition: A sentence is a structure of logical relationships." His students achieve this understanding by having to design the mechanisms by which sentences represent relationships, such as tense, number, manner, mood, and agency.
Fish stakes out a position that is out of step with contemporary academia: Learning to write is about form, not content. Content is not only not the point; it is a dangerous delusion that prevents students from learning what they most need.
Content is a lure and a delusion, and it should be banished from the classroom. Form is the way.
Fish doesn't say that content isn't important, only that it should not be the focus of learning to write. Students learn content in their science courses, their social science courses, and their humanities courses -- yes, even in their literature courses.
(I, for one, am pleased to see Fish distinguish the goals of the composition courses taught in English departments from the goals of the literature courses taught there. Too many students lose interest in their comp courses when they are forced to write about, oh, a poem by Edna St. Vincent Millay. Just because a student doesn't connect with twentieth-century lyric poetry doesn't mean that he shouldn't or can't learn to write well.)
So, how is Fish's argument relevant to a technical audience? If you have read my blog much, you can probably see my interest in the article. I like to read books about writing, to explore ways of writing better programs. I've also written a little about the role form plays in evaluating technical papers and unleashing creativity. On the other side of the issue, though, I have written several times recently about the role of real problems and compelling examples in learning to program. My time at ChiliPLoP 2005 was spent working with friends to explore some compelling examples for CS1.
In the context of this ongoing discussion among CS educators, one of my friends sloganized Fish's position as "It's not the application, stupid; it's the BNF."
So, could I teach my freshman computer programming class after Fish's style? Probably not by mimicking his approach note for note, but perhaps by adopting the spirit of his approach.
We first must recognize that freshman CS students are usually in a different intellectual state from freshman comp students. When students reach the university, they may not have studied tense and mood and number in much detail, but they do have an implicit understanding of language on which the instructor can draw. Students at my university already know English in a couple of ways. First, they speak the language well enough to participate in everyday oral discourse. Second, they know enough at least to string together words in a written form, though perhaps not well enough to please Fish or me.
My first-year programming students usually know little or nothing about a programming language, either as a tool for simple communication or in terms of its underlying syntactic structures. When Fish's students walk into his classroom, he can immediately start a conversation with them, in a rich language they share. He can offer endless example sentences for his students to dissect, to rearrange, to understand in a new way. These sentences may be context-free, but they are sentences.
In a first-year programming course, we instructors typically have to spiral our dissection of programs with the learning of new language features and syntax. The more complex the language, the wider and longer the spiral must be.
Using a simple computer language might make an approach like Fish's work in a CS1 course. I think of the How to Design Programs project in these terms. Scheme is simple enough syntactically that the authors can rather quickly focus on the structure of programs, much as Fish focuses on the structure of sentences. The HtDP approach emphasizes form through its use of BNF definitions and "design recipes". However, I don't get the sense that HtDP removes content from the classroom so much as it removes it from the center of attention. Felleisen et al. still try to engage their students with examples that might interest someone.
So, I think that we may well be able to teach introductory programming in the spirit of Fish's approach. But is it a good idea? How much of the motivation to learn how to program springs from the desire to do something particular? I do not know the answer to this question, but it lies at the center of the real problems/compelling examples discussion.
In an unexpected twist of fate, I was thumbing through Mark Guzdial's new book, Introduction to Computing and Programming with Python: A Multimedia Approach, and read the opening sentences of its preface:
Research on computing education clearly demonstrates that one doesn't just "learn to program." One learns to program something [5,20], and the motivation to do that something can make the difference between learning and not learning to program.
(The references are to papers on situated learning of the sort Seymour Papert has long advocated.)
I certainly find myself in the compelling problems camp these days and so am heartened by Guzdial's quote, and the idea embodied in his text. But I also feel a strong pull to find ways to emphasize the forms that will help students become solid programmers. That pull is the essence of my interest in documenting elementary programming patterns and using them to gain leverage in the classroom.
Regardless of how directly we might use Fish's approach to teach first-year courses in programming, I am intrigued by what seems to me to be a much cleaner connection between his ideas and the CS curriculum, the traditional Programming Languages course! I'll be teaching our junior/senior level course in languages this fall, and it seems that I could adopt Fish's course outline almost intact. I could walk in on Day 1 and announce that, by the end of the semester, each group of students will have created a new programming language, complete with a syntax, a set of primitive expressions, rules for translating programs, and the whole bit. Their evolving language designs would serve as the impetus for exploring the elements of language at a deeper level, touching all the traditional topics such as bindings, types, scope, control structures, subprograms, and so on. We could even implement our growing understanding in a series of increasingly complex interpreters that extract behavior from syntactic expressions.
Actually, this isn't too far from the approach that I have used in the past, based on the textbook Essentials of Programming Languages. I'll need to teach the students some functional programming in Scheme first, but I could then turn students loose to design and implement their own languages. I could still use the EOPL-based language that I call Babel as my demonstration language in class.
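To give a flavor of what those increasingly complex interpreters look like, here is a minimal sketch of the "interpreter over syntax" idea, rendered in Python rather than EOPL's Scheme for brevity. The tiny language (literals, variables, difference expressions, and let) and all the names are hypothetical illustrations, not the actual Babel language from class:

```python
# A toy interpreter in the EOPL spirit: extract behavior from a
# syntactic expression, represented here as nested tuples.

def evaluate(expr, env):
    """Evaluate a tiny expression language against an environment dict."""
    kind = expr[0]
    if kind == 'lit':                     # ('lit', 5)
        return expr[1]
    if kind == 'var':                     # ('var', 'x')
        return env[expr[1]]
    if kind == 'diff':                    # ('diff', e1, e2)
        return evaluate(expr[1], env) - evaluate(expr[2], env)
    if kind == 'let':                     # ('let', 'x', rhs, body)
        _, name, rhs, body = expr
        new_env = dict(env)               # extend the environment
        new_env[name] = evaluate(rhs, env)
        return evaluate(body, new_env)
    raise ValueError(f"unknown expression: {expr!r}")

# let x = 7 in x - (3 - 1)
program = ('let', 'x', ('lit', 7),
           ('diff', ('var', 'x'), ('diff', ('lit', 3), ('lit', 1))))
print(evaluate(program, {}))  # 5
```

Each new language feature the students design -- types, scope rules, control structures -- becomes one more clause in an interpreter like this, which is exactly how the course's series of interpreters grows.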
School's barely out for the summer, and I'm already jazzed by a new idea for my fall class. I hope I don't peak too soon. :-)
As you can see, there are lots of reasons that Fish's op-ed piece has attracted the attention of CS folks. It's about language and learning to use it, which is ultimately what much of computer science and software development are about.
Have you heard of Stanley Fish before? I first ran into him and his work when I read a blog entry by Brian Marick on the role the reader plays in how we write code and comments. Brian cited Fish's work on reader-response criticism and hypothesized an application of it to programming. You may have encountered Fish through Brian's article, too, if you've read one of my oldest blog entries. I finally checked the book by Fish that Brian recommended so long ago out of the library today -- along with another Fish book, The Trouble with Principle, which pads my league-leading millions total. (This book is just for kicks.)