November 2004 will enter the books as my least-blogged month since starting Knowing and Doing back in July. I'm not all that surprised, given that:
When I started Knowing and Doing, I knew that regular blogging would require strict discipline and a little luck. Nothing much has changed since then. I still find the act of writing these articles of great value to me personally, whether anyone reads them or not. That some folks do find them useful has been both gratifying and educational for me. I learn a lot from the e-mails I receive from Faithful Readers.
I reached the 100-post plateau ten days ago with this puff piece. I hope that most of my posts are more valuable than that! In any case, reaching 1000 entries will be a real accomplishment. At my current pace, that will take me three more years...
While on this nostalgic kick, I offer these links as some obligatory content for you. Both harken back to yesteryear:
On Saturday, Jason Yip started a conversation about the danger of accepting blame reflexively. He fears that accepting the blame preemptively "reinforc[es] the mistaken belief that having someone take blame is somehow important." Fixing the problem is what matters.
This is, of course, true. As Jerry Weinberg says in The Secrets of Consulting:
The chances of solving a problem decline the closer you get to finding out who was the cause of the problem.
When people focus on whose fault it is, they tend to become defensive, to guard their egos, even at the expense of fixing the problem. This is a very human response, and one that is hard to control intellectually. The result is that both the problem and its systemic cause remain, pretty much guaranteeing more problems in the future. If we can depersonalize the problem, focusing on what is wrong and how we can fix it, then there is hope of both solving the problem and learning how not to cause similar problems in the future.
I think Jason is right that people can use "It's my fault" as a way to avoid discussion and thus be as much of an obstacle to growth and improvement as focusing on placing blame. And someone who *always* says this is either trying to avoid confrontation or making way too many mistakes. :-)
But as one commenter posted on his site, saying "I made a mistake. Let's fix it, and let's find a way to avoid such problems in the future" is a welcome behavior, one that can help individuals and teams grow. The real problem is saying "I made a mistake" when you didn't make a mistake or don't want to discuss what you did.
I have been fortunate never to have worked at a place where people played The Blame Game, at least not too destructively. Walking on eggshells all day is no way to live one's life, and not the sort of environment in which a person can learn.
These days, I don't have much trouble in professional settings acknowledging my mistakes, though I probably wasn't always as easy-going. I give a lot of credit for my growth in this regard to having papers I've written workshopped at PLoP conferences. At PLoP, I learned how better to separate my self from my product. The practices of writers workshops aim to create a safe environment, where authors can learn about how well their papers work. As much as the practices of the patterns community help, it probably takes going through the workshop process a few times to wear away the protective shell that most of us use to protect ourselves from criticism. Being with the right sort of people helps.
All that said, I had to chuckle at It's Chet's Fault, a fun little community practice of the famed C3 project that gave rise to XP as a phenomenon. On a team of folks working together to get better, I think that this sort of levity can help people remember to keep their eyes on what really matters. On the other hand, lots of things can work with a team of folks striving to get better. Being with the right sort of people helps.
While waiting for the UNI's first men's basketball game of the year to begin yesterday, I read a couple of articles I'd found while surfing the blogosphere. I was surprised to run across the same Big Idea in both papers, albeit in different forms:
Design, well done, satisfies needs users didn't know they had.
I found it first in Paul Graham's new essay, Made in USA. Graham relates how he felt after buying an iPod:
I just got an iPod, and it's not just nice. It's surprisingly nice. For it to surprise me, it must be satisfying expectations I didn't know I had. No focus group is going to discover those. Only a great designer can.
The essay itself is about why the US is good at designing some things, like software, and bad at others, like cars. Graham's diagnosis: Americans don't care much for taste or quality. Instead of relying on a sense of good design, American auto manufacturers rely on focus groups to tell them "what people want".
So why is the US good at designing other products, such as software? Americans are driven by speed, and some products are done better when done quickly without undue emphasis on getting it "right". Indeed, when I read the essay, the quote that most struck me wasn't the one about the iPod, but this one:
In software, paradoxical as it sounds, good craftsmanship means working fast. If you work slowly and meticulously, you merely end up with a very fine implementation of your initial, mistaken idea. Working slowly and meticulously is premature optimization. Better to get a prototype done fast, and see what new ideas it gives you.
What a nice crystallization of the spirit of agile development: If you work slowly and meticulously, you merely end up with a very fine implementation of your initial, mistaken idea. Working slowly and meticulously is premature optimization.
Graham points to Steve Jobs as an exception to this general rule about technological fields and offers hope that American designers who care about quality and craftsmanship can succeed. Of course, Apple has never been more than a niche player in the market for computer software and hardware (a niche in which I proudly reside).
Next, I found myself reading Malcolm Gladwell's recent The Ketchup Conundrum. Yes, an article about ketchup. Actually, it's about mustard, too, and Prego spaghetti sauce. When Campbell's Soup was trying to reinvigorate its Prego-brand sauce in the late 1980s, they brought in an unconventional market researcher named Howard Moskowitz to inject some new ideas in their approach. This quote caught my eye (emphasis added):
Standard practice in the food industry would have been to convene a focus group and ask spaghetti eaters what they wanted. But Moskowitz does not believe that consumers -- even spaghetti lovers -- know what they desire if what they desire does not yet exist. "The mind," as Moskowitz is fond of saying, "knows not what the tongue wants."
Moskowitz uses focus groups -- Graham's bane -- but with a twist. Rather than have the Prego folks create what they think people want and then taste-test it against standard sauces, Moskowitz had the Prego folks create forty-five varieties of Prego, in all the different combinations they could think of. Then he ran these varieties past a panel of trained food tasters before taking the options to the people.
In many ways, this is exactly the opposite of the Jobs approach, which relies on a genius designer to assess the state of the world and create a product that scratches an itch no one quite knew they had. The Moskowitz approach is more in the Art and Fear philosophy, to produce a lot of artifacts. Many will have no shelf life, but in the volume you are more likely to create something of value. Not stated in the Gladwell article is another potential benefit of Moskowitz's approach: in producing lots of stuff, designers overcome the fear of creating, especially things that are different from what already exists. Even better, such designers can begin to develop a sense of what works and what doesn't through voluminous experience.
Design, well done, satisfies needs users didn't know they had. And you can do it well in different ways.
This morning I went out for a 12-mile run. That's my usual Sunday morning run when I'm not training for a marathon, part of my "maintenance mileage" year 'round. But before today I had run this far only once since running the Des Moines Marathon, plus I've been dragging a bit from running faster the last couple of weeks. So this morning I planned for a little LSD. That's long slow distance, not the psychotropic drug, though both can lead to out-of-body experiences.
Forty-eight minutes into the run, I felt a little discouraged. I still had a little over an hour to go! But then I thought, you run almost 48 minutes even on your shortest days; what's the big deal? The big deal was that second thing: I still had a little over an hour to go. The psychology of a long run is much different than the psychology of a short run. That's what makes marathons so challenging.
Then I got to thinking about the psychology of long and short runs in software development. I wonder if the short iterations encouraged by agile methodologies help to buttress the morale of developers on those days and projects that leave them feeling down? I've certainly worked on traditional software projects where the thought of another six months before a release seemed quite daunting. Working from a state of relative uncertainty is no fun. On agile projects, we at least get to issue releases more frequently and incorporate feedback from the customer into the next iterations.
Sometimes, running requires endurance. If you want to run a marathon, then you had better get ready to run for 3, 4, or 5 hours. I suppose that big software projects require stamina, too. A year-long project will take a year whether done in many short releases or in one big one. But the closer horizon of short iterations can be comforting, even without considering the value of continuous feedback.
With my mind occupied thinking about software and psychology, pretty soon my run was over. I was tired, as expected, and a little stiff. But I felt good for having done 12. My next 45-minute iteration happens on Tuesday.
Two strange things happened while preparing my last blog entry. First, my editor flagged "Wozniak" as a misspelled word and offered wooziness as a correction. Clearly, my spell checker has been reading about Woz's fascination with Segways. :-)
Then, for a span of at least ten minutes, http://www.amazon.com/ returned HTTP/1.1 Service Unavailable to all requests. I wonder how often that happens at Amazon or Google? Maybe I'm just sensitive to down time right now, after having been reminded of my dependence on my server by a 36+ hour server crash earlier this week, which made the world think that my blog and e-mail address had disappeared...
I recently finished reading Kary Mullis's Dancing Naked in the Mind Field. Mullis is the sort of guy people call a "character", the sort of guy whom my college friends would have called a "weird dude". But he's a weird dude who just happened to win a Nobel Prize in chemistry, for discovering PCR (polymerase chain reaction), a technique for finding and replicating an arbitrary sequence of nucleotides on a strand of DNA.
The book consists of sixteen disconnected chapters that talk about various parts of Mullis's life as free spirit, biochemist, and celebrity scientist. I enjoyed his many chapters on the joys of doing science. He writes of discovering chemistry and electricity as a child, and he writes of the May evening drive up California's Highway 128 on which he had the brainstorm that led to PCR.
In one chapter, Mullis tells us about making chemicals with a friend as a high school student, first in the commercial lab of a family friend and then in a homemade lab he and his friend built. Always the entrepreneur, Mullis hatched a scheme to make and sell chemicals that no one else was selling. In doing so, he learned why no one else was doing it: the fabrication process was dangerous and wasteful. But he learned a lot.
A neat line: When Mullis and his buddy took their first batch of nitrosobenzene to their family friend, he was "pleased to the point of adopting us both as his children forever. Chemists get emotional about other chemists because of the language they have in common and the burns on their hands." I know this feeling well from working with student programmers. But the burns on our hands are all metaphorical; they consist in the dangling pointers we've all chased, in the data files we've created and overwritten, in the failed attempts to make a language say something it cannot.
This sort of precociousness has long been a hallmark of young computer programmers. From Jobs and Wozniak, Gates and Allen, all the way to all the local ISPs operating out of rural garages across the country, the history of computing is full of kids who have set out to follow their curiosities and changed the world. The advent of the Internet and World Wide Web opened the doors to even more people. I only wish that I had the entrepreneurial spirit that accompanies their curiosity. Maybe I would have changed the world, too?
In another chapter, Mullis describes how he came to realize that no titans of thought were "minding the store", overseeing the world of science with firm, guiding hands. The science world is just a bunch of mortals doing their own things, with no distinguished wisdom for knowing today or the future. He contrasts how a naive, somewhat flaky article he wrote in college was published in the journal Nature, which later rejected -- along with all the other highest-ranking journals -- his paper describing PCR and its implications.
I especially enjoyed the chapters that comment on the nature of science in the modern world. Mullis gives his views on how having to seek external grants distorts the scientific process, from the choosing of projects to the "selling" of results in a politically correct culture. He argues that science has changed, and so should how we do science, but what people do doesn't change all that fast. He gives as an example something most high school graduates will remember, if only faintly: Avogadro's number. Computations using 6.02 × 10^23 molecules (did I remember correctly, Mr. Smith?) used to be essential to the conduct of chemistry, when chemists had to work with relatively large masses of substance. But now chemists work with dozens of molecules, or 2, or 1. What's the point of doing calculations 23 orders of magnitude larger?
Computing has its own historic remnants that affect how we think about programming and programs long after the world changed underneath them. Social change is slow, though, and the university is no exception. As long as we are able to discuss controversial ideas and offer alternatives, we have some hope of making progress.
Perhaps my favorite chapter deals with how science and math are the result of humans trying to extend their limited senses. In the beginning, humans knew the world only from their natural senses, among which Mullis counts the traditional five plus the senses of falling and time. He argues that our sense modalities developed around the physical needs of the species. For example, our sense of hearing grew to hear sounds in the range that we can make, thus supporting the development of language; our sense of sight came to see the colors we needed to see and in the light conditions available to prehistoric man. As humans progressed intellectually, we derived science as a way to see, hear, and otherwise sense things we could not perceive naturally. Mathematics grew as a way for us to describe these newly-perceived phenomena.
For Mullis, this is a natural progression. However, over time, science has increasingly moved away from the original range of our natural senses, to increasingly small objects (quarks, anyone?) and increasingly large objects (galaxies and universes). Mathematics has followed a similar path toward abstraction. The result has been science and math increasingly divorced from the lives and understanding of non-scientists. We have moved away from "human-scale" science, from things we can apprehend naturally, to the physics of the very small and very large. Mullis suggests that we return most of our energy -- and most of our funding, 90% or so -- to things that can matter to everyday people as they live everyday lives. He includes in this category the sort of biochemistry he does, of course, for its potential to affect directly and dramatically human life. But he also suggests that we seek a better understanding of asteroids and comets so that we can prevent the next major impact, like the ones in prehistoric times that caused mass extinctions of species. Are we any better prepared than the dinosaurs were for a major asteroid impact, even if we are able to predict its coming years in advance? This all seems a bit crazy, but then that's Mullis. Thinking way-out thoughts can lead to change, if the ideas gain traction.
Unfortunately, Dancing ... includes some chapters that are so unusual that they may turn some readers off. You will find plenty about drug use, alien abduction, and out-of-body experiences (that were, seemingly, not the result of drug use). Mullis clearly dances naked in the mind field and is not at all constrained by the rationalism that dominates science and technology these days. As a result, he ends up holding some oddly juxtaposed beliefs at once. If you are put off by such stuff, skip these chapters; you'll not miss anything of "scientific substance". You may miss out on wondering just how a Nobel Prize-winning scientist can think so many strange thoughts, though. And, who knows, you may miss out on the next big thing.
Mullis's analysis is not always all that deep, and he has biases like any interesting person. But he writes about interesting ideas, which can serve as a trigger for his reader to do the same thing. I rate Dancing Naked in the Mind Field worth a read.
From Art and Fear:
In talking about how hard artists work, I am reminded of the story about the man who asked a Chinese artist to draw a rooster for him and was told to come back in a month. When he returned, the artist drew a fabulous rooster in a few minutes. The man objected to paying a large sum of money for something that took so little time, whereupon the artist opened the door to the next room in which there were hundreds of drawings of roosters.
Sometimes folks, non-runners and runners alike, comment on how they can't run as far or as fast (hah!) as I do. There are days when I wish that I could open a door to let the person see a room full of runs like the one I had this morning: hard work, pushing at my limits, finishing with nothing left, but still short of the goal I've been working toward.
Folks who make this sort of comment almost always mean well, intending a compliment. Often, though, I think that the unstated implication is that they couldn't do what I do even if they tried, that runners have some innate ability that sets them apart from everyone else. Now, I don't doubt at all that some people have innate physical abilities that give them some advantage at running. The very best -- the guys who run 9.9s in the 100m, or 2:10 marathons -- almost certainly have gifts. But I am not a gifted runner, other than having the good fortune to not injure easily or get sick very often.
And let's not forget how hard those 9.9s sprinters and 2:10 marathoners have to work in order to reach and maintain their level of excellence.
Richard Gabriel often says that "talent determines only how fast you get good, not how good you get". Good poems, good art, and good runs are made by ordinary people. Art and Fear says this: "Even talent is rarely distinguishable, over the long run, from perseverance and lots of hard work."
That is good news for runners like me. Maybe, if I keep working, I'll reach my speed goal next week, or the week after that.
It's also good news for programmers like me.
Computer science students should take this idea to heart, especially when they are struggling. Just keep working at it. Write programs every day. Learn a new something every day. You'll get there.
Just something rolling around my mind today...
During my daughter's violin lesson earlier this afternoon, I was browsing a pamphlet on the Suzuki method for learning music titled "The Power of Simplicity". I've forgotten the author's name already... The purpose of the pamphlet is to support the ideas that make up the Suzuki method with references to research in psychology and education, as well as writings from philosophers and master musicians.
I didn't get very far in thirty minutes, but the first chapter jogged my mind with its discussion of the Suzuki repertoire's design. For those of you who are unfamiliar, the technical idea of the Suzuki approach is a step-by-step mastery of specific skills via the mastery of a carefully arranged progression of musical pieces, some written specifically for the curriculum. As faithful readers know, practice and mastery have been on my mind lately. But the thing that grabbed my attention today was a reference to "the problem of the match", a phrase from educational psychology that refers to matching the elements of a curriculum to the needs of the learner. The author wasn't so much concerned with the technical details of the match (say, fingerings or chords) as with the learner's experience -- the balance between the learner's confidence and the challenge of the current piece.
This brought to mind something Alan Kay talked about at OOPSLA, in reference to designing learning environments for children. Kay said that we should consciously seek to widen the path of flow for learners, between anxiety and boredom. My experience with the Suzuki piano literature is that it does a remarkable job of balancing confidence with challenge, of navigating between anxiety and boredom. It does so in many ways: by introducing only one or two new skill elements at a time, by repeating skill elements in subsequent pieces for mastery through repetition, by occasionally dropping an "easy" piece into the curriculum to let the learner bask in confidence for a while, and so on.
Many people, including Suzuki, Montessori, and Kay, have pointed out that this idea is essential when supporting children as learners. But it is just as important when working with more mature learners, including college students. Some computer science educators have written about Bloom's taxonomy as it applies to CS 1, and I've heard colleagues make arguments about what is and isn't appropriate for first-year courses based on the abstraction capabilities of typical 18- and 19-year-olds. Too often, though, our courses follow a path accreted over many years of programming language changes, textbook revisions, and little additions (and few deletions!) to our lecture notes.
I've grown increasingly dissatisfied with my Computer Science II curriculum over the last few years. I can see now that one source of my dissatisfaction is the mismatch between what I ask students to do and what they know when I ask them to do it. Balancing anxiety against challenge is hard. My tendency is toward challenge, which often results in high anxiety and low confidence.
I've made small changes to the course every semester, and a year or so ago I refactored the course a bit more substantially. But course offerings last for a semester, and with iterations that long refactoring works at a glacial rate. Besides, several Big Ideas have been taking root in my mind lately, and I want to redesign the course from the ground up to incorporate them. Given that opportunity, I want to design the course -- the concepts we cover, their order, my examples, my programming assignments, the whole bit -- taking my students' "flow" into account.
Talk about a task that gives rise to anxiety. One way that I hope to alleviate my own fear is to develop the course in an agile way. But I want to begin the course with plenty of raw material that will allow me to respond to what I learn nimbly.
I am glad to have stumbled across that humble little Suzuki pamphlet today, and to have been reminded of Alan Kay's discussion of flow. You never know where a little reading might lead your mind...
Continuing with our theme of trying new things in the classroom, I smiled when I read Kevin Rutherford's story about giving a lecture on agile software development as an agile presentation project. I had considered a similar idea while planning for my agile software development course: periodically stop the course and have the students help me set the direction for the next 'iteration'. Indeed, a couple of years ago, I outlined a paper that I wanted to write about applying XP practices to the teaching of a course: I could imagine a spike solution, and using the planning game to steer the course. What would it be like to teach 'test-first', with merciless refactoring? How would the other XP disciplines map onto how I teach, and how my students learn?
As with many of my wilder ideas, I have not yet followed through. It's heartening to know that someone like Kevin has tried this idea out and found it workable on a smaller scale.
Running a whole course in this way promises some interesting benefits and raises some potential problems. By allowing students to help steer, it may help to keep them more engaged with the material they are learning. The most concrete benefit might be the periodic feedback the instructor receives. Even if only the professor drives the course, at least he would be able to do so with some knowledge of what the students think about what they have learned.
One of the potential problems lies in the fact that students are exactly like the clients in an agile software project. First, students typically don't know all that much about the content of the courses they take. I've occasionally asked students questions at the beginning of a semester to elicit ideas for course direction, and I have found them relatively unaware. But why should they be? At the beginning of a course on algorithms, they have no reason to know all that much about what an algorithms course should or could be like, or even what kinds of algorithms there are.
But this problem can be addressed in a reasonable way, and Kevin's article shows how: Start the course with a few pre-written story cards that form the basis of the first iteration. That way, the instructor can lay out some foundation material, present a broad view of the course material, and give the students some ideas about where the course can go. The key to doing the course in an agile way is to keep the number of pre-written cards to a minimum, so that you can get into client interaction as soon as possible. Selecting good starting stories would become an important teaching skill.
The second problem is that students are not the only clients of the course; many other people hold a stake in it as well.
This problem, too, can be addressed in a manner similar to the first problem. The instructor can inject a few essential stories into the course, spread across the several iterations that make up the semester. These stories can ensure that the course covers material essential for courses downstream and for other stakeholder expectations. Again, it is essential to keep the number of such stories to a minimum. Academics are notorious for thinking that everything is required material, fundamental to the students' futures. (See the state of CS 1 courses around the US...) One side effect of this style of course planning might be to force the instructor to make some tough choices about what really is essential, and then get out of the way and let students help drive. We all might be surprised by the results.
Even with the instructor seeding each iteration with a story or two, the students would play a role in ordering material and choosing the direction of the course. If nothing else, this would give the students a sense of ownership and let them take more responsibility for their own learning.
Of course, a lower-risk experiment with this idea would be to do more what Kevin did, using it to run a single lecture period or a week-long unit.
This sounds like a lot of fun, and maybe a better way to run a course. Spring semester isn't too far away...
On Wednesday I blogged about helping students succeed more by cutting their projects in half. I haven't tried that yet, but I did take a small step in that spirit, toward helping students learn in a different way than usual.
In my agile development course, we've been reading The Pragmatic Programmer, and recently discussed the idea that professionals are masters of their tools. I've been encouraging students in the course to broaden their skill sets, including both tools specific to the agile arena, such as JUnit, and tools more general to their professional needs, such as a command shell, an editor, and a couple of programming languages. But I find that most students (have to) focus so much on the content of their programming assignments that they don't attend much to their practices.
So I made their latest assignment focus specifically on skills. They will use a new tool (a testing framework other than JUnit) to work with a program they've already written and to write a very simple program. At the same time, they are to choose a particular skill they want to develop -- say, to learn emacs or to finally learn how to use a Linux command line -- and do it. With almost no content to the assignment, at least not new content, I hope that students will feel they have the time and permission to focus on developing their skills.
I am reminded of one of my favorite piano books, Playing the Piano for Pleasure, by Charles Cooke (Simon and Schuster, 1941). Cooke was a writer by trade but an ardent amateur pianist, and he used his writing skills to share some of his secrets for studying and practicing piano. Among other things, Cooke suggested a technique in which he would choose his weakest part of a piece, what he calls a 'fracture', and then practice it with such intensity and repetition that it becomes one of his strongest. He likened this to a bone healing after a fracture, when the newly knitted bone exceeds the strength of the bone around it.
I try to have this mindset when working on my professional skill set. And I'd certainly like my students to grow toward such a mentality as they develop into software professionals and happy adults.
I hope that some of the students working on my new assignment will attack their own weakest areas and turn them into strengths or, at the least, grow to the point that they no longer have to fear the weakness. Overcoming even a little fear can help a student move a long way toward being a confident and successful programmer.
Today is Kurt Vonnegut's birthday. I've been reading Vonnegut since high school, before I even knew that, like me, he was a native of the uniquely Midwestern big city of Indianapolis. Some folks have one author they can always turn to when they want to remind themselves of their humanity, and Vonnegut is that author for me. I even spent a few days one summer a few years back (or misspent, depending on your perspective) tabulating The Books of Bokonon, the phony religion that Vonnegut created in his novel Cat's Cradle. Of all the hundreds or thousands of pages that I have created for the web, this one page generates more, and more consistent, feedback from readers around the world than any other. Vonnegut readers are a kindred lot.
Billy Pilgrim, Kilgore Trout, Eliot Rosewater, Rabo Karabekian -- all are among my favorite characters in literature. Vonnegut has never been a haute auteur of the sort that attracts "serious" literary attention, but he can create as fully human a character as anyone I've read.
Now, someone's 82nd birthday is hardly the sort of landmark that ordinarily calls for a big celebration. (Well, as if an 82nd birthday weren't grounds enough to celebrate!) But Vonnegut has a particular connection to my blog: I very nearly named my blog for one of his stories.
I don't know about most other bloggers, but I spent considerable mental energy trying to find just the right name. Names are important. I wanted a name that I could live with a long time, one that would send the right message to potential readers. I ended up choosing "Knowing and Doing" in order to send a relatively straightforward message to readers, and to fit the mold of other blogs.
But the names of three Vonnegut stories made the final cut: "Now It Can Be Told", "Tomorrow and Tomorrow and Tomorrow", and "The Euphio Question". I like them all. "Now It Can Be Told" and "Tomorrow and Tomorrow and Tomorrow" even sound like blog names. The one I liked best, though, was "The Euphio Question", but I ultimately decided that it was just too indirect to suit me. Why did I consider it?
The story includes a description of a musical device created by Dr. Bockman, which leaves listeners "inexplicably grinning like idiots."
I guess that, deep down, I hoped to have a similar effect on all my readers. But that seems a pretty high bar to set for oneself before ever writing a single entry, so I settled for a name that sounds just as pretentious but doesn't promise rapture to my readers. :-)
I liked Incipient Thought's recipe for project success yesterday.
It starts with T. J. Watson's well-known:
If you want to increase your success rate, double your failure rate.
... to which he adds a second part:
If you want to double your failure rate, all you have to do is halve your project length.
Then, winking, he points out that the second step may fail, because shorter projects may actually succeed more often!
I think this is wonderful software development advice.
As a teacher, I have been thinking about how to bring this idea to my courses and to my students.
I always encourage my students to approach their course projects in this way: take lots of small steps doing the simplest thing possible, writing test code to verify success at each step, and refactoring frequently to improve their design. By taking small steps, they should feel confident that whatever they've done so far actually works. And, when it doesn't, they haven't gone too far astray, because they only worked on a little bit of specification. Debugging can often be localized to the most recently added code.
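To make the small-steps style concrete, here is a minimal sketch in Java. The MasterMind "exact matches" scorer is a hypothetical example of one small story from a spec like the one mentioned below, not code from the actual course assignment: write the check for this one step, make it pass, and only then move on.

```java
// A sketch of the small-steps approach: implement one tiny story
// ("count pegs that match in both color and position"), verify it
// works, then take the next step. Hypothetical example, not the
// actual course assignment.
public class MasterMindStep {

    // One small story: count exact matches between secret and guess.
    static int exactMatches(String secret, String guess) {
        int count = 0;
        for (int i = 0; i < secret.length(); i++) {
            if (secret.charAt(i) == guess.charAt(i)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // The "test code" for this step: verify before moving on.
        if (exactMatches("RGBY", "RGBY") != 4) throw new AssertionError("all match");
        if (exactMatches("RGBY", "YBGR") != 0) throw new AssertionError("none match");
        if (exactMatches("RGBY", "RBGY") != 2) throw new AssertionError("two match");
        System.out.println("all small-step tests pass");
    }
}
```

When this check passes, the student knows this much of the program works, and the next story (say, counting right-color-wrong-position pegs) starts from solid ground.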
Breaking a complex but unitary spec ("implement a program to play MasterMind") down into smaller stories is hard, especially for freshmen and sophomores. Even my upper division students sometimes have difficulty breaking requirements that are smaller but still too big into even smaller steps.
In recent semesters, I've tried to help by writing my assignments as a list of small requirements or user stories. My second-semester students are currently implementing a simple text-based voice mail system, which I specified as a sequence of individual requirements. Sometimes, I even specify that we will grade assignments story-by-story and stop as soon as the program "fails" a story.
This approach sounds great, but it is out of step with the student culture these days. Taking small steps, and especially refactoring, requires having a certain amount of time to go through the process, with some reflection at each step. Most students are inclined to start every project a few days or less before it's due, at which point they feel a need to "just do it". I've tried to encourage them not to postpone starting -- don't all professors face this problem? -- mostly to no avail.
In my Agile Software Development course, we've been doing 2- and 3-week releases, and I think we've had some success with doing three week-long iterations, with full submission of the project, within each release. Even so, word in the class is that many folks start each iteration only a day or two or three before it's due.
Maybe I should apply the "recipe for project success" advice on their behalf... I could assign two half-week assignments instead of one week-long assignment! Students would have twice as many opportunities to succeed, or to fail and learn.
One of the downsides of this idea for me is grading. I don't like to grade and, while I am usually (but not always!) timely in getting assignments back to students, I use enough mental energy grading for three courses as it is. I could automate more of the grading, using JUnit, say, to run tests against the code. But to the extent that I need to look at their code, this approach requires some new thinking about grading.
One of the downsides of this idea for my students is the old time issue. With 15 hours of class and 30 hours at work and whatever time they spend in the rest of their lives, students have a tough time working steadily on each course throughout each week. Throw in exams and projects and presentations during special times in the semester, and the problem gets worse.
But I can't shake the feeling that there is something deeply right about halving the size of every project. I may have to "just do it" myself and face whatever difficulties arise as they do.
Alan Kay gave two talks at OOPSLA last week, the keynote address at the Educators Symposium and, of course, his Turing Award lecture. The former was longer and included most of the material from the Turing lecture, especially when you consider his generous question-and-answer session afterwards, so I'll just comment on the talks as a single presentation. That works because they argued for a common thesis: introductions to computing should use simple yet powerful computational ideas that empower the learner to explore the larger world of ideas.
Kay opened by decrying the premature academization of computing. He pointed out that Alan Perlis coined the term "computer science" as a goal to pursue, not as a label for existing practice, and that the term "software engineering" came with a similar motivation. But CS quickly ossified into a discipline with practices and standards that limit intellectual and technical discovery.
Computing as science is still an infant. Mathematicians work in a well-defined space, creating small proofs about infinite things. Computing doesn't work that way: our proofs are programs, and they are large, sometimes exceedingly so. Kay gave a delightful quote from Donald Knuth, circa 1977:
Beware of bugs in the above code; I have only proved it correct, not tried it.
The proof of our programs lies in what they do.
As engineering, computing is similarly young. Kay contrasted the construction of the Great Pyramid by 200,000 workers over 20 years with the construction of the Empire State Building by fewer than 3,000 people in less than 11 months. He asserts that we are somewhere in the early Middle Ages on that metaphorical timeline. What should we be doing to advance? Well, consider that the builders of the Pyramid used hand tools and relatively weak ideas about building, whereas the engineers who made the Empire State Building used power tools and even more powerful ideas. So we should be creating better tools and ideas. (He then made a thinly veiled reference to Visual Studio .NET 2005, announced at OOPSLA, as just another set of hand tools.)
So, as a science and professional discipline in its youth, what should computing be to people who learn it? The worst thing we can do is to teach computer science as if it already exists. We can afford to be humble. Teach students that much remains to be done, that they have to help us invent CS, that we expect them to do it!
Kay reminded us that what students learn first will have a huge effect on what they think, on how they think about computing. He likened the effect to that of duck imprinting, the process in which ducklings latch onto whatever object interacts with them in the first two days of their lives -- even if the object is a human. Teaching university students as we do today, we imprint in them that computing is about arcane syntax, data types, tricky little algorithms, and endless hours spent in front of a text editor and compiler. It's a wonder that anyone wants to learn computing.
So what can we do instead? I ran into some interesting ideas on this topic at OOPSLA even before the Educators' Symposium and had a few of my own during the week. I'll be blogging on this soon.
Alan has an idea of the general direction in which we should aim, too. This direction requires new tools and curricula designed specifically to introduce novices to the beauty of computing.
He held up as a model for us Frank Oppenheimer's Exploratorium, "a collage of over 650 science, art, and human perception exhibits." These "exhibits" aren't the dead sort we find in most museums, where passive patrons merely view an artifact, though. They are stations with projects and activities where children can come face to face with the first simple idea of science: The world is not always as it seems. Why are there so many different exhibits? Kay quoted Oppenheimer to the effect that, if only we can bring each student into contact with that one project or idea that speaks directly to her heart, then we will have succeeded.
Notice that, with that many projects available, a teacher does not have to "assign" particular projects to students at a particular point in time. Students can choose to do something that motivates them. Kay likened this to reading the classics. He acknowledged that he has read most of the classics now, but he didn't read them in school when they were assigned to him. Then he read other stuff, if only because he had chosen it for himself. One advantage of students reading what they want is that a classroom will be filled with people who have read different things, which allows you to have an interesting conversation about ideas, rather than about "the book".
What are the 650 examples or projects that we need to light a fire in every college student's heart? Every high schooler? Every elementary school child?
Kay went on to say that we should not teach dumbed-down versions of what experts know. That material is unnecessarily abstract, refined, and distant from human experience. Our goal shouldn't be to train a future professional computer scientist (whatever that is!) anyway. Those folks will follow naturally from a population that has a deep literacy in the ideas of science and math, computing and communication.
Here, he pointed to the typical first course in the English department at most universities. It does not set out to create professional writers or even professional readers. Instead, it focuses on "big ideas" and how we can represent and think about them using language. Alan thinks introductory computer science should be like this, too, about big ideas and how to represent and think about them in language. Instead, our courses are "driver's ed", with no big ideas and no excitement. They are simply a bastardization of academic computer science.
What are the big ideas we should introduce to students? What should we teach them about language in order that they might represent ideas, think about them, and even have ideas of their own?
Alan spent quite a few minutes talking about his first big inspiration in the world of computing: Ivan Sutherland's Sketchpad. It was in Sketchpad that Kay first realized that computing was fundamentally a dynamic medium for expressing new ideas. He hailed Sutherland's work as "the greatest Ph.D. dissertation in computer science of all time", and delighted in pointing out Sketchpad's two-handed user interface ("the way all UIs should be"). In an oft-told but deservedly repeated anecdote, Kay related how he once asked Sutherland how he could have created so many new things -- the first raster graphics system, the first object-oriented system, the first drawing program, and more -- in a single year, working alone, in machine language. Sutherland replied, "... because I didn't know it was hard".
One lesson I take from this example is that we should take care in what we show students while they are learning. If they see examples that are so complex that they can't conceive of building them, then they lose interest -- and we lose a powerful motivator.
Sutherland's dissertation includes the line, "It is to be hoped that future work will far surpass this effort". Alan says we haven't.
Eventually, Kay's talk got around to showing off some of the work he and his crew have been doing at Squeakland, a science, math, and computing curriculum project built on top of the modern, open source version of Smalltalk, Squeak. One of the key messages running through all of this work can be found in a story he told about how, in his youth, he used to take apart an old Model T Ford on the weekend so that he could figure out how it worked. By the end of the weekend, he could put it back together in running condition. We should strive to give our students the same experience in the computing environments they use: Even if there's a Ferrari down there, the first thing you see when you open the hood is the Model T version -- first-order theories that, even if they throw some advanced ideas away, expose the central ideas that students need to know.
Alan demoed a sequence of increasingly sophisticated examples implemented and extended by the 5th- and 6th-grade students in B.J. Conn's charter school classes. The demos in the Educators' Symposium keynote were incredible in their depth. I can't do them justice here. The best you can do is to check out the film Squeakers, and even that has only a small subset of what we saw in Vancouver. We were truly blessed that day!
The theme running through the demos was how students can explore the world in their own experience, and learn powerful ideas at the "hot spot" where math and science intersect. Students can get the idea behind tensor calculus long before they can appreciate the abstractions we usually think of as tensor calculus. In the course of writing increasingly complex scripts to drive simulations of things they see in the world, students come to understand the ideas of the variable, feedback, choice, repetition, .... They do so by meeting these ideas in action, not in the abstract.
The key is that students learn because they are having fun exploring questions that matter to them. Sometime along in here, Kay uttered what was for me the Quote of the Day, no, the Quote of OOPSLA 2004:
If you don't read for fun, you won't be fluent enough to read for purpose.
I experience this every day when interacting with university students. Substitute 'compute' or 'program' for 'read', and you will have stated a central truth of undergraduate CS education.
As noted above, Kay has a strong preference for simple, powerful ideas over complex ideas. He devoted a part of his talk to the Hubris of Complexity, which he believes long ago seduced most folks in computing. Software people tend to favor the joy of complexity, yet we should strive for the joy of simplicity.
Kay gave several examples of small "kernels" that have changed the world, which all people should know and appreciate. Maxwell's equations were one. Perhaps in honor of the upcoming U.S. elections, he spent some time talking about the U.S. Constitution as one such kernel. You can hold it in the palm of your hand, yet it thrives still after more than two centuries. It is an example of great system design -- it's not law-based or case-based, but principle-based. The Founding Fathers created a kernel of ideas that remains not only relevant but also effective.
I learned a lot hearing Alan tell some of the history of objects and OOP. In the 1960s, an "object" was simply a data structure, especially one containing pointers. This usage predates object-oriented programming. Alan said that his key insight was that an object could act as a miniature software computer -- not just a data structure, not just a procedure -- and that software scales to any level of expression.
He also reminded us of something he has said repeatedly in recent years: Object-oriented programming is about messages, not the objects. We worry about the objects, but it's the messages that matter.
How do we make messages the centerpiece of our introductory courses in computing?
Periodically throughout the talk, Alan dropped in small hints about programming languages and features. He said that programming language design has a large UI component that we technical folks sometimes forget. A particular example he mentioned was inheritance. While inheritance is an essential part of most OOP, Alan said that students should not encounter it very soon in their education, because it "doesn't pay for the complexity it creates".
As we design languages and environments for beginners, we can apply lessons from Mihaly Csikszentmihalyi's idea of "flow". Our goal should be to widen the path of flow for learners. One way to do that is to add safety to the language so that learners do not become anxious. Another is to create attention centers to push away potential boredom.
Kay's talks were full of little nuggets that I jotted down and wanted to share.
Alan closed both talks on an inspirational note, capping all that he had already said and shown us. He told us that Xerox PARC was so successful not because the people were smart, but because they had big ideas and the inclination to pursue them. They pursued their ideas simple-mindedly. Each time they built something new, they asked themselves, "What does the complexity in our system buy us?" If it didn't buy enough, they strove to make the thing simpler.
People love to quote Alan's most famous line, "The best way to predict the future is to invent it." I leave you today with the lines that follow this adage in Alan's paper "Inventing the Future" (which appears in The AI Business: The Commercial Uses of Artificial Intelligence, edited by Patrick Henry Winston and Karen Prendergast). They tell us what Alan wants us all to remember: that the future is in our hands.
The future is not laid out on a track. It is something that we can decide, and to the extent that we do not violate any known laws of the universe, we can probably make it work the way that we want to.
-- Alan Kay, 1984
Guys like Alan set a high bar for us. But as I noted last time, we have something of a responsibility to set high goals when it comes to computing. We are creating the medium that people of the future -- and today? -- will use to create the next Renaissance.
Some other folks have blogged on their experiences at OOPSLA last week. These two caught my attention, for different reasons.
The trade-off between power and complexity in language
Seaside is able to present such a simple API because it takes advantage of "esoteric" features of its implementation language/platform, Smalltalk, such as continuations and closures. Java, in comparison to Smalltalk, is designed on the assumption that programmers using the Java language are not skilled enough to use such language features without making a mess of things, and so the language should only contain simple features to avoid confusing our poor little heads. Paradoxically, the result is that Java APIs are overly complex, and our poor little heads get confused anyway. Seaside is a good demonstration that powerful, if complex, language features make the job of everyday programming easier, not harder, by letting API designers create elegant abstractions that hide the complexity of the problem domain and technical solution.
There is indeed great irony in how choosing the wrong kind of simplicity in a language leads to unnecessary complexity in the APIs and systems written in the language. I don't have an opportunity to teach students Smalltalk much these days, but I always hope that they will experience a similar epiphany when programming in Scheme.
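A small Java sketch can make the quote's point concrete. The `collect`/`Block` pair below is my own invented mini-API, not anything from Seaside; it shows how, without closures, even the simplest "pass some behavior to the library" idiom forces the caller into an anonymous inner class, where Smalltalk writes a one-line block.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ClosureContrast {

    // A tiny internal-iteration API: the library owns the loop,
    // and the caller passes in the behavior. (Hypothetical names.)
    interface Block { String run(String item); }

    static List<String> collect(List<String> items, Block block) {
        List<String> results = new ArrayList<String>();
        for (String item : items) {
            results.add(block.run(item));
        }
        return results;
    }

    public static void main(String[] args) {
        // In 2004-era Java, passing behavior means writing out a
        // full anonymous inner class...
        List<String> greetings = collect(Arrays.asList("Ward", "Alan"),
            new Block() {
                public String run(String item) {
                    return "Hello, " + item;
                }
            });
        System.out.println(greetings);
        // ...where Smalltalk writes the same idea as a one-liner:
        //     #('Ward' 'Alan') collect: [:each | 'Hello, ', each]
    }
}
```

The verbosity tax falls on every caller, which is exactly how "simple" language features end up producing complex APIs.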
Not surprisingly, Alan Kay has a lot to say on this topic of simplicity, complexity, and thinking computationally, too. I hope to post my take on Alan's two OOPSLA talks later this week.
Making Software in a Joyous World
You gotta love a blog posting subtitled with a line from a John Mellencamp song.
Brian Marick writes about three talks that he heard on the first day at OOPSLA. He concludes wistfully:
I wish the genial humanists like Ward [Cunningham] and the obsessive visionaries like Alan Kay had more influence [in the computing world]. I worry that the adolescence of computers is almost over, and that we're settling into that stagnant adulthood where you just plod on in the world as others made it, occasionally wistfully remembering the time when you thought endless possibility was all around you.
In the world where Ward and Alan live, people use computing to make lives better. People don't just consume ideas; they also produce them. Ward's talk described how programmers can change their world and the world of their colleagues by looking for opportunities to learn from experience and creating tools that empower programmers and users. Alan's talk urged us to take that vision out into everyone's world, where computers can serve as a new kind of medium for expressing new kinds of ideas -- for everyone, not just software folks.
These are high goals to which we can aspire. And if we don't, who will?
As many of you know, I turned forty last week. If you are the last to know, I apologize. You must not know Robert Duvall.
If I should have suffered through a mid-life crisis by now, I am sorry to disappoint you. To be honest, I hope that I have not yet reached the middle of a long and productive life.
I am not now in the midst of a crisis, but recent events bring such thoughts to mind. I spent last week at OOPSLA amidst the intellectual, professional and personal passion of folks like Brian Marick, David Ungar, and Alan Kay. Then, on the flight home, I finally got around to reading Malcolm Gladwell's article on group think, which describes the passion that often infuses groups of creative minds working in fertile intimacy.
I certainly crave that sort of passion but often find it lacking in my daily life.
Mid-life crises may well happen when people realize that they've lost their passion. Perhaps they come to fear that they've lost their capacity to feel passionately. The steady drip-drip-drip of real life has a way of wearing down our sharp edges, leaving us just tiredly waiting for tomorrow.
One way to combat this erosion is to surround yourself with the right people.
Gladwell reports (from the work of such folks as Randall Collins and Jenny Uglow), and OOPSLA reminded me, of the power of groups -- and the critical need for them -- in nurturing passion and driving greatness. Many people, especially Americans, subscribe to the myth of the solitary genius, the lone pioneer. But history shows that nearly all of the great advances attributed to individuals grew out of remarkable circles of creative people pushing each other, driving and feeding off of each other's passion.
For academics, large universities have an advantage over smaller schools like UNI when it comes to gathering the critical mass of the right people in the right place at the right time. Academic centers like Boston and technological centers like Silicon Valley offer the same possibilities. But such groups can form and grow over space and time, too, especially in this era of easy travel and electronic communication. For me, in the last decade the software community that grew out of the Hillside Group has played that role. So have amorphous groups of creative and ambitious software developers and educators in the OOPSLA and SIGCSE communities. These groups intersect in interesting ways, with enough folks outside the core to inject new ideas occasionally.
Unconsciously, I drew my program committee for the recent OOPSLA Educators Symposium from these groups. Their ideas and passion helped me to shepherd the symposium to success.
But sometimes the passion of distributed groups wilts in the heat of a long semester. At OOPSLA, a friend shared his recent bout with this malaise, and I know the feeling well. But it's good to know that making and maintaining connections with good people -- which I have been fortunate to do throughout my career -- is one way to keep passion within my reach. I need to work to develop and maintain relationships if I wish to develop and maintain passion.