The last week or so I've been trying to steal a few minutes each day to clean up the closet in my home work area. One of the big jobs has been to get rid of several years of journals and proceedings that built up from 1998 to 2002, when it seems I had time only to skim my incoming periodicals.
I seem genetically unable to simply throw these into a recycling bin; instead, I sit on the floor and thumb through each, looking at least at the table of contents to see if there is anything I still want to read. Most of the day-to-day concerns in 2000 are of no particular interest now. But I do like to look at the letters to the editor in Communications of the ACM, IEEE Computer, and IEEE Spectrum, and some of the standing columns in SIGPLAN Notices, especially on Forth and on parsing. Out of every ten periodicals or so, I would guess I have saved a single paper or article for later reading.
One of the unexpected joys has been stumbling upon all of the IEEE Spectrum issues. It's one of the few general engineering journals I've ever received, and besides it has the bimonthly Reflections column by Robert Lucky, which I rediscovered accidentally earlier this month. I had forgotten that in the off-months of Reflections, Spectrum runs a column called Technically Speaking, which I also enjoy quite a bit. According to its byline, this column is "a commentary on technical culture and the use and misuse of technical language". I love words and learning about their origin and evolution, and this column used to feed my habit.
Most months, Technically Speaking includes a sidebar called "Worth repeating", which presents a quote of interest. Here are a couple that struck me as I've gone through my old stash.
From April 2000:
Engineering, like poetry, is an attempt to approach perfection. And engineers, like poets, are seldom completely satisfied with their creations.... However, while poets can go back to a particular poem hundreds of times between its first publication and its final version in their collected works, engineers can seldom make major revisions in a completed structure. But an engineer can certainly learn from his mistakes.
This is from Henry Petroski, in To Engineer is Human. The process of discovery in which an engineer creates a new something is similar to the poet's process of discovery. Both lead to a first version by way of tinkering and revision. As Petroski notes, though, when engineers who build bridges and other singular structures publish their first version, it is their last version. But I think that smaller products which are mass produced often can be improved over time, in new versions. And software is different... Not only can we grow a product through a conscious process of refactoring, revision, and rewriting from scratch, but after we publish Version 1.0 we can continue to evolve the product behind its interface -- even while it is alive, servicing users. Software is a new sort of medium, whose malleability makes cleaving too closely to the engineering mindset misleading. (Of course, software developers should still learn from their mistakes!)
From June 2000:
You cannot have good science without having good science fans. Today science fans are people who are only interested in the results of science. They are not interested in a good play in science as a football fan is interested in a good play in football. We are not going to be able to have an excellent scientific effort unless the man in the street appreciates science.
This is reminiscent of an ongoing theme in this blog and in the larger computer science community. It continues to be a theme in all of science as well. How do we reform -- re-form -- our education system so that most kids at least appreciate what science is and means? Setting our goal as high as creating fans who are as into science as they are into football or NASCAR would be ambitious indeed!
Oh, and don't think that this ongoing theme in the computer science and general scientific world is a new one. The quote above is from Edward Teller, taken off the dust jacket of a book named Rays: Visible and Invisible, published in 1958. The more things change, the more they stay the same. Perhaps it should comfort us that the problem we face is at least half a century old. We shouldn't feel guilty that we cannot solve it overnight.
And finally, from August 2000:
To the outsider, science often seems to be a frightful jumble of facts with very little that looks human and inspiring about it. To the working scientist, it is so full of interest and so fascinating that he can only pity the layman.
I think the key here is to make more people insiders. This is what Alan Kay urges us to do -- he's been saying this for thirty years. The best way to share the thrill is to help people to do what we do, not (just) tell them stories.
I recently mentioned again Seth Godin's All Marketers are Liars in the context of teachers as liars. One last mention -- this time, for researchers and students.
As I read pages 29 and 30 of the book, I was struck by how much Godin's advice for marketers matches my experience as a researcher, first as a graduate student, then as a young faculty member, and now as a grizzled veteran. Consider:
There are only two things that separate success from failure in most organizations today:
- Invent stuff worth talking about.
- Tell stories about what you've invented.
That is the life of the academic researcher: invent cool stuff, and talk about the inventions. Some of my best professors were people who invented cool stuff and loved to talk about their inventions. They relished being in the lab, creating, and then crafting a story that shared their excitement. As a student, undergrad and grad alike, I was drawn to these profs, even when they worked in areas that didn't interest me much. When they did -- wow.
Many of us get into research because we want to do #1, and #2 is just part of the deal. Whether the young researcher wants to or not, telling the stories is essential. It is how we spread our ideas and get the feedback that helps us to improve them. But on a more mercenary level it's also how we get folks interested in offering us tenure-track positions, and then offering us tenure.
Over the course of my career, I have come to realize how many people go into research because they want to do #2. As strange as it might sound, getting a Ph.D. is one of the more attractive routes to becoming a professional story-teller, because it is the de facto credential for teaching at universities. Sometimes these folks continue to invent cool stuff to talk about. But some ultimately fall away from the research game. They want to tell stories, but without the external pressure to do #1. Maybe they lose the drive to invent, or never really had it in the first place. These folks often become great teachers, too, whether as instructors at research schools or as faculty at so-called "teaching universities". Many of those folks still have a passion for something like #1, but it tends toward learning about the new stuff that others create, synthesizing it, and preparing it for a wider audience. Then they tell the stories to their students and to the general public.
As I've written before, CS needs its own popular story teller, working outside the classroom, to share the thrill... I don't think that has to be an active researcher -- think about the remarkable effect that Martin Gardner had on the world by sharing real math with us in ways that made us want to do mathematics -- and even computer science! But having someone who continues to invent be that person would work just fine. Thank you, Mr. Feynman.
So, to my grad students and to graduate students everywhere, this is my advice to you: Invent stuff worth talking about, and then tell stories about what you've invented.
But this advice is not just for graduate students. Consider this passage from Godin, which I also endorse wholeheartedly:
On a personal level, your resume should be about inventing remarkable things and telling stories that register--not about how good you are at meeting specs. Organizations that are going to be around tomorrow will be those that stop spending all their time dealing with the day-to-day crises of shipping stuff out the door or reacting to emergencies. Instead the new way of marketing will separate winners from losers.
This is where the excitement and future of computer science in industry lie, too. Students who can (only) meet specs are plentiful and not always all that valuable. The real value comes in creating and integrating ideas. This is advice that I've been sharing with entrepreneurially-minded students for a while, and I think as time goes by it will apply to more and more students. Paul Graham has spent a lot of time spreading this message, in articles such as What You'll Wish You'd Known, and I've written about Graham's message here as well. The future belongs to people who are asking questions, not people who can deliver answers to other people's questions.
So, this advice is not just for students. It is for everyone.
I love to hear from readers who have enjoyed an article. Often, folks have links or passages to share from their own study of the same issue. Sometimes, I feel a need to share those links with everyone. Here are three, in blog-like reverse chronological order:
Geoff Wozniak pointed me in the direction of Gilad Bracha's work on pluggable type systems. I had heard of this idea but not read much about it. Bracha argues that a type system should be a wrapper around the dynamically typed core of a program. This makes it possible to expose different views of a program to different readers, based on their needs and preferences. More thinking to do...
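I won't pretend the sketch below is Bracha's actual design, but it conveys the flavor of the idea in Ruby. All of the names here are my own invention for illustration: an optional layer of checks sits on top of a dynamically typed core, and peeling that layer away changes nothing about how the program behaves.

```ruby
# Dynamically typed core: no type declarations anywhere.
def area(width, height)
  width * height
end

# An optional, pluggable layer of checks wrapped around the core.
# Removing this wrapper leaves the program's behavior unchanged
# for all well-typed calls -- the checks add information, not meaning.
def checked_area(width, height)
  [width, height].each do |arg|
    raise TypeError, "expected Numeric, got #{arg.class}" unless arg.is_a?(Numeric)
  end
  area(width, height)
end

puts checked_area(3, 4)    # prints 12
```

Different readers could plug in different wrappers -- or none at all -- which is one way to expose different views of the same program.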
On Robert Lucky and Math Blues
Chris Johnson, a former student of mine, is also a fan of Bob Lucky's. As a graduate student in CS at Tennessee, though, he qualifies for a relatively inexpensive IEEE student membership and so can get his fix of Lucky each month in Spectrum. Chris took pity on his old prof and sent me a link to Lucky's Reflections on-line. Thank you, thank you! More reading to do...
Seth Godin's thesis is that all good marketers "lie" because they tell a story tailored to their audience -- not "the truth, the whole truth, and nothing but the truth". I applied his thesis to CS professors and found it fitting.
As old OOSPLA friend and colleague Michael Berman reminds us, this is not a new idea:
Another noteworthy characteristic of this manual is that it doesn't always tell the truth.... The author feels that this technique of deliberate lying will actually make it easier for you to learn the ideas.
That passage was written by Donald Knuth in the preface to The TeXbook, pages vi-vii. Pretty good company to be in, I'd say, even if he is an admitted liar.
Keep those suggestions coming, folks!
... choose any two.
In a recent blog entry, JRuby developer Charles Nutter claimed that, in general, "[g]ood authors do not have time to be good developers". This post has engendered a fair amount of discussion, but I'm not sure why. It shouldn't surprise anyone that staying on top of your game in one time-consuming discipline makes it hard, if not impossible, to stay on top of your game in a second time-consuming discipline. There are only so many hours in a day, and only so many brain cycles to spend learning and doing.
I face this trade-off, but between trying to be a good teacher and trying to be a good developer. Like authors, teachers are in a bind: To teach a topic well, we should do it well. To do it well takes time. But the time we spend learning and doing it well is time that we can't spend teaching well. The only chance we have to do both well is to spend most of our time doing only these two activities, but that can mean living a life out of balance.
If I had not run for 3-1/2 hours last Sunday, I suppose that I could have developed another example for my class or written some Ruby. But would I have been better off? Would my students? I don't know.
While I have considered the prospect of writing a textbook, I've not yet found a project that made me want to give up time from my teaching, my programming, my running, and my family. Like Nutter, I like to write code. This blog gives me an opportunity to write prose and to reach readers with my ideas, while leaving me the rest of the day to teach, and perhaps to help others learn to like to write code by my example.
... for the first time since last September.
In the two weeks since my last training update, I have run 99 miles, in weeks of 43 and 56. The 56 is not an extraordinary number during marathon training, though for me it's a signal that I am reaching the peak of my plan. But after the last eleven months, 56 miles seems amazing. And it feels great.
The 99 miles culminated in a long run of 23 miles on Sunday morning. Rather than run a 23-mile route, I pieced together two passes around an 8-mile loop followed by a 7-mile loop. This allowed me to stop by my house twice during the run, grab a power gel, and take any other breaks (ahem) that I might need.
I didn't run fast -- just a bit under 9:00 minutes per mile -- but that's a good pace for a long run when my marathon goal pace is 8:00 or even 8:30. (I ran last Sunday's 12-miler at a sub-8:30/mile pace.) This run challenged me not only with its distance but also its hills. The 8-mile loop has several long rises and falls, and running down the hills left me with sore quadriceps. But it's a soreness I am happy to carry into this week.
And before you tell me that Iowa is flat and has no hills, let me remind you that hills are relative. When I run mostly flat routes, a few miles of hills in a row affects the legs. When compounded with distance, the hills matter more. I invite anyone who runs mostly flat ground to join me for a week. Then we'll see who thinks eastern Iowa is flat!
I think I am back in the groove, or close. My last five weeks have been 44, 46, 48, 43, and 56 miles. The next two will tell; they call for 44 and 60 miles, respectively, ending with a 25-mile long run. Then comes my taper, when I progressively cut mileage, convert stamina into speed, and let my body recover a bit before the race.
I don't have any grand analogies between running and agile software development right now. Sustainable pace and continuous feedback have been instrumental in building my mileage back up. But when push comes to shove, it's mostly about running -- just as software development ultimately comes down to programming. At the end of the day, all you have to show are the code you wrote, or the miles you ran.
A couple of my recent entries (here and here) have questioned the value of data types, at least in relation to a corresponding set of unit tests. While running this weekend, I played devil's advocate with myself a bit, thinking, "But what would a person who prefers programming with manifest types say?"
One thing that types provide that tests do not is a sense of universality. I can write a test for one case, or two, or ten, but at the end of the day I will have only a finite number of test cases. A type checker can make a more general statement, though one of more limited scope. The type checker can say, "I don't know the values of all possible inputs, but I do know that all of the values are integers." That information can be quite useful. A compiler can use it to generate more efficient target code. A programmer can use it to generalize more confidently from a small set of tests to the much larger set of all possible tests.
In unit-testing terms, types give us a remarkable level of test coverage for a particular kind of test case.
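A toy Ruby sketch of my own makes the contrast concrete. Each test pins down exactly one point in the input space, while a type asserts one weak property of every possible input at once; the runtime check below is only a stand-in for what a static checker would verify once, for all calls.

```ruby
def double(n)
  n + n
end

# Tests: each one covers exactly one point in the input space.
raise unless double(2) == 4
raise unless double(-7) == -14
raise unless double(0) == 0

# A type makes a universal but weaker claim: every argument that
# reaches this method is an Integer. It says nothing about what
# the method computes -- only what kind of thing flows through it.
def double_typed(n)
  raise TypeError, "Integer expected" unless n.is_a?(Integer)
  n + n
end
```

The three tests say more about behavior; the type-style check says something about infinitely many inputs.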
This is a really useful feature of types. I'd like to take advantage of it, even if I don't want my language to get in my way very much while I'm writing code. One way I can have both is to use type inference -- to let my compiler, or more generally my development environment, glean type information from my code and use that in ways that help me.
There is another sense in which we object-oriented programmers use types without thinking about them: we create objects! When I write a class, I define a set of objects as an abstraction. Such an object is specified in terms of its behavioral interface, which is public, and its internal state, which is private. This creates a kind of encapsulation that is just like what a data type provides. In fact, we often do think of classes as abstract data types, but with the twist that we focus on an object's behavioral responsibility, rather than manipulating its state.
That said, newcomers to my code benefit from manifest types because the types point them to the public interface expected of the objects that appear in the various contexts of my program.
I think this gets to the heart of the issue. Type information is incredibly useful, and helps the reader of a program in ways that a set of tests does not. When I write a program with a data-centric view of my abstractions, specifying types up front seems not only reasonable but almost a no-brainer. But when I move away from a data-centric view to a behavioral perspective, tests seem to offer a more flexible, comfortable way to indicate the requirements I place on my objects.
This is largely perception, of course, as a Java-style interface allows me to walk a middle road. Why not just define an interface for everything and have the benefits of both worlds? When I am programming bottom-up, as I often do, I think the answer comes down to the fact that I don't know what the interfaces should look like until I am done, and fiddling with manifest types along the way slows me down at best and distracts me from what is important at worst. By the time I know what my types should look like, they are of little use to me as a programmer; I'm on to the next discovery task.
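Here is a small Ruby illustration of the behavioral view, with names invented for the example. Nothing ever declares a type; the implicit interface is simply whatever messages the code actually sends, and that is what a newcomer must discover.

```ruby
# render never declares a type for its argument. Any object that
# answers #name and #describe will do (duck typing); the implicit
# interface is whatever messages render actually sends.
def render(shape)
  "#{shape.name}: #{shape.describe}"
end

# A class written long after render still works, because it
# answers the right messages -- no interface declaration needed.
class Circle
  def initialize(radius)
    @radius = radius
  end

  def name
    "circle"
  end

  def describe
    "radius #{@radius}"
  end
end

puts render(Circle.new(2))    # prints circle: radius 2
```

A test exercising render is one way to document that implicit interface for the next reader, which is where the tests-versus-types trade-off bites.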
I didn't realize that my mind would turn to type inference when I started this line of questioning. (Thinking and writing can be like that!) But now I am wondering how we can use type inference to figure out and display type information for readers of code when it will be useful to them.
The software world always seems to have a bandwagon du jour, which people are either riding or rebelling against. When extreme programming became the rage a while back, all I seemed to hear from some folks was that "agile" was a buzzword, a fad, all hype and no horse. Object-oriented programming went through its bandwagon phase, and Java had its turn. Lately it seems Ruby is the target of knowing whispers, that its popularity is only the result of good marketing, and it's not really all that different.
But what's the alternative? Let's see what Turing Award winner Niklaus Wirth has to say:
Why, then, did Pascal capture all the attention, and Modula and Oberon got so little? Again I quote Franz: "This was, of course, partially of Wirth's own making". He continues: "He refrained from ... names such as Pascal-2, Pascal+, Pascal 2000, but instead opted for Modula and Oberon". Again Franz is right. To my defense I can plead that Pascal-2 and Pascal+ had already been taken by others for their own extensions of Pascal, and that I felt that these names would have been misleading for languages that were, although similar, syntactically distinct from Pascal. I emphasized progress rather than continuity, evidently a poor marketing strategy.
But of course the naming is by far not the whole story. For one thing, we were not sufficiently active -- today we would say aggressive -- in making our developments widely known.
Good names and aggressive dissemination of ideas. (Today, many would call that "marketing".)
Wirth views Pascal, Modula, and Oberon as an ongoing development of 25 years that resulted in a mature, elegant, and powerful language, a language he couldn't even imagine back in 1969. Yet for many software folks, Modula was a blip on the scene, or maybe just a footnote, and Oberon was, well, most people just say "Huh?" And that's a shame, because even if we choose not to program in Oberon, we lose something by not understanding what it accomplished as a language capable of supporting teams and structured design across the full array of system programming.
I never faulted Kent Beck for aggressively spreading XP and the ideas it embodied. Whatever hype machine grew up around XP was mostly a natural result of people becoming excited by something that could so improve their professional practice. Yes, I know that some people unscrupulously played off the hype, but the alternative to risking hype is anonymity. That's no way to change the world.
I also applaud Kent for growing as he watched the results of XP out in the wild and for folding that growth back into his vision of XP. I wonder, though, if the original version of XP will be Pascal to XP2e's Modula.
By the way, the Wirth quote above comes from his 2002 paper Pascal and Its Successors. I enjoy hearing scientists and engineers tell the stories of their developments, and Wirth does a nice job conveying the context in which he developed Pascal, which had a great many effects in industry but more so in the academic world, and its progeny. As I read, I reacted to several of his remarks:
Its foundations reached far deeper than simply "programming without go to statements" as some people believed. It is more closely related to the top-down approach to problem solving.
Yes, and in this sense we can more clearly see the different mindset between the Structured Programming crowd and the bottom-up Lisp and Smalltalk crowd.
Data typing introduces redundancy, and this redundancy can be used to detect inconsistencies, that is, errors. If the type of all objects can be determined by merely reading the program text, that is, without executing the program, then the type is called static, and checking can be performed by the compiler. Surely errors detected by the compiler are harmless and cheap compared to those detected during program execution in the field, by the customer.
Well, yeah, but what if I write tests that let me detect the errors in house -- and tell more about my program and intentions than manifest types can?
The goal of making the language powerful enough to describe entire systems was achieved by introducing certain low-level features.... Such facilities ... are inherently contrary to the notion of abstraction by high-level language, and should be avoided. They were called loopholes, because they allow to break the rules imposed by the abstraction. But sometimes these rules appear as too rigid, and use of a loophole becomes unavoidable. The dilemma was resolved through the module facility which would allow to confine the use of such "naughty" tricks to specific, low-level server modules. It turned out that this was a naive view of the nature of programmers. The lesson: If you introduce a feature that can be abused, then it will be abused, and frequently so!
This is, I think, a fundamental paradox. Some rules, especially in small, elegant languages, don't just appear too rigid; they are. So we add accommodations to give the programmer a way to breach the limitation. But then programmers use these features in ways that circumvent the elegant theory. So we take them out. But then...
The absence of loopholes is the acid test for the quality of a language. After all, a language constitutes an abstraction, a formal system, determined by a set of consistent rules and axioms. Loopholes serve to break these rules and can be understood only in terms of another, underlying system, an implementation.
Programming languages are not (just) formal systems. They are tools used by people. An occasional leak in the abstraction is a small price to pay for making programmers' lives better. As Spolsky says, "So the abstractions save us time working, but they don't save us time learning."
A strong culture is a better way to ensure that programmers don't abuse a feature to their detriment than a crippled language.
All that said, we owe a lot to Wirth's work on Pascal, Modula, and Oberon. It's worth learning.
A good friend sent me a note today that ended like this:
I feel like this has opened up a whole new world, a whole new way of thinking about programming. In fact, I haven't had such a feeling ... since my early days of first learning Pascal and the days of discovering data structures, for loops, recursion, etc...
I know this sensation and hope every person -- student and professional -- gets to feel it every so often. Students, seek it out by taking courses that promise more than just more of the same. Professionals, seek it out by learning a new kind of language, a new kind of web framework, a new kind of programming. This feeling is worth the effort, and makes your life better as a result.
In an XP mailing list thread "Are current "popular" programming languages enterprise grade?", someone raised a concern that some languages result in "buggier" code. William Pietri responded to a more general concern:
From the "enterprise" perspective, I think there's some legitimate worry about more flexible languages.
If I have to inherit a great code base, I'm sure I'd be happy if it were in Ruby. But if I have to inherit a bad one, I'd rather it be in Java. Its surly and restrictive nature makes some sorts of archeology easier, partly because it prevents some of Ruby's beautiful magic.
Now personally, I'd solve this problem by making sure all code bases are great ones. But if one already has a culture of tolerance for mediocrity and/or building one's house on sand, then restricting people to "safe" tool choices isn't crazed.
Maybe I haven't been paying attention, but this is the first time I recall seeing someone say in quite this way why "better" languages are risky for general use. A more powerful language enables beautiful magic that makes digging into a new codebase more difficult. The claim seems to be that it is easier to understand bad code written in a less powerful language.
I'm not sure how I feel about this claim just yet. Is it really easier for a Scheme programming expert to understand bad Scheme code than bad Java code? Is difficulty more a function of the beautiful magic a language allows, or more a function of the programmer's skill? William speaks of his inheriting someone else's bad code, but maybe our concern should be a weak programmer inheriting someone else's bad code? This is the heart of many people's concerns with using powerful but non-mainstream languages in production systems, that there just aren't enough good programmers prepared to work in those languages.
But William's answer has me thinking. It provoked another interesting message, Phlip's response that took the claim in a different direction:
That's just a way to say this
...static typing is a form of unit tests
...Java enforces static typing viciously
...I'd rather inherit a project with any unit tests.
This is a great way to think about manifest typing: Types are a degenerate form of unit test, and languages that enforce manifest types require programmers to write degenerate tests. In Phlip's idea, William is happy to receive a bad Java codebase because at least it has types as tests. But if the Ruby codebase you inherit comes with unit tests...
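One way to make Phlip's point concrete is a toy of my own in Ruby. A manifest type declaration does roughly the work of one very weak test; a real unit test subsumes that claim and says strictly more.

```ruby
def balance_after(deposit)
  100 + deposit
end

# What a Java-style manifest type would guarantee, written as a test:
# "the result is an Integer." True, but it says nothing about the value.
raise unless balance_after(50).is_a?(Integer)

# A real unit test subsumes that claim and pins down behavior too:
# the result is an Integer *and* it is the right Integer.
raise unless balance_after(50) == 150
```

Inheriting a Java codebase means inheriting the first kind of assertion everywhere; inheriting a tested Ruby codebase means inheriting the second.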
Many agile developers extol the virtue of programming in more flexible -- and powerful -- languages, and most know that by writing tests they mitigate the loss of manifest typing and end up with a net win: code whose tests tell us much more than the manifest types would have anyway, plus the benefits of using a more powerful language. I'd argue that for a great many programmers, using a more powerful language can be a lot more fun, too.
Ultimately, the issue comes back to an old question: Can we prepare the majority of programmers to use Scheme, Ruby, Smalltalk, or Haskell effectively enough to make them suitable for the mainstream? I still think yes, and believe that the drifting of powerful concepts into popular languages such as Ruby is a sign of gradual gains. Whether this pans out in the long run, we'll have to see.
We often speak of metaphors for software development, but what we are really talking about is a mythology. Until this morning, I forgot that I had already written about this topic back in September 2004, when I wrote Myth and Patterns about a session led by Norm Kerth at PLoP that year. I just re-read that piece and had two thoughts. One, what a great day that was! And two, I don't have much new to say on this topic after all. I was jazzed this morning to talk about the fact that software engineering is more a mythology of software development than a meaningful metaphor for doing it.
Why jazzed? Recently I ran across a satirical fable entitled Masterpiece Engineering by Thomas Simpson, which appears as an appendix in Brian Randell's reminiscence of having edited the reports of the 1968-1969 NATO Software Engineering conferences. Simpson's satire made light of the software engineering metaphor by comparing it to the engineering of artistic masterpieces. He wrote it in response to the unexpected tone of the second NATO conference, held in 1969. The first conference is famous for being where the term "software engineering" was coined, more as a thought experiment about what software developers might strive for than anything else. But, as Randell writes, the second conference took some participants by surprise:
Unlike the first conference, at which it was fully accepted that the term software engineering expressed a need rather than a reality, in Rome there was already a slight tendency to talk as if the subject already existed. And it became clear during the conference that the organizers had a hidden agenda, namely that of persuading NATO to fund the setting up of an International Software Engineering Institute.
Show me the money. Some things never change.
However things did not go according to their plan. The discussion sessions which were meant to provide evidence of strong and extensive support for this proposal were instead marked by considerable scepticism...
I'm glad that I found Simpson's satire and Randell's report. I enjoyed both, and they caused me to look back to my blog on Kerth's PLoP session. Reading it again caused me to think again about what I had planned to write this morning. That old piece reminded me of something I've forgotten: myths embody truth. I was set to diss the engineering metaphor as "nothing but a story", but I had forgotten that we create myths to help us explain the ways of the universe. They contain some truths, sometimes filled out with fanciful details whose primary purpose is to make us want to hear the story -- and tell it to others. "Deep truths often lie inside stories that are themselves not strictly factual."
So rather than speak ill of software engineering, the worst I can say this morning is that we became too self-important in the software engineering myth too soon. We came to believe that all the filler -- all of the details we borrowed from engineering to make the story more complete -- were true, too. We created an orthodoxy out of what was really just a great source of inspiration to find our own way in the challenging world of creating software.
Maybe we have learned enough in the last forty years to know that much of the software engineering myth no longer matches our understanding of the world, but we aren't willing to give up the comfort of the story.
What we need to do is to identify the deep truths that lie at the heart of our myth, and use them to help us move on to something better. I think Ward and Kent did that when they created XP. But it, too, has now reached an age at which we should deconstruct it, identify its essential truths, and create something better. Fortunately, the culture of XP and other agile approaches not only allows this growth; it encourages it. The agile myth is meant "to be shaped to its environment, retold in as many different forms as there are tellers, as we all work together to find deeper truths about how to build better software better."
I'm still learning from a conference I attended three years ago this weekend.
... or When the Seventh Commandment Doesn't Apply
I have had Warren's Question, a paper by Sally Fincher and Josh Tenenberg, in my to-read/ folder for a while, since soon after it hit the ICER 2007 conference web site. (The paper will be presented on Saturday.) I grabbed it on my way to get a sandwich yesterday, for a nice lunchtime read. I am glad I did, and that surprises me. I didn't expect to enjoy it as much as I did.
Why wouldn't I enjoy it? Well, it's a CS education paper. That sounds wrong, I know, but "education papers" usually turn me off before I can get to the value. Section titles such as "The political context of self-disclosure" make my technical self cringe. I am sympathetic to the goals of CS education research, but the jargon and methodological baggage usually overwhelm me.
Skimming through the paper, I found the basic story, and it drew me in. I soon realized that the paper makes some good points and raises some interesting questions.
The story of Warren's question is one of teachers sharing their practical knowledge. Fincher and Tenenberg deconstruct several messages on a mailing list of CS instructors, starting with a couple of questions from Warren seeking help on teaching interactively and on fixing a broken lab grading procedure. He receives answers to some questions off-list, and some on-list replies that address his questions only obliquely but offer advice of a more general nature.
This is how most teaching knowledge is shared: one-on-one, or in small groups of friends. Many people refer to this style of exchange as stealing. There is almost a badge of honor among some of my friends and colleagues in stealing from as many accomplished, creative teachers as possible. I steal shamelessly from people such as Owen Astrachan, Robert Duvall, Joe Bergin, and Mike Clancy, and I've heard from others that they have stolen ideas from me. Good ideas spread through networks of friends and colleagues, sometimes jumping into new pools through journal publications and conference presentations.
Fincher and Tenenberg point out that this means of sharing knowledge has several shortcomings. It's not that stealing is wrong, or that most folks mind that their ideas have been lifted. I'm happy to hear that someone has found one of my ideas or techniques so useful that he has adapted it for his own use. My friends and I share ideas with the expectation that others might find them useful -- that's the point.
But informal sharing of this sort results in a loss of provenance. In the arts, a work's provenance is the record of the work's history, in particular its ownership. This "audit trail" offers some measure of confidence in the authenticity of the work, which reduces the risk of being defrauded in exchange. When it comes to teaching practice, provenance isn't really about the risk of being sold a fake; it's more about trusting the efficacy of the technique in practice:
If we do not know the history of the practice we examine, then we take it as if new. We cannot tell whether this is long-established and well-evolved, worked on by respected educators over time, or whether it was fresh-minted yesterday. Not only that, but we cannot know why any adaptations, or changes, have been made.
If I know that a technique originated with Owen Astrachan, was adapted by Nick Parlante for use in an object-oriented course, and was "ported" to interactive classes by Robert Duvall, then I'll feel a lot more confident about using the technique in my interactive OOP classroom than if the technique comes to me blind, from a source I don't know -- and so, out of safety, trust less. In a network of friends, provenance is part of the group's collective memory. But groups evolve over time, and ideas leak out into groups who do not share the memory.
This loss of provenance leads to a "rootlessness and reinvention of practice" that hampers the improvement of teaching everywhere. We all keep reinventing the wheel.
How does this differ from the research side of academia? In the research culture, stealing from others is strictly forbidden. One must cite the work of others and acknowledge the source of ideas. This plays several important roles in the culture, among them giving credit for the development of theories, authenticating the trail of ideas, differentiating work done in different contexts, and evaluating the efficacy of theories.
When I speak of lifting ideas as standard practice, I am exaggerating a bit. Stealing isn't really okay. Whenever I see someone using an idea I think I developed, without credit, I feel a bit violated, much as I do when someone steals my treasure. When we appropriate an idea, we should share credit. I may steal shamelessly, but I tell everyone where I got the candy. But even when we share credit informally, we still lose the provenance of the idea. Publication and citation of others' work remain the primary way to document teaching practice formally. When there is no archival publication to cite, the author becomes the first line of history and has to document her sources in the new paper.
Documenting teaching practice, beyond documenting research results, would help us to teach computer science better, and that in turn would help us to improve computer science and help the CS community serve the world better. This is one of the insights of the software patterns community: that documenting software development practice from the trenches can improve the state of software development. The writers' workshops of PLoP both help to share practice and help practitioners communicate their practice more effectively.
The patterns culture came to the teaching world in the form of pedagogical patterns, which draw their inspiration from software patterns but aim to record the practice of experienced instructors. This movement started in the domains of software development and computer science, but that is an artifact of history. The techniques so far documented as pedagogical patterns are more general.
The CS education community has non-pattern venues for sharing teaching practice. In addition to refereed papers, SIGCSE has for several years offered nifty assignments sessions that open what once was a closed network of colleagues to the broader community, and that now offer a platform to a wider set of instructors. Likewise, OOPSLA once hosted workshops on teaching OO design and more recently has offered a series of "killer examples" workshops.
Fincher and Tenenberg argue that, even if we had a more complete literature of teaching practice, the task of finding what one needs would be difficult. It is hard to specify all of the variables in play within most teaching scenarios, which makes it hard to index resources and then traverse the literature. In the language of patterns, teachers have a hard time characterizing the problem that a technique solves and the context in which the technique applies. I experienced this difficulty when trying to write pedagogical patterns, and observed it when I read pedagogical patterns written by others. How does one define the context and problem concretely enough to be useful? Without such boundaries, a pattern has more the character of a sweet sentiment than a teaching technique.
Researchers engage in abstraction and instantiation. So do teachers. They tell stories. Together, the story teller and the hearer generalize the story out of the teller's context and adapt it to work in other contexts.
I gain from being a part of networks of colleagues who discuss and share practice. I'm on a couple of mailing lists like the one described in "Warren's Question", and they all enrich my teaching, as well as my appreciation for computer science. I play a slightly different role on each list and enjoy a different status. On one in particular, I am a peripheral member of a core group of friends who are master CS teachers. Over the years, I became friends with several of them in various ways and thus came to be attached to the fringe of the group. On another, I am a core member and a leader. Whatever my role, I find that I learn much from my interactions with the group, as we share ideas and their results.
That's one of the reasons I like to blog. It gives me a venue in which to tell an occasional story about what I do and how it turns out in the trenches with my students. Many of those ideas are "stolen" -- from friends one-on-one, from other blogs, and even from other contexts. For example, I love to make connections between software development and teaching, and between software development and running. I like to look for ways to improve how I teach from writing, other arts, running and other sports, almost any source. But my blog gives me a way to give credit to the source of my ideas -- colleagues, authors, and whole other disciplines. A blog may not be as formal as archival publication, or as highly respected, but it does allow a more immediate exchange of ideas and a far wider conversation than I could ever have with just my friends.
As usual for me, reading "Warren's Question" is just a starting point. I have long known about Donald Schon's books on the reflective practitioner and heard how good they are. But I must confess... I have never read Schon. Given the applicability of his ideas to so many ideas that have occupied me over the years -- apprenticeship and studio instruction among them -- I really should be embarrassed. I was intrigued by Fincher and Tenenberg's discussion of Schon's "hall of mirrors" and how one's ideas are reflected back when adopted and modified by others, and so I think I'll be making a short trip to the library soon!
I have just completed the fifth week of my 12-week training plan for the Marine Corps Marathon. Not quite halfway, but when I throw in the "bonus week" in Indiana and California that interrupted the start of the plan, I am through six of thirteen weeks -- and halfway to that last week before the race, when we let our bodies rest and our minds prepare for the big day.
Today I ran twenty-one miles, one of my standard fourteen-mile routes followed by a standard seven-miler. After five miles, then seven, then nine, my legs were balking. My mind was thinking, "Maybe you should just do fourteen today. That's a good run. You deserve a break. You can do twenty-one next week."
My mind was right. At the start of the program, this week called for a twelve-miler, recovering from a build-up to twenty-one miles last week. But early in August I had a bad fifteen-miler, which I attributed in part to my lack of mileage this summer. I listened to my body and made an adjustment to my program, to build up to twenty-one more gradually. But one outcome of this change is that I haven't had a "drop-back week" since July. That's a week in which I reduce my long-run mileage so that my body can adjust to the increased miles from the previous week or two. Usually I "drop back" every third week during training; my training plan drops back every other week.
So I would have been justified in holding at fourteen and living to run another week. But I just didn't want to, sore legs or not. Running a marathon is about pushing your body -- and your mind -- to run when it wants to stop, even when the wise move is to stop. This morning held a moment of challenge for me. The weather is perfect. My body is well-fueled. My mind wants to stop. But I want to run more.
At twelve miles, I took a mocha Clif Shot -- 100 calories, 70mg of electrolytes, and 50mg of caffeine. Have you ever baked chocolate chip cookies, taken one right from the oven, and indulged in that decadent sensation? This gel tasted just like that. I felt as if I should stop at the nearest Catholic church and make confession. For the rest of the run, my mind felt good as it commanded my legs to do what they must do: just keep moving.
Some days hold moments of challenge that are at the same time moments of promise. I sense that this morning's challenge held such a promise. I've been struggling more mentally since my last marathon than at any time in the preceding three years. Today's run asked me, "Do you mean business?" My legs are sore, but my answer was "Yes".
I used to be a member of the IEEE Computer Society. A few years ago, a combination of factors (including rising dues, a desire to cut back on paper journal subscriptions, and a lack of time to read all the journals I was receiving) led me to drop my membership. In some ways, I miss receiving the institute's flagship publication, Spectrum. It was aimed more at "real" engineers than software types, but it was a great source of general information across a variety of engineering disciplines. My favorite column in Spectrum was Robert Lucky's "Reflections". It is written in a blog-like fashion, covering in a conversational tone whatever neat ideas he has been thinking about lately.
For some reason, this week I received a complimentary issue of Spectrum, and I immediately turned to "Reflections", which graced the last page. In this installment, Lucky writes about how math is disappearing from the practice of engineering, and how this makes him sad. In the old days engineers did more of their own math, while these days they tend to focus on creating and using software to do those computations for them. But he misses the math, both doing it and thinking about it. Once he came to appreciate the beauty in the mathematics that underlies his corner of engineering, and now it is "as if my profession had slipped away while I wasn't looking". Thus his title, "Math Blues".
I appreciate how he must feel, because a lot of what used to be "fundamental" in computer science now seems almost quaint. I especially feel for folks who remain attached to the old fundamentals, because today's world must seem utterly foreign to them. Of course, we can always try to keep our curriculum focused on those fundamentals, though students sometimes realize that we are living in the past.
I felt some math blues as I read Lucky's column, too, but of a different sort. Here is the passage that made me saddest:
I remember well the day in high school algebra class when I was first introduced to imaginary numbers. The teacher said that because the square root of a negative number didn't actually exist, it was called imaginary. That bothered me a lot. I asked, If it didn't exist, why give it a name and study it? Unfortunately, the teacher had no answers for these questions.
What great questions young Bob asked, and what an opportunity for his teacher to open the student to a world of possibilities. But it was a missed opportunity. Maybe the teacher just did not know how to explain such an advanced idea in a way that his young student could grasp. But I think it is likely that the teacher didn't understand, appreciate, or perhaps even know about that world.
Maybe you don't have to be a mathematician, engineer, or scientist to be able to teach math well. But one thing is for certain: not knowing mathematics at the level those professionals need creates a potential for shallowness that is hard to overcome. How much more attractive would a college major in science, math, or engineering look to high school students if they encountered deep and beautiful ideas in their courses -- even ideas that matter when we try to solve real problems?
Outreach from the university into our school systems can help. Many teachers want to do more and just need more time and other resources to make it happen. I think, though, that a systemic change in how we run our schools and in what we expect of our teaching candidates would go even farther.