October 31, 2005 7:19 PM


In my recent post on Gerry Sussman's talk at OOPSLA, I quoted Gerry Sussman quoting concert pianist James Boyk, and then commented:

A work of art is a machine with an aesthetic purpose.

(I am uncomfortable with the impression these quotes give, that artistic expression is mechanistic, though I believe that artistic work depends deeply on craft skills and unromantic practice.)

Thanks to the wonders of the web, James came across my post and responded to my parenthetical:

You may be amused to learn that fear of such comments is the reason I never said this to anyone except my wife, until I said it to Gerry! Nevertheless, my remark is true. It's just that word "machine" that rings dissonant bells for many people.

I was amused... I mean, I am a computer scientist and an old AI researcher. The idea of a program, a machine, being beautiful or even creating beauty has been one of the key ideas running through my entire professional life. Yet even for me the word "machine" conjured up a sense that devalued art. This was only my initial reaction to Sussman's sentiment, though. I also felt an almost immediate need to hedge my discomfort with a disclaimer about the less romantic side of creation, in craft and repetition. I must be conflicted.

James then explained the intention underlying his use of the mechanistic reference in a way that struck close to home for me:

I find the "machine" idea useful because it leads the musician to look for, and expect to find, understandable structures and processes in works of music. This is productive in itself, and at the same time, it highlights the existence and importance of those elements of the music that are beyond this kind of understanding.

This is an excellent point, and it sheds light on other domains of creation, including software development. Knowing and applying programming patterns helps programmers both to seek and recognize understandable structures in large programs and to recognize the presence and importance of the code that lies outside of the patterns. This is true even -- especially!? -- for novice programmers, who are just beginning to understand programs and their structure, and the process of reading and writing them. Much of the motivation for work on the use of elementary patterns in instruction comes from trying to help novices comprehend masses of code that at first glance may seem but a jumble but which in fact bear a lot of structure within them. Recognizing code that is and isn't part of a recurring structure, and understanding the role both play, is an essential skill for the novice programmer to learn.

Folks like Gerry Sussman and Dick Gabriel do us all a service by helping us to overcome our discomfort when thinking of machines and beauty. We can learn something about science and about art.

Thanks to James for following up on my post with his underlying insight!

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

October 27, 2005 7:56 PM

OOPSLA Day 5: Grady Booch on Software Architecture Preservation

Grady Booch, free radical

Grady Booch refers to himself as an "IBM fellow and free radical". I don't know if the 'free radical' part is in his job description or only a self-appellation, but it certainly fits his roving personality. He is a guy with many deep interests and a passion for exploring new lands.

His latest passion is the Handbook of Software Architecture, a project that many folks think is among the most important strategic efforts for the history and future of software development.

Booch opened his invited talk at OOPSLA by reminding everyone that "classical science advances via the dance between quantitative observation and theoretical construction." The former is deliberate and intentional; the latter is creative and testable. Computer science is full of empirical observation and the construction of theories, but in the world of software we often spend all of our time building artifacts and not enough time doing science. We have our share of theories, about process and tools, but much of that work is based on anecdote and personal experience, not the hard, dispassionate data that reflects good empirical work.

Booch reminisced about a discussion he had with Ralph Johnson at the Computer Museum a few years ago. They did a back-of-the-envelope calculation that estimated the software industry had produced approximately 1 trillion lines of code in high-level languages since the 1950s -- yet little systematic empirical study had been done of this work. What might we learn from digging through all that code? One thing I feel pretty confident of: we'd find surprises.

In discussing the legacy of OOPSLA, Booch mentioned one element of the software world launched at OOPSLA that has taken seriously the attempt to understand real systems: the software patterns community, of which Booch was a founding father. He hailed patterns as "the most important contribution of the last 10-12 years" in the software world, and I imagine that his fond evaluation rests largely on the patterns community's empirical contribution -- a fundamental concern for the structure of real software in the face of real constraints, not the cleaned-up structures and constraints of traditional computer science.

We have done relatively little introspection into the architecture of large software systems. We have no common language for describing architectures, no discipline that studies software in a historical sense. Occasionally, people publish papers that advance the area -- one that comes to mind immediately is Butler Lampson's Hints for Computer System Design -- but these are individual efforts, or ad hoc group efforts.

The other thing this brought to my mind was my time in an undergraduate architecture program. After the first-year course, every archie took courses in the History of Housing, in which students learned about existing architecture, both as a historical matter and to inform current practice. My friends became immersed in what had been done, and that certainly gave them a context in which to develop their own outlook on design. (As I look at the program's current curriculum, I see that the courses have been renamed History of Architecture, which to me trades the rich flavor of houses for the more generic 'architecture', even if it more accurately reflects the breadth of the courses.)

Booch spent the next part of his talk comparing software architecture to civil architecture. I can't do justice to this part of his talk; you should read the growing volume of content on his web site. One of his primary distinctions, though, involved the different levels of understanding we have about the materials we use. The transformation from vision to execution in civil systems is not different in principle from that in software, but we understand more about the physical materials that a civil architect uses than we do about a software developer's raw material. Hence the need to study existing software systems more deeply.

Civil architecture has made tremendous progress over the years in its understanding of materials, but the scale of its creations has not grown commensurately beyond what the ancients built. But the discipline has a legacy of studying the works of the masters.

Finally he listed a number of books that document patterns in physical and civil systems, including The Elements of Style -- not the Strunk and White version -- and the books of Christopher Alexander, the godfather of the software patterns movement.

Booch's goal is for the software community to document software architectures in great detail, both for history's sake and for the patterns that will help us create more and more beautiful systems. His project is one man's beginning, and an inspirational one at that. In addition to documenting classic systems such as MacPaint, he aims to preserve our classic software itself. That will enable us to study and appreciate it in new ways as our understanding of computing and software grows.

He closed his talk with inspiration but also a note of warning... He told the story of contacting Edsger Dijkstra to tell him about the handbook project and seek his aid in the form of code and papers and other materials from Dijkstra's personal collection. Dijkstra supported the project enthusiastically and pledged materials from his personal library -- only to die before the details had been formalized. Now, Booch must work through the Dijkstra estate in hopes of collecting any of the material pledged.

We are a young discipline, relatively speaking, but time is not on our side.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

October 26, 2005 5:23 PM

On Being Yourself

From my bedside reading this week:

In an age where there is much talk of "being yourself" I reserve to myself the right to forget about being myself, since in any case there is very little chance of my being anybody else. Rather it seems to me that when one is intent on "being himself" he runs the risk of impersonating a shadow.

-- Thomas Merton, The True Solitude

Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

October 25, 2005 8:43 PM

OOPSLA This and That 3: Geek Jargon

Get a bunch of technology folks together for any length of time and they are bound to coin some interesting words, or use ones they've coined previously, either out of habit or to impress their friends. The Extravagaria gang was no exception.

Example 1: When someone asked how many of us were left-handed, Dick Gabriel said that he was partially ambidextrous, to which Guy Steele volunteered that he was ambimoustrous. I like.

Example 2: At lunch, Guy Steele asked us if we ever intentionally got lost in a town, perhaps a town new to us, so that we had to learn the place in order to get back to a place we knew. Several people nodded vigorous agreement, and John Dougan noted that he and his colleagues use a similar technique to learn a new legacy code base. They call this air-drop programming. This is a colorful analogy for a common pattern among software developers. Sometimes the best way to learn a new framework or programming language is to parachute behind enemy lines, surrender connection to any safety nets outside, and fight our way out. Or better, not fight, but methodically conquer the new terrain.

But the biggest source of neologisms at the workshop was our speaking stick. At a previous Extravagaria workshop, Dick used a moose as the communal speaking stick, in honor of Vancouver as the host city. (Of course, there are probably as many moose in San Diego as in Vancouver, but you know, the Great White North and all.) He had planned to bring the moose to this workshop but left it at home accidentally. So he went to a gift shop and bought a San Diego-themed palm tree to use in its place. The veterans of the workshop dubbed it "the moose" out of one year's worth of tradition, and from there we milked the moose terminology with abandon.

Some of my favorites from the day:

  • on the moose -- back in session; on the clock
  • moose cycles -- a measure of the speed of communication around the group, signified in the passing of the moose
  • virtual moose -- speaking without the moose, but with implicit group permission (We didn't always follow our own rules!)
  • "Moose! Moose!" -- "Me next!"

Even computer professionals, even distinguished computing academics, surrender to the silliness of a good game. Perhaps we take joy in binding objects to names and growing systems of names more than most.

I suppose that I should be careful reporting this, because my students will surely hold it over my head at just the right -- or wrong -- moment!

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

October 24, 2005 7:36 PM

OOPSLA Day 3: Sussman on Expressing Poorly-Understood Ideas in Programs

Gerald Sussman is renowned as one of the great teachers of computing. He co-authored the seminal text Structure and Interpretation of Computer Programs, which many folks -- me included -- think is the best book ever written about computer science. Along with Guy Steele, he wrote an amazing series of papers, collectively called the "Lambda papers", that taught me as much as any other source about programming and machines. It also documented the process that created Scheme, one of my favorite languages.

Richard Gabriel introduced Sussman before his talk with an unambiguous statement of his own respect for the presenter, saying that when Sussman speaks, "... record it at 78 and play it back at 33." In inimitable Gabriel fashion, he summarized Sussman's career as, "He makes things. He thinks things up. He teaches things."

Sussman opened by asserting that programming is, at its foundation, a linguistic phenomenon. It is a way in which we express ourselves. As a result, computer programs can be both prose and poetry.

In programs, we can express different kinds of "information":

  • knowledge of the world as we know it
  • models of possible worlds
  • structures of beauty
  • emotional content

The practical value that we express in programs sometimes leads to the construction of intellectual ideas, which ultimately makes us all smarter.

Sussman didn't say anything particular about why we should seek to express beauty and emotional content in programs, but I can offer a couple of suggestions. We are more likely to work harder and deeper when we work on ideas that compel us emotionally. This is an essential piece of advice for graduate students in search of thesis topics, and even undergrads beginning research. More importantly, I think that great truths possess a deep beauty. When we work on ideas that we think are beautiful, we are working on ideas that may ultimately pay off with deeper intellectual content.

Sussman then showed a series of small programs, working his way up the continuum from the prosaic to the beautiful. His first example was a program he called "useful only", written in the "ugliest language I have ever used, C". He claimed that C is ugly because it is not expressive enough (there are ideas we want to express that we cannot express easily or cleanly) and because "any error that can be made can be made in C".

His next example was in a "slightly nicer language", Fortran. Why is Fortran less prosaic than C? It doesn't pretend to be anything more than it is. (Sussman's repeated characterization of C as inelegant and inexpressive got a rise out of at least one audience member, who pointed out afterwards that many intelligent folks like C and find it both expressive and elegant. Sussman agreed that many intelligent folks do, and acknowledged that there is room for taste in such matters. But I suspect that he believes that folks who believe such things are misguided or in need of enlightenment. :-)

Finally Sussman showed a program in a language we all knew was coming, Scheme. This Scheme program was beautiful because it allows us to express the truth of the domain, that states and differential states are computed by functions that can be abstracted away, that states are added and subtracted just like numbers. So, the operators + and - must be generic across whatever value set we wish to compute over at the time.

In Scheme, there is nothing special about + or -. We can define them to mean what they mean in the domain where we work. Some people don't like this, because they fear that in redefining fundamental operations we will make errors. And they are right! But addition can be a fundamental operation in many domains with many different meanings; why limit ourselves? Remember what John Gribble and Robert Hass told us: you have to take risks to create something beautiful.
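Sussman's examples were in Scheme; purely as an illustrative sketch of the same idea (my own toy example, not his code), Python's operator overloading lets the familiar + and - apply to domain values such as physical states:

```python
# A toy "state" with position and velocity components. Defining
# __add__ and __sub__ lets the ordinary + and - operators work on
# states exactly as they work on numbers.
class State:
    def __init__(self, position, velocity):
        self.position = position
        self.velocity = velocity

    def __add__(self, other):
        return State(self.position + other.position,
                     self.velocity + other.velocity)

    def __sub__(self, other):
        return State(self.position - other.position,
                     self.velocity - other.velocity)

    def __eq__(self, other):
        return (self.position, self.velocity) == \
               (other.position, other.velocity)

# The same + that adds integers now adds states.
s1 = State(10.0, 2.0)
s2 = State(5.0, 1.0)
total = s1 + s2        # State with position 15.0, velocity 3.0
```

The point is not the class itself but that the arithmetic vocabulary stays uniform across value sets.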

This point expresses what seemed to be a fulcrum of Sussman's argument: Mathematics is a language, not a set of tools. It is useful to us to the extent that we can express the ideas that matter to us.

Then Sussman showed what many folks consider to be among the most beautiful pieces of code ever written, if not the most beautiful: Lisp's eval procedure written in Lisp. This may be as close to Maxwell's equations in computer science as possible.
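The eval Sussman showed is written in Lisp itself; as a hedged sketch of the flavor of that idea (my own drastic simplification in Python, not the classic code), here is a toy evaluator for a tiny Lisp-like expression language:

```python
# A toy evaluator for a tiny Lisp-like language, sketched in Python.
# Expressions are nested tuples: numbers, variable names (strings),
# ('if', test, then, else), ('lambda', params, body), and applications.
import operator

GLOBAL_ENV = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(expr, env=GLOBAL_ENV):
    if isinstance(expr, (int, float)):      # numbers are self-evaluating
        return expr
    if isinstance(expr, str):               # variables look themselves up
        return env[expr]
    op = expr[0]
    if op == 'if':
        _, test, then, alt = expr
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == 'lambda':                      # build a closure over env
        _, params, body = expr
        return lambda *args: evaluate(
            body, {**env, **dict(zip(params, args))})
    fn = evaluate(op, env)                  # application: eval, then apply
    args = [evaluate(a, env) for a in expr[1:]]
    return fn(*args)

# ((lambda (x) (* x x)) 7)  evaluates to 49
result = evaluate((('lambda', ('x',), ('*', 'x', 'x')), 7))
```

A few dozen lines suffice to express "what it means to evaluate an expression", which is the heart of why people compare eval to Maxwell's equations.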

This is where Sussman got to the key insight of his talk, the insight that has underlain much of his intellectual contribution to our world:

There are some things we could not express until we invented programming.

Here Sussman distinguished two kinds of knowledge about the world, declarative knowledge and imperative knowledge. Imperative knowledge is difficult to express clearly in an ambiguous language, which all natural languages are. A programming language lets us express such knowledge in a fundamentally new way. In particular, computer programs improve our ability to teach students about procedural knowledge. Most every computer science student has had the experience of getting some tough idea only after successfully programming it.

Sussman went further to state baldly, "Research that doesn't connect with students is a waste." To the extent that we seek new knowledge to improve the world around us, we must teach it to others, so I suppose that Sussman is correct.

Then Sussman clarified his key insight, distinguishing computer programs from traditional mathematics. "Programming forces one to be precise and formal, without being excessively rigorous." I was glad that he then said more specifically what he means here by 'formal' and 'rigorous'. Formality refers to lack of ambiguity, while rigor refers to what a particular expression entails. When we write a program, we must be unambiguous, but we do not yet have to understand the full implication of what we have written.

When we teach students a programming language, we are able to have a conversation with them of the sort we couldn't have before -- about any topic in which procedural knowledge plays a central role. Instead of trying to teach students to abstract general principles from the behavior of the teacher, a form of induction, we can now give them a discursive text that expresses the knowledge directly.

In order to participate in such a conversation, we need only know a few computational ideas. One is the lambda calculus. All that matters is that you have a uniform system for naming things. "As anyone who has studied spirituality knows, if you give a name to a spirit, you have power over it." So perhaps the most powerful tool we can offer in computing is the ability to construct languages quickly. (Use that to evaluate your favorite programming language...)

Sussman liked the Hass lecture, too. "Mr. Hass thinks very clearly. One thing I've learned is that all people, if they are good at what they do, whatever their area -- they all think alike." I suspect that this accounts for why many of the OOPSLA crowd enjoyed the Hass lecture, even if they do not think of themselves as literary or poetic; Hass was speaking truths about creativity and beauty that computer scientists know and live.

Sussman quoted two artists whose comments echoed his own sentiment. First, Edgar Allan Poe from his 1846 The Philosophy of Composition:

... it will not be regarded as a breach of decorum on my part to show the modus operandi by which some one of my own works was put together. I select "The Raven" as most generally known. It is my design to render it manifest that no one point in its composition is referable either to accident or intuition -- that the work proceeded step by step, to its completion with the precision and rigid consequence of a mathematical problem.

And then concert pianist James Boyk:

A work of art is a machine with an aesthetic purpose.

(I am uncomfortable with the impression these quotes give, that artistic expression is mechanistic, though I believe that artistic work depends deeply on craft skills and unromantic practice.)

Sussman considers himself an engineer, not a scientist. Science believes in a "talent theory" of knowledge, in part because the sciences grew out of the upper classes, which passed on a hereditary view of the world. On the other hand, engineering favors a "skill theory" of knowledge; knowledge and skill can be taught. Engineering derived from craftsmen, who had to teach their apprentices in order to construct major artifacts like cathedrals; if the product won't be done in your lifetime, you need to pass on the skills needed for others to complete the job!

The talk went on for a while thereafter, with Sussman giving more examples of using programs as linguistic expressions in electricity and mechanics and mathematics, showing how a programming language enables us -- forces us -- to express a truth more formally and more precisely than our old mathematical and natural languages could.

Just as most programmers have experienced the a-ha! moment of understanding after having written a program in an area we were struggling to learn, nearly every teacher has had an experience with a student who has unwittingly bumped into the wall at which computer programming forces us to express an idea more precisely than our muddled brain allows. Just today, one of my harder-working students wrote me in an e-mail message, "I'm pretty sure I understand what I want to do, but I can't quite translate it into a program." I empathize with his struggles, but the answer is: You probably don't understand, or you would be able to write the program. In this case, examination of the student's code revealed the lack of understanding that manifests itself in a program far more complex than the idea itself.

This was a good talk, one which went a long way toward helping folks see just how important computer programming is as an intellectual discipline, not just as a technology. I think that one of the people who made a comment after the talk said it well. Though the title of this talk was "Why Programming is a Good Medium for Expressing Poorly Understood and Sloppily Formulated Ideas", the point of this talk is that, in expressing poorly-understood and sloppily-formulated ideas in a computer program, we come to understand them better. In expressing them, we must eliminate our sloppiness and really understand what we are doing. The heart of computing lies in the computer program, and it embodies a new form of epistemology.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 20, 2005 6:46 PM

OOPSLA This and That, Part 2

As always, OOPSLA has been a constant font of ideas. But this year's OOPSLA seems to have triggered even more than its usual share. I think that is a direct result of the leadership of Dick Gabriel and Ralph Johnson, who have been working for eighteen months to create a coherent and focused program. As much as I have already written this week -- and I know that some of my articles have been quite long; sorry for getting carried away... -- I have plenty of raw material to keep me busy for a while, including the rest of the Educators Symposium, Gerald Sussman's invited talk, my favorite neologism of the week, and the event starting as I type this: Grady Booch's conference-closing talk on his ambitious project to build a handbook of software architecture.

For now, I'd like to share just a few ideas from the two panels I attended in the middle part of this fine last day of OOPSLA.

The Echoes panel was aimed at exploring the echoes of the structured design movement of the late 1970s. It wasn't as entertaining or as earth-shaking as it might have been given its line-up of panelists, but I took away two key points:

  • Kent Beck said that he recently re-read Structured Design and was amazed how much of the stuff we talk about today is explained in that book. I remember reading that book for the first time back in 1985, after reading Structured Analysis and System Specification in my software engineering senior sequence. They shaped how I thought about software construction.

    I plan to re-read both books in the next year.

  • Grady Booch said that no one reads code, not like folks in other disciplines read the literature of their disciplines. And code is in many ways the real literature that we are creating. I agree with Grady and have long thought about how the CS courses we teach could encourage students to read real programs -- say, in operating systems, where students can read Linux line by line if they want. Certainly, I do this in my compilers course, but not with a non-trivial program. (Well, my programming languages students usually read a non-trivial interpreter, a Scheme interpreter written in Scheme modeled on McCarthy's original Lisp interpreter. That program is small but not trivial. It is the Maxwell equations of computing.)

    I am going to think about how to work this idea more concretely into the courses I teach in the next year.

Like Echoes, the Yoshimi Battles the Pink Robots panel -- on the culture war between programmers and users -- didn't knock my socks off, but Brian Foote was in classic form. I don't think that he was cast in the role of Yoshimi, but he defended the role of the programmer qua programmer.

  • His position statement quoted Howard Roark, the architect in Ayn Rand's The Fountainhead: "I don't build in order to have clients. I have clients in order to build."

    I immediately thought of a couple of plays on the theme of this quote:

    I don't teach to have students. I have students to teach.
    I don't blog to have readers. I have readers to blog.
  • Brian played on words, too, but not Howard Roark's. He read, in his best upper-crust British voice, the lyrics of "I Write the Code", with no apologies at all to Barry Manilow.
    I am hacker, and I write the code.
  • And this one was Just Plain Brian:
    You know the thing that is most annoying about users is that they have no appreciation for the glory, the grandeur, or the majesty of living inside the code. It is my cathedral.

Oh: on his way into the lecture to give his big talk, Grady Booch walked by, glanced at my iBook, and said, "Hey, another Apple guy. Cool."

It's been a good day.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

October 20, 2005 3:57 PM

OOPSLA Day 5: Martin Fowler on Finding Good Design

Sadly... The last day of OOPSLA is here. It will be a good day, including Onward! Films (or Echoes -- could I possibly skip a panel with Kent and so many Big Names?) (later note: I didn't), Lightning Talks (or the Programmers versus Users panel) (later note: I am at this panel now), and finally, of course, Grady Booch. But then it's over. And I'll be sad.

On to Martin or, as he would say, Mah-tin. Ralph Johnson quoted Brian Foote's epigrammatic characterization of Martin: an intellectual jackal with good taste in carrion. Soon, Ralph began to read his official introduction of Martin, written by ... Brian. It was a play on the location of the conference, in Fashion Valley. In the fashion world, good design is essential, and we know our favorite designers by one name: Versace, Halston, Kent, Ward -- and Mah-tin. Martin lives a life full of beautiful models, and he is "a pretty good writer, for an Englishman".

On came Martin. He roamed the full stage, and then sat down in a cushy chair.

When asked to keynote at a conference like OOPSLA, one is flush with pride. Then you say 'yes', which is your first mistake. What can you say? Keynoters are folks who have great ideas, who invent important stuff. Guy's talk was the best. But I'm not a creator or deep thinker, says Martin. I'm that guy who goes to a fashion show, steals the ideas he sees, knocks off the clothing, and then mass-markets them at the mall. I just want to know where we are, says Martin; predicting the future is beyond me.

To shake things up, he called on George Platts to the stage. Those of you who have attended a PLoP conference know George as a "lateral thinking consultant" and games leader. Earlier, Martin had asked him to create a game for the whole crowd, designed to turn us into intellectual jackals. For this game, George drew his inspiration from the magnificent book Life of Pi. (Eugene says: If you have not read this book, do it. Now.) George read a list of animal sounds from one page in the book and told each of us to choose one. He then told each of us to warm up by saying our own name in that fashion. (I grunted "Eugene"). He then asked everyone whose first name started with A to stand and do it for the crowd. Then I and R; D, M, V; G, P, Y; ... and finally E, N, and W. Finally, the whole room stood and hissed, grunted, growled "OOPSLA".

Back came Martin to the forefront, for the talk itself.

Martin's career is aimed at answering the question, "What is good design?" He seeks an answer that is more than just fuzzy words and fuzzy ideas.

At the beginning of his career, Martin was an EE. In that discipline, designers draw a diagram that delineates a product and then pass it on to the person who constructs the thing. When Martin moved on to software engineering, he found that it had adopted this same approach. But soon he came to reject it. Coding as construction fails. He found that designs -- other people's designs and his own -- always turned out to be wrong. Eventually, he met folks he recognized as good designers, who design and code at the same time. Martin came to see that the only way to design well is to program at the same time, and the only way to program well is to design at the same time.

That's the philosophy underlying all of Martin's work.

How do we characterize good designs? A set of principles.

One principle central to all good designs is to eliminate duplication. He remembers when Kent told him this, but at the time Martin dismissed it as not very deep. But then he saw that there is much more to this than he first thought. It turns out that when you eliminate duplication, your designs end up good, almost irrespective of anything else. He noted that this has been a pattern in his life: see an important idea, dismiss it, and eventually come around. (And then write a book about it. :-)
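As an illustrative sketch of the principle (my own tiny example, not Martin's), eliminating even small duplication tends to surface the design idea that was hiding in the repeated code:

```python
# Before: the discount rule is written twice, so any change to the
# rounding or the formula must be made in two places.
def price_with_member_discount(price):
    return round(price * 0.90, 2)

def price_with_coupon_discount(price):
    return round(price * 0.85, 2)

# After: one function expresses the one idea; callers just pick a rate.
MEMBER_RATE, COUPON_RATE = 0.10, 0.15

def discounted_price(price, rate):
    """Apply a fractional discount and round to cents."""
    return round(price * (1.0 - rate), 2)
```

Once the duplication is gone, "a discount" exists as a named concept in the design rather than as a coincidence between two functions.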

Another principle: orthogonality, the notion that different parts of a system should do different kinds of things.

Another principle: separate the user interface from the guts of the program. One way to approach this is to imagine you have to add a new UI to your system. Any code you would have to duplicate is in the wrong place.
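A minimal sketch of that thought experiment (my example, with hypothetical names): when the computation lives apart from any presentation, a second UI duplicates no logic:

```python
# Core logic: knows nothing about how results are displayed.
def summarize_scores(scores):
    return {'count': len(scores),
            'mean': sum(scores) / len(scores) if scores else 0.0}

# Two thin UIs over the same core; adding a third (say, HTML)
# would require no duplicated computation.
def render_text(summary):
    return f"{summary['count']} scores, mean {summary['mean']:.1f}"

def render_csv(summary):
    return f"count,mean\n{summary['count']},{summary['mean']}"

summary = summarize_scores([80, 90, 100])
```

If writing the second renderer had forced us to re-derive the mean, that computation was in the wrong place.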

Philosophy. Principles. What is the next P?

Patterns -- the finding of recurring structures. Martin turned to Ralph Johnson, who has likened pattern authors to Victorian biologists, who went off to investigate a single something in great detail -- cataloging and identifying and dissecting and classifying. You learn a lot about the one thing, but you also learn something bigger. Consider Darwin, who ultimately created the theory of natural selection.

The patterns community is about finding that core something which keeps popping up in a slightly different way all over the place. Martin does this in his work. He then did a cool little human demonstration of one of the patterns from his Patterns of Enterprise Application Architecture, an event queue. In identifying and naming the pattern, he found many nice results, including some unexpected ones, like the ability to rewind and replay transactions.
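Martin's book gives the real pattern; as a hedged sketch of the replay consequence he demonstrated (the class and event names here are my own invention), a queue that records every event it processes makes rebuilding state from history almost free:

```python
# A minimal event-queue sketch: every event is logged as it is
# processed, so the whole history can be replayed onto a fresh state.
class EventQueue:
    def __init__(self):
        self.log = []                    # every event ever processed

    def process(self, state, event):
        self.log.append(event)
        return self._apply(state, event)

    def replay(self, initial_state):
        """Rebuild state by re-applying the recorded history."""
        state = initial_state
        for event in self.log:
            state = self._apply(state, event)
        return state

    @staticmethod
    def _apply(state, event):
        kind, amount = event             # toy domain: a bank balance
        return state + amount if kind == 'deposit' else state - amount

queue = EventQueue()
balance = 0
balance = queue.process(balance, ('deposit', 100))
balance = queue.process(balance, ('withdraw', 30))   # balance is now 70
```

Because the log is the source of truth, `replay` can reconstruct the balance at any point without any special machinery.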

Patterns people "surface" core ideas, chunk them up into the right scale, and identify when and when not to use them.

Why isn't the patterns community bigger? Martin thinks it should be bigger, and in particular should be a standard part of every academic's life! We academics should go out, look at big old systems for a couple of months, and try to make sense of them.

At the end of the session, James Noble pointed out two reasons why academic programmers don't do this: they do not receive academic credit for such activity, and they do not have access to the large commercial systems that they really need to study. On the first point, Martin agreed. This requires a shift in the academic culture, which is hard to do. (This issue of academic culture, publishing, and so on came up again at the Echoes panel later in the morning.) Martin volunteered to bellow in that direction whenever and wherever asked. On the second point, he answered with an emphatic 'yes!' Open source software opens a new avenue for this sort of work...

Philosophy. Principles. Patterns. What is the last P?

Practices. His favorite is, of course, refactoring. He remembers watching Kent Beck edit a piece of Martin's code. Kent kept making these small, trivial changes to the code, but after each step Martin found himself saying, "Hmm, that is better." The lightbulb went off again.

He thanked John Brant and Don Roberts, Bill Opdyke and Ralph Johnson, and Ward and Kent. It's easy to write good books when other people have done the deep thinking.

He then pointed to Mary Beth Rosson's project that asks students to fix code as a way to learn. (Eugene thinks: But that's not refactoring, right?) Refactoring is a good practice for students as they learn. "Here is some code. Smell it. Come back when it's better."

My students had better get ready...
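A made-up before-and-after, in the spirit of those small, behavior-preserving steps (the function and constant names here are mine, purely illustrative):

```python
# Before: a smelly function -- a duplicated expression and magic numbers.
def price_before(quantity, unit_price):
    if quantity * unit_price > 100:
        return quantity * unit_price * 0.9
    return quantity * unit_price


# After: the duplicated expression is extracted and named, and the
# magic numbers become intention-revealing constants. Behavior is unchanged.
DISCOUNT_THRESHOLD = 100
DISCOUNT_RATE = 0.9

def price_after(quantity, unit_price):
    subtotal = quantity * unit_price
    if subtotal > DISCOUNT_THRESHOLD:
        return subtotal * DISCOUNT_RATE
    return subtotal
```

Each step is trivial on its own, but after each one you find yourself saying, "Hmm, that is better."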

Another practice that Martin lives and loves is test-driven design. Of course, it embodies the philosophy he began this talk with: You design and program at the same time.
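The rhythm is easy to show with a toy problem (my example, not one from the talk): write a failing test, write just enough code to pass it, repeat. Here the tests came first and the function grew to satisfy them.

```python
import unittest

# The code under test, grown one failing test at a time.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    # Each test was written before the branch of code that passes it.
    def test_plain_number(self):
        self.assertEqual(fizzbuzz(2), "2")

    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiple_of_both(self):
        self.assertEqual(fizzbuzz(30), "FizzBuzz")
```

The tests are a design artifact as much as a safety net: deciding what to assert next is deciding what the code should be next.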

And thus endeth the lesson.

In addition to the comment on academics and patterns, James Noble asked another question. (Martin lamented, "First I was Footed, and now I am Nobled." If you know Foote and Noble, you can imagine why this prospect would put a speaker a bit on edge.) James played it straight, except for a bit of camera play with the mic and stage, and pointed out that separating the UI causes an increase in the total amount of code -- he claimed 50%. How is that better? Martin: I don't know. Folks have certainly questioned this principle. But then, we often increase duplication in order to make it go away; maybe there is a similarity here?

Or, as Martin closed, maybe he is just wrong. If so, he'll learn that soon enough.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

October 19, 2005 8:17 PM

More on Safety and Freedom in the Extreme

giving up liberty for safety

In my entry on Robert Hass's keynote address, I discussed the juxtaposition of 'conservative' and 'creative', the tension between the desire to be safe and the desire to be free, between memory and change. Hass warned us against the danger inherent in seeking safety, in preserving memory, to an extreme: blindness to current reality. But he never addressed the danger inherent in seeking freedom and change to the exclusion of all else. I wrote:

There is a danger in safety, as it can blind us to the value of change, can make us fear change. This was one of the moments in which Hass surrendered to a cheap political point, but I began to think about the dangers inherent in the other side of the equation, freedom. What sort of blindness does freedom lead us to?

giving up safety for liberty

During a conversation about the talk with Ryan Dixon, it hit me. The danger inherent in seeking freedom and change to an extreme is untethered idealism. Instead of "Ah, the good old days!", we have, "The world would be great if only...". When we don't show proper respect to memory and safety, we become blind in a different way -- to the fact that the world can't be the way it is in our dreams, that reality somehow precludes our vision.

That doesn't sound so bad, but people sometimes forget not to include other people in their ideal view. We sometimes become so convinced by our own idealism that we feel a need to project it onto others, regardless of their own desires. This sort of blindness begins to look in practice an awful lot like the blindness of overemphasizing safety and memory.

Of course, when discussing creative habits, we need to be careful not to censor ourselves prematurely. As we discussed at Extravagaria, most people tend toward one extreme. They need encouragement to overcome their fears of failure and inadequacy. But that doesn't mean that we can divorce ourselves from reality, from human nature, from the limits of the world. Creativity, as Hass himself told us, thrives when it bumps into boundaries.

Being creative means balancing our desire for safety and freedom. Straying too far in either direction may work in the short term, but after too long in either land we lose something essential to the creative process.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

October 19, 2005 6:14 PM

OOPSLA Day 4: Mary Beth Rosson on the End of Users

As you've read here in the past, I am one of a growing number of CS folks who believe that we must expand the purview of computer science education far beyond the education of computer scientists and software developers. Indeed, our most important task may well lie in the education of the rest of the educated world -- the biologists and sociologists, the economists and physicists, the artists and chemists and political scientists whose mode of work has been fundamentally altered by the aggregation of huge stores of data and the creation of tools for exploring data and building models. The future of greatest interest belongs not to software development shops but to the folks doing real work in real domains.

Mary Beth Rosson

So you won't be surprised to know how excited I was to come to Mary Beth Rosson's Onward! keynote address called "The End of Users". Mary Beth has been an influential researcher across a broad spectrum of problems in OOP, HCI, software design, and end-user programming, all of which have had prominent places at OOPSLA over the years. The common theme to her work is how people relate to technology, and her methodology has always had a strong empirical flavor -- watching "users" of various stripes and learning from their practice how to better support them.

In today's talk, Mary Beth argued that the relationship between software developers and software users is changing. In the old days, we talked about "end-user programming", those programming-like activities done by those without formal training in programming. In this paradigm, end users identify requirements on programs and then developers produce software to meet the need. This cycle occurs at a relatively large granularity, over a relatively long time line.

But the world is changing. We now find users operating in increasingly complex contexts. In the course of doing their work, they frequently run into ad hoc problems for their software to solve. They want to integrate pieces of solution across multiple tools, customize their applications for specific scenarios, and appropriate data and techniques from other tools. In this new world, developers must produce components that can be used in an ad hoc fashion, integrated across apps. Software developers must create knowledge bases and construction kits that support an interconnected body of problems. (Maybe the developers even strive to respond on demand...)

These are not users in the traditional sense. We might call them "power users", but that phrase is already shop-worn. Mary Beth is trying out a new label: use developers. She isn't sure whether this label is the long-term solution, but at least this name recognizes that users are playing an increasingly sophisticated role that looks more and more like programming.

What sorts of non-trivial tasks do use developers do?

An example scenario: a Word user redefining the 'normal' document style. This is a powerful tool with big potential costs if done wrong.

Another scenario: an Excel user creates a large spreadsheet that embodies -- hides! -- a massive amount of computation. (Mary Beth's specific example was a grades spreadsheet. She is a CS professor after all!)

Yet another: an SPSS user defines new variables and toggles between textual programming and graphical programming.

And yet another: a FrontPage user does visual programming of a web page, with full access to an Access database -- and designs a database!

Mary Beth summarized the characteristics of use developers as:

  • comfortable with a diverse array of software apps and data sources
  • work with multiple apps in parallel and so want to pick and choose among functionality at any time, hooking components up and configuring custom solutions on demand.
  • working collaboratively, with group behaviors emerging
  • see the computer as a tool, not an end; it should not get in their way

Creating use developers has potential economic benefits (more and quicker cycles getting work done) and personal benefits (more power, more versatility, higher degree of satisfaction).

But is the idea of a use developer good?

Mary Beth quoted an IEEE Software editor who was quite dismissive of end users. He warned that they do not systematically test their work, that they don't know to think about data security and maintainability, and -- when they do know to think about these issues -- they don't know *how* to think about them. Mary Beth thinks these concerns are representative of what folks in the software world believe, and that we need to be attentive to them.

(Personally, I think that, while we certainly should be concerned about the quality of the software produced by end users, we also must keep in mind that software engineers have a vested interest in protecting the notion that only Software Engineers, properly trained and using Methods Anointed From On High, are capable of delivering software of value. We all know of complaints from the traditional software engineering community about agile software development methods, even when the folks implementing and using agile methods are trained in computing and are, presumably, qualified to make important decisions about the environment in which we make software.)

Mary Beth gave an example to illustrate the potential cost inherent in the lack of dependability -- a Fannie Mae spreadsheet that contained a $1.2B error.

As the base of potential use developers grows, so do the potential problems. Consider just the spreadsheet and database markets... By 2012, the US Department of Labor estimates that there will be 55M end users. 90% of all spreadsheets contain errors. (Yes, but is that worse or better than in programs written by professional software developers?) The potential costs are not just monetary; they can be related to the quality of life we all experience. Such problems can be annoying and ubiquitous: web input forms with browser incompatibilities; overactive spam filters that lose our mail; Word styles that break the other formatting in user documents; and policy decisions based on research findings that themselves are based on faulty analysis due to errors in spreadsheets and small databases.

Who is responsible for addressing these issues? Both users and developers! Certainly, end users must take on the responsibility of developing new habits and learning the skills they need to use their tools effectively and safely. But we in the software world need to recognize our responsibilities:

  • to build better tools, to build the scaffolding users need to be effective and safe users. The tools we build should offer the right amount of help to users who are in the moment of doing their jobs.
  • to promote a "quality assurance" culture among users. We need to develop and implement new standards for computing literacy courses.

How do we build better tools?

Mary Beth called them smarter tools and pointed to a couple of the challenges we must address. First, much of the computation being done in tools is invisible, that is, hidden by the user interface. Second, people do not want to be interrupted while doing their work! (We programmers don't want that; why should our users have to put up with it?)

Two approaches that offer promise are interactive visualization of data and minimalism. By minimalism, she means not expanding the set of issues that the user has to concern herself with by, say, integrating testing and debugging into the standard usage model.

The NSF is supporting a five-school consortium called EUSES, End Users Shaping Effective Software, which is trying these ideas out in tools and experiments. Some examples of their work:

  • CLICKS is a drag-and-drop, design-oriented web development environment.
  • Whyline is a help system integrated directly into Alice's user environment. The help system monitors the state of the user's program and maintains a dynamic menu of problems they may run into.
  • WYSIWYT is a JUnit-style interface for debugging spreadsheets, in which the system keeps an eye on what cells have and have not been verified with tests.

How can we promote a culture of quality assurance? What is the cost-benefit trade-off involved for the users? For society?

Mary Beth indicated three broad themes we can build on:

  • K-12 education: making quality a part of schoolchildren's culture of computer use
  • universal access: creating tools aimed at specific populations of users
  • communities of practice: evolving reflective practices within the social networks of users

Some specific examples:

  • Youngsters who learn by debugging in Alice. This is ongoing work by Mary Beth's group. Children play in 3D worlds that are broken, and the child users are invited to fix the system as they play. You may recognize this as the Fixer Upper pedagogical pattern, but in a "non-programming" programming context.
  • Debugging tools that appeal to women. Research shows that women take debugging seriously, but they tend to use strategies in their heads more than the tools available in the typical spreadsheet and word processing systems. How do we invite women with lower self-confidence to avail themselves of system tools? One experimental tool does this by letting users indicate "not sure" when evaluating correctness of a spreadsheet cell formula.
  • Pair programming community simulations. One group has a Sim City-like world in which a senior citizen "pair programs" with a child. Leaving the users unconstrained led to degeneration, but casting the elders as object designers and the children as builders led to coherent creations.
  • Sharing and reuse in a teacher community. The Teacher Bridge project has created a collaborative software construction tool to support an existing teacher community. The tool has been used by several groups, including the one that created PandapasPond.org. This tool combines a wiki model for its dynamic "web editor" and a more traditional model for its static design tool (the "folder editor"). Underneath the service, the system can track user activity in a variety of ways, which allows us to explore the social connections that develop within the user community over time.

The talk closed with a reminder that we are just beginning the transition from thinking of "end users" to thinking of "use developers", and one of our explicit goals should be to try to maximize the upside, and minimize the downside, of the world that will result.

For the first time in a long time, I got up to ask a question after one of the big talks. Getting up to stand in line at an aisle mic in a large lecture hall, to ask a question in front of several hundred folks, seems a rather presumptuous act. But my interest in this issue is growing rapidly, and Mary Beth has struck on several issues close to my current thinking.

My question was this: What should university educators be thinking about with regard to this transition? Mary Beth's answer went in a way I didn't anticipate: We should be thinking about how to help users develop the metacognitive skills that software developers learn within our culture of practice. We should extend computing literacy curricula to focus on the sort of reflective habits and skills that users need to have when building models. "Do I know what's going on? What could be going wrong? What kinds of errors should I be watching for? How can I squeeze errors out of my program?"

After the talk, I spent a few minutes discussing curricular issues more specifically. I told her about our interest in reaching out to new populations of students, with the particular example of a testing certificate that folks in my department are beginning to work on. This certificate will target non-CS students, the idea being that many non-CS students end up working as testers in software development for their domain, yet they don't understand software or testing or much anything about computing very deeply. This certificate is still aimed at traditional software development houses, though I think it will bear the seeds of teaching non-programmers to think about testing and software quality. If these folks ever end up making a spreadsheet or customizing Word, the skills they learn here will transfer directly.

Ultimately, I see some CS departments expanding their computer literacy courses, and general education courses, to aim at use developers. Our courses should treat them with the same care and respect as we treat Programmers and Computer Scientists. The tasks users do are important, and these folks deserve tools of comparable quality.

Three major talks, three home runs. OOPSLA 2005 is hot.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 19, 2005 10:11 AM

OOPSLA Day 1: Writing Exercises at Extravagaria

I am nobody:
A red sinking autumn sun
Took my name away.

-- Richard Wright

As I noted before, I nearly blew off Sunday, after a long and tiring two days before. As you might have gathered from that same entry, I am happy that I did not. The reasons should be obvious enough: cool ideas happen for me only when I am engaged with ideas, and the people and interactions at Extravagaria were a source of inspiration that has remained alive with me throughout the rest of the conference.

In the afternoon of the workshop, we did two group exercises to explore issues in creativity -- one in the realm of writing poetry, and one in the realm of software design.

Gang-Writing Haiku

Haiku is a simple poetic form that most of us learn as schoolchildren. It is generally more involved than we learn in school, with specific expectations on the content of the poems, but at its simplest it is a form of three lines, consisting of 5, 7, and 5 syllables, respectively.

If I understood correctly, a tanka is a poem constructed by following a haiku with a couplet in which each line is 7 syllables. We can go a step further yet, by connecting a sequence of tankas into a renga. John called the renga the "stretch limo" of haiku. Apparently, the Japanese have a traditional drinking game that requires each person to write a verse of a growing renga in turn, taking a drink with each verse. The poems may degenerate, but the evening is probably not a complete loss...

Our first exercise after lunch was a variation of this drinking game, only without the drinking. We replaced the adult beverages with two features intended to encourage and test our creativity. First, we were given one minute or less to write each verse. Second, when we passed the growing poem on to the next writer, we folded it over so that the person could see only the verse we had just written.

Rather than start from scratch, John seeded our efforts with haiku written by the accomplished American novelist Richard Wright. In the last eighteen months of his life, Wright became obsessed with haiku, writing dozens a day. Many of these works were published in a collection after his death. John gave each person a haiku from this collection. One of them, my favorite, appears at the top of this essay.

Then we were off, gang-writing poetry. My group consisted of Brian Foote, Joe Yoder, Danny Dig, Guy Steele, and me. Each of us started with a Wright haiku, wrote a couplet in response to it, folded Wright's stanza under, and passed the extended poem on to continue the cycle. After a few minutes, we had five renga. (And yet we were sober, though the quality of the poetry may not have reflected that. :-)

The second step of the exercise was to select our favorite, the one we thought had the highest quality. My group opted for a two-pass process. Each of us cast a vote for our two favorites, and the group then deliberated over the top two vote-getters. We then had the opportunity to revise our selection before sharing it with the larger group. (We didn't.) Then each of the groups read its best product to the whole group.

My group selected the near-renga we called Syzygy Matters (link to follow) as our best. This was not one of my two favorites, but it was certainly in line with my choices. One poem I voted for received only my vote, but I won't concede that it wasn't one of our best. I call it Seasons Cease.

Afterwards, we discussed the process and the role creativity played.

  • Most of us tried to build on the preceding stanza, rather than undo it.
  • This exercise resembles a common technique in improvisational theater. There, the group goes through rounds of one sentence per person, building on the preceding sentences. Sometimes, the participants cycle through these conjunctions in order: "Yes, and...", "No, and...", "Yes, but...", and "No, but...".
  • Time pressure matters.
  • Personally, I noticed that by moving so fast that I had no chance to clear my mind completely, a theme developed in my mind that carried over from renga to renga. So my stanzas were shaped both by the stanza I was handed and by the stanza I wrote in the previous round.
  • Guy was optimistic about the process but pessimistic about the products. The experience lowered his expectations for the prospects for groups writing software by global emergence from local rules.
  • We all had a reluctance to revise our selected poems. The group censored itself, perhaps out of fear of offending whoever had written the lines. (So much for Common Code Ownership.) Someone suggested that we might try some similar ideas for the revision process. Pass all the poems we generated to another group, which would choose the best of the litter. Then we pass the poem on to a third group, which is charged with revising the poem to make it better. This would eliminate the group censorship effect mentioned above, and it would also eliminate the possibility that our selection process was biased by personal triggers and fondness.
  • Someone joked that we should cut the first stanza, the one written by Wright!, because it didn't fit the style of the rest of the stanzas. Joke aside, this is often a good idea. Often, we need to let go of the triggers that initially caused us to write. That can be true in our code, as well. Sometimes a class that appears early in a program ultimately outlives its utility, its responsibilities distributed across other more vital objects. We shouldn't be afraid of cutting the class, but sometimes we hold an inordinate attachment to the idea of the class.
  • To some, this exercise felt more like a white-board design session than a coding exercise. We placed a high threshold on revisions, as we often do for group brainstorm designs.
  • Someone else compared this to design by committee, and to the strategy of separating the coding team from the QA team.

Later, we discussed how, in technical writing and other non-fiction, our goal is to make the words we use match the truth as much as possible, but sometimes an exaggeration can convey truth even better. Is such an exaggeration "more true" than the reality, by conveying better the feel of a situation than pure facts would have? Dick used the re-entry scene from Apollo 13 as an example.

(Aside: This led to a side discussion of how watching a movie without listening to its soundtrack is usually a very different experience. Indeed, most directors these days use the music as an essential story-telling device. What if life were like that? Dick offered that perhaps we are embarking on a new era in which the personal MP3 player does just that, adding a soundtrack to our lives for our own personal consumption.)

A good story tells the truth better than the truth itself. This is true in mathematical proofs, where the proof tells a story quite different from the actual process by which a new finding is reached. It is true of papers on software system designs, of software patterns. This is yet another way in which software and computer science are like poetry and Mark Twain.

A Team Experiment with Software Design

The second exercise of the afternoon asked four "teams" -- three of size four, and the fourth being Guy Steele alone -- to design a program that could generate interesting Sudoku puzzles. Halfway through our hour, two teams cross-pollinated in a Gabriel-driven episode of crossover.

I don't have quite as much to say about this exercise. It was fun thinking about Sudoku, a puzzle I've started playing a bit in the last few weeks. It was fun watching Sudoku naifs wrap their minds around the possibilities of the game. It was especially fun to watch a truly keen mind describe how he attacked and solved a tough problem. (I saved Guy's handwritten draft of his algorithm. I may try to implement it later. I feel like a rock star groupie...)

The debrief of this exercise focused on whether this process felt creative in the sense that writing haiku did, or was it more like the algorithm design exercise one might solve on a grad school exam, taken from Knuth. Guy pointed out that these are not disjoint propositions.

What feels creative is solving something we don't yet understand -- creativity lies in exploring what we do not understand, yet. For example, writing a Sudoku solver would have involved little or no creativity for most of us, because it would be so similar to backtracking programs we have written before, say, to solve the 8-queens puzzle.
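For readers who haven't written one, here is the familiar shape of such a backtracking program, using 8-queens as the example (my sketch, not anything from the workshop). A Sudoku solver follows the same recursive pattern: try a value, recurse, back up on a dead end.

```python
# Classic backtracking: place queens column by column, backing up
# whenever no row in the current column is safe.

def solve_queens(n, cols=()):
    """Return one placement of n queens as a tuple of row indices,
    one per column, or None if no placement extends `cols`."""
    col = len(cols)
    if col == n:
        return cols  # all columns filled: a complete solution
    for row in range(n):
        # safe if no earlier queen shares this row or a diagonal
        if all(row != r and abs(row - r) != col - c
               for c, r in enumerate(cols)):
            result = solve_queens(n, cols + (row,))
            if result is not None:
                return result
    return None  # dead end: caller backtracks
```

Swap "row in column" for "digit in cell" and add the row/column/box constraints, and you have the Sudoku solver; that structural familiarity is exactly why writing one feels routine rather than creative.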

These exercises aren't representative of literary creativity in several significant ways. Most writers work solo, rather than in groups. Creators may work under pressure, but not often in 1-minute compressions. But sprints of this sort can help jolt creativity, and they can expose us to models of work, models we can adopt and adapt.

One thing seems certain: Change begets creativity. Robert Hass spoke of the constant and the variable, and how -- while both are essential to creativity -- it is change and difficulty that are usually the proximate causes of the creative act. That's why cross-pollination of teams (e.g., pair programmers) works, and why we should switch tools and environments every so often, to jog the mind to open itself to creating something new.

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

October 18, 2005 5:20 PM

OOPSLA This and That

Some miscellaneous thoughts I have had over the last few days...

  • My student Sergei is posting pictures from OOPSLA at http://www.lordofthewebs.com/oopsla/. I'm in there...
  • I made a revealing typo in my entry on the software development apprenticeship demo from yesterday's Educators' Symposium. I was writing about CS professors and the idea of a studio-based curriculum. Here is the final quote:
    For example, I think that the biggest adjustment most professors need to make in order to move to the sort of studio approach advocated by West and Rostal is from highly-scripted lectures and controlled instructional episodes to extemporaneous lecturing in response to student needs in real-time.

    In my first draft, I said highly-scriptured, not highly-scripted. This, I believe, was a Freudian slip that exposes the religious fervor with which we professors regard our lectures.

  • I keep hearing educated people say the word 'processes' with a long second "e" -- processEs -- rather than the soft-e schwa sound -- processes -- that I regard as correct. The long-e is a phonetic characteristic of plurals of words that end in -is, for example, 'emphasis' and 'emphases'. But 'process' doesn't end in -is... So it's just 'process-us'.

    An interesting convergence: I link to Wikipedia on the concept of schwa above, and Wikipedia creator Jimmy Wales's (mis)pronunciation this morning was the proverbial final straw for me that led to this rant.

    After having this grate in my ears for years, I finally checked the pronunciation in the dictionary. It seems that my so-called mispronunciation is listed as the 3rd pronunciation for the word. I do not know whether this means that this pronunciation is correct as an alternate, or if our dictionaries are yet again acceding to the downward swirl of linguistic evolution. I know, I know -- language is alive, blah, blah, blah. That doesn't mean that we have to give up on perfectly good words, definitions, and pronunciations without a fight! Besides -- three acceptable pronunciations? That seems excessive.

    Maybe I should just get over it. Or maybe not.

  • For the second time in two days, someone has just said that Maxwell's equations constitute the most beautiful set of equations one can fit on a single page of text. Today it was Gerry Sussman. Yesterday, it was Ward Cunningham. Last year, it was Alan Kay. I really need to go study these equations so that I can appreciate their deep beauty as well as these folks do. Any suggested reading? (Should I be embarrassed by my need to study this now?)
  • And speaking of Alan Kay last year, please permit me a little rant deja vu: Turn off the cell phones, people!

Posted by Eugene Wallingford | Permalink | Categories: General

October 18, 2005 4:04 PM

OOPSLA Day 3: Robert Hass on Creativity

Robert Hass, former poet laureate of the US

With Dick Gabriel and Ralph Johnson leading OOPSLA this year, none of us were surprised that the themes of the conference were creativity and discovery. These themes presented themselves immediately in the conference's opening keynote speaker, former poet laureate Robert Hass. He gave a marvelous talk on creativity.

Hass began his presentation by reading a poem (whose name I missed) from Dick's new chapbook, Drive On. Bob was one of Dick's early teachers, and he clearly reveled in the lyricism, the rhythm of the poem. Teachers often form close bonds with their students, however long or short the teaching relationship. I know the feeling from both sides of the phenomenon.

He then described his initial panic at the thought of introducing the topic of creativity to a thousand people who develop software -- who create, but in a domain so far from his expertise. But a scholar can find ways to understand and transmit ideas of value wherever they live, and Hass is not only a poet but a first-rate scholar.

Charles Dickens burst on the scene with the publication of The Pickwick Papers. With this novel, Dickens essentially invented the genre of the magazine-serialized novel. When asked how he created a new genre of literature, he said simply, "I thought of Pickwick."

I was immediately reminded of something John Gribble said in his talk at Extravagaria on Sunday: Inspiration comes to those already involved in the work.

Creativity seems to happen almost without cause. Hass consulted with friends who have created interesting results. One solved a math problem thought unsolvable by reading the literature and "seeing" the answer. Another claimed to have resolved the two toughest puzzles in his professional career by going to sleep and waking up with the answer.

So Hass offered his first suggestion for how to be creative: Go to sleep.

Human beings were the first animals to trade instinct for learning. The first major product of our learning was our tools. We made tools that reflected what we learned about solving immediate problems we faced in life. These tools embodied the patterns we observed in our universe.

We then moved on to broader forms of patterns: story, song, and dance. These were, according to Hass, the original forms of information storage and retrieval, the first memory technologies. Eventually, though, we created a new tool, the printing press, that made these fundamentally less essential -- less important!? And now the folks in this room contribute to the ultimate tool, the computer, that in many ways obsoletes human memory technology. As a result, advances in human memory tech have slowed, nearly ceased.

The bulk of Hass's presentation explored the interplay between the conservative in us (the desire to preserve in memory) and the creative in us (the desire to create anew). This juxtaposition of 'conservative' and 'creative' begets a temptation for cheap political shots, to which even Hass himself surrendered at least twice. But the juxtaposition is essential, and Hass's presentation repeatedly showed the value and human imperative for both.

Evolutionary creativity depends on the presence of a constant part and a variable part, for example, the mix of same and different in an animal's body, in the environment. The simultaneous presence of constant and variable is the basis of man's physical life. It is also the basis of our psychic life. We all want security and freedom, in an unending cycle. Indeed, I believe that most of us want both all the time, at the same time. Conservative and creative, constant and variable -- we want and need both.

Humans have a fundamental desire for individuation, even while still feeling a oneness with our mothers, our mentors, the sources of our lives. Inspiration, in a way, is how a person comes to be herself -- is in part a process of separation.

"Once upon a time" is a linguistic symbol, the first step of our separation from the immediate action of reading into a created world.

At the same time, we want to be safe and close, free and far away. Union and individuation. Remembering and changing.

Most of us think that most everyone else is more creative than we are. This is a form of the fear John Gribble spoke about on Sunday, one of the blocks we must learn to eliminate from our minds -- or at least fool ourselves into ignoring. (I am reminded of John Nash choosing to ignore the people his mind fabricates around him in A Beautiful Mind.)

Hass then told a story about the siren song from The Odyssey. It turns out that most of the stories in Homer's epics are based in "bear stories" much older than Homer. Anyway, Odysseus's encounter with the sirens is part of a story of innovation and return, freedom on the journey followed by a return to restore safety at home. Odysseus exhibits the creativity of an epic hero: he ties himself to the mast so that he can hear the sirens' song without having to take his ship onto the rocks.

According to Hass, in some versions of the siren story, the sirens couldn't sing -- the song was only a sailors' legend. But the sailors desired to hear the beautiful song, if it existed. Odysseus took a path that allowed him both safety and freedom, without giving up his desire.

In preparing for this talk, Hass asked himself, "Why should I talk to you about creativity? Why think about it at all?" He identified at least four very good reasons, the desire to answer these questions:

  • How can we cultivate creativity in ourselves?
  • How can we cultivate creativity in our children?
  • How can we identify creative people?
  • How can we create environments that foster creativity?

So he went off to study what we know about creativity. A scholar does research.

Creativity research in the US began when academic psychologists began trying to measure mental characteristics. Much of this work was done at the request of the military. As time went by, the number of characteristics grew, perhaps in correlation with the research grants awarded by the government. Creativity is, perhaps, correlated with salesmanship. :-) Eventually, we had found several important results, including that there is little or no correlation between IQ and creativity. Creativity is not the province of the intellectually gifted.

Hass cited the research of Howard Gardner and Mihaly Csikszentmihalyi (remember him?), both of whom worked to identify key features of the moment of a creative change, say, when Dickens thought to publish a novel in serial form. The key seems to be immersion in a domain, a fascination with the domain and its problems and possibilities. The creative person learns the language of the domain and sees something new. Creative people are not problem solvers but problem finders.

I am not surprised to find language at the center of creativity! I am also not surprised to know that creative people find problems. I think we can say something even stronger, that creative people often create their own problems to solve. This is one of the characteristics that biases me away from creativity: I am a solver more than a finder. But thinking explicitly about this may enable me to seek ways to find and create problems.

That is, as Hass pointed out earlier, one of the reasons for thinking about creativity: ways to make ourselves more creative. But we can use the same ideas to help our children learn the creative habit, and to help create institutions that foster the creative act. He mentioned OOPSLA as a social construct in the domain of software that excels at fostering creativity. It's why we all keep coming back. How can we repeat the process?

Hass spoke more about important features of domains. For instance, it seems to matter how clear the rules of the domain are at the point that a person enters it. Darwin is a great example. He embarked on his studies at a time when the rules of his domain had just become fuzzy again. Geology had recently expanded European science's understanding of the timeline of the earth; Linnaeus had recently invented his taxonomy of organisms. So, some of the knowledge Darwin needed was in place, but other parts of the domain were wide open.

The technology of memory is a technology of safety. What are the technologies of freedom?

Hass read us a funny poem on story telling. The story teller was relating a myth of his people. When his listener questioned an inconsistency in his story, the story teller says, "You know, when I was a child, I used to wonder that..." Later, the listener asked the same question again, and again, and each time the story teller says, "You know, when I was a child, I used to wonder that..." When he was a child, he questioned the stories, but as he grew older -- and presumably wiser -- he came to accept the stories as they were, to retell them without question.

We continue to tell our stories for their comfort. They make us feel safe.

There is a danger in safety, as it can blind us to the value of change, can make us fear change. This was one of the moments in which Hass surrendered to a cheap political point, but I began to think about the dangers inherent in the other side of the equation, freedom. What sort of blindness does freedom lead us to?

Software people and poets have something in common, in the realm of creativity: We both fall in love with patterns, with the interplay between the constant and the variable, with infinite permutation. In computing, we have the variable and the value, the function and the parameter, the framework and the plug-in. We extend and refactor, exposing the constant and the variable in our problem domains.

Hass repeated an old joke, "Spit straight up and learn something." We laugh, a mockery of people stuck in the same old patterns. This hit me right where I live. Yesterday at the closing panel of the Educators' Symposium, Joe Bergin said something that I wrote about a while back: CS educators are an extremely conservative lot. I have something to say about that panel, soon...

First safety, then freedom -- and with it the power to innovate.

Of course, extreme danger, pressure, insecurity can also be the necessity that leads to the creative act. As is often the case, opposites turn out to be true. As Thomas Mann said,

A great truth is a truth whose opposite is also a great truth.

Hass reminds us that there is agony in creativity -- a pain at stuckness, found in engagement with the world. Pain is unlike pleasure, which is homeostatic ("a beer and a ballgame"). Agony is dynamic, a ceasing to cling to a safe position. There is always an element of anxiety, a consciousness heightened at the moment of insight, a gestalt in the face of an incomplete pattern.

The audience asked a couple of questions:

  • Did he consult only men in his study of creativity? Yes, all but his wife, who is also a poet. She said, "Tell them to have their own strangest thoughts." What a great line.
  • Is creativity unlimited? Limitation is essential to creativity. If our work never hits an obstacle, then we don't know when it's over. (Sounds like test-driven development.) Creativity is always bouncing up against a limit.

I'll close my report with how Hass closed the main part of his talk. He reached "the whole point of his talk" -- a sonnet by Michelangelo -- and he didn't have it in his notes!! So Hass told us the story in paraphrase:

The pain is unbearable, paint dripping in my face, I climb down to look at it, and it's horrible, I hate it, I am no painter...

It was the ceiling of the Sistine Chapel.


UPDATE (10/20/05): Thanks again to Google, I have tracked down the sonnet that Hass wanted to read. I especially love the ending:

Defend my labor's cause,
good Giovanni, from all strictures:
I live in hell and paint its pictures.

-- Michelangelo Buonarroti

I have felt this way about a program before. Many times.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

October 17, 2005 9:54 PM

OOPSLA Day 2: Morning at The Educators' Symposium

This was my second consecutive year to chair the OOPSLA Educators' Symposium, and my goal was something more down to earth yet similar in flavor: encouraging educators to consider Big Change. Most of our discussions in CS education are about how to do the Same Old Thing better, but I think that we have run the course with incremental improvements to our traditional approaches.

We opened the day with a demonstration called Agile Apprenticeship in Academia, wherein two professors and several students used a theatrical performance to illustrate a week in a curriculum built almost entirely on software apprenticeship. Dave West and Pam Rostal wanted to have a program for developing software developers, and they didn't think that the traditional CS curriculum could do the job. So they made a Big Change: they tossed the old curriculum and created a four-year studio program in which students, mentors, and faculty work together to create software and, in the process, students learn how to create software.

West and Rostal defined a set of 360 competencies that students could satisfy at five different levels. Students qualify to graduate from the program by satisfying each competency at at least the third level (the ability to apply the concept in a novel situation) and some number at higher levels. Students also have to complete the standard general education curriculum of the university.

Thinking back to yesterday's morning session at Extravagaria, we talked about the role of fear and pressure in creativity. West and Rostal put any fear behind them and acted on their dream. Whatever difficulties they face in making this idea work over the long run in a broader setting -- and I believe that the approach faces serious challenges -- at least they have taken a big step forward that could make something work. Those of us who don't take any big steps forward are doomed to remain close to where we are.

I don't have much to say about the paper sessions of the day except that I noticed a recurring theme: New ideas are hard on instructors. I agree, but I do not think that they are hard in the NP-hard sense but rather in the "we've never done it that way before" sense. Unfamiliarity makes things seem hard at first. For example, I think that the biggest adjustment most professors need to make in order to move to the sort of studio approach advocated by West and Rostal is from highly-scripted lectures and controlled instructional episodes to extemporaneous lecturing in response to student needs in real time. The real hardness in this is that faculty must have a deep, deep understanding of the material they teach -- which requires a level of experience doing it that many faculty don't yet have.

This idea of professors as practitioners, as professionals practiced in the art and science we teach, will return in later entries from this conference...

Like yesterday's entry, I'll have more to say about today's Educators' Symposium in upcoming entries. I need some time to collect my thoughts and to write. In particular, I'd like to tell you about Ward Cunningham's keynote address and our closing panel on the future of CS education. The panel was especially energizing but troubling at the same time, and I hope to share a sense of both my optimism and my misgivings.

But with the symposium over, I can now take the rest of the evening to relax, then sleep, have a nice longer run, and return to the first day of OOPSLA proper free to engage ideas with no outside encumbrances.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 16, 2005 9:52 PM

OOPSLA Day 1: The Morning of Extravagaria

OOPSLA 2005 logo

OOPSLA has arrived, or perhaps I have arrived at OOPSLA. I almost blew today off, for rest and a run and work in my room. Some wouldn't have blamed me after yesterday, which began at 4:42 AM with a call from Northwest Airlines that my 7:05 AM flight had been cancelled, included my airline pilot missing the runway on his first pass at the San Diego airport, and ended standing in line for two hours to register at my hotel. But I dragged myself out of my room -- in part out of a sense of obligation to having been invited to participate, and in part out of a schoolboy sense of propriety that I really ought to go to the events at my conferences and make good use of my travels.

My event for the day was an all-day workshop called Extravagaria III: Hunting Creativity. As its title reveals, this workshop was the third in a series of workshops initiated by Richard Gabriel a few years ago. Richard is motivated by the belief that computer science is in the doldrums, that what we are doing now is mostly routine and boring, and that we need a jolt of creativity to take the next Big Step. We need to learn how to write "very large-scale programs", but the way we train computer scientists, especially Ph.D. students and faculty, enforces a remarkable conservatism in problem selection and approach. The Extravagaria workshops aim to explore creativity in the arts and sciences, in an effort to understand better what we mean by creativity and perhaps better "do it" in computer science.

The workshop started with introductions, as so many do, but I liked the twist that Richard tossed in: each of us was to tell what was the first program we ever wrote out of passion. This would reveal something about each of us to one another, and also perhaps recall the same passion within each storyteller.

My first thought was of a program I wrote as a high school junior, in a BASIC programming course that was my first exposure to computers and programs. We wrote all the standard introductory programs of the day, but I was enraptured with the idea of writing a program to compute ratings for chessplayers following the Elo system. This was much more complex than the toy problems I solved in class, requiring input in the form of player ratings and a crosstable showing results of games among the players and output in the form of updated ratings for each player. It also introduced new sorts of issues, such as using text files to save state between runs and -- even more interesting to me -- the generation of an initial set of ratings through a mechanism of successive approximations, a process that may never quite converge unless we specified an epsilon larger than 0. I ultimately wrote a program of several hundred lines, a couple of orders of magnitude larger than anything I had written before. And I cared deeply about my program, the problem it solved, and its usefulness to real people.
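That successive-approximation idea is easy to sketch. The code below is not a reconstruction of my old BASIC program; it is a minimal illustration in Java, and the class name, starting value, and convergence details are my own choices. It uses the common linear performance-rating approximation (average opponent rating plus 400 times wins-minus-losses over games played), repeating each pass until no rating moves by more than an epsilon:

```java
import java.util.Arrays;

public class InitialRatings {
    // score[i][j] holds player i's result against player j:
    // 1 for a win, 0.5 for a draw, 0 for a loss (all-play-all assumed).
    // Repeatedly recompute every player's performance rating from the
    // current estimates, until the largest change is below epsilon --
    // or we give up after maxPasses, since convergence isn't guaranteed.
    static double[] estimate(double[][] score, int maxPasses, double epsilon) {
        int n = score.length;
        double[] rating = new double[n];
        Arrays.fill(rating, 1500.0);          // arbitrary common starting guess
        for (int pass = 0; pass < maxPasses; pass++) {
            double[] next = new double[n];
            double maxChange = 0.0;
            for (int i = 0; i < n; i++) {
                double oppSum = 0.0, points = 0.0;
                int games = 0;
                for (int j = 0; j < n; j++) {
                    if (i == j) continue;
                    oppSum += rating[j];
                    points += score[i][j];
                    games++;
                }
                // linear approximation: average opponent rating
                // plus 400 * (wins - losses) / games
                next[i] = oppSum / games
                        + 400.0 * (2.0 * points - games) / games;
                maxChange = Math.max(maxChange, Math.abs(next[i] - rating[i]));
            }
            rating = next;
            if (maxChange < epsilon) break;   // converged "enough"
        }
        return rating;
    }
}
```

Each pass feeds the previous pass's estimates back in as the opponents' ratings, which is exactly why a nonzero epsilon matters: the estimates can oscillate ever more finely without ever settling exactly.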

I enjoyed everyone else's stories, too. They reminded us all about the varied sources of passion, and how solving a problem can open our eyes to a new world for us to explore. I was pleased by the diversity of our lot, which included workshop co-organizer John Gribble, a poet friend of Richard's who has never written a program; Rebecca Rikner, the graphic artist who designed the wonderful motif for Richard's book Writers' Workshops and the Work of Making Things; and Guy Steele, one of the best computer scientists around. The rest of us were computer science and software types, including one of my favorite bloggers, Nat Pryce. Richard's first passionate program was perhaps a program to generate "made-up words" from some simple rules, to use in naming his rock-and-roll band. Guy offered three representative, if not first, programs: a Lisp interpreter written in assembly language, a free verse generator written in APL, and a flow chart generator written in RPG. This wasn't the last mention of APL today, which is often the sign of a good day.

Our morning was built around an essay written by John Gribble for the occasion, called "Permission, Pressure, and the Creative Process". John read his essay, while occasionally allowing us in the audience to comment on his remarks or previous comments. John offered as axioms two beliefs that I share with him:

  • that all people are creative, that is, possess the potential to act creatively, and
  • that there is no difference of kind between creativity in the arts and creativity in the sciences.

What the arts perhaps offer scientists is the history and culture of examining the creative process. We scientists and other analytical folks tend to focus on products, often to the detriment of how well we understand how we create them.

John quoted Stephen King from his book On Writing, that the creator's job is not to find good ideas but to recognize them when they come along. For me, this idea foreshadows Ward Cunningham's keynote address at tomorrow's Educators' Symposium. Ward will speak on "nurturing the feeble simplicity", on recognizing the seeds of great ideas despite their humility and nurturing them into greatness. As Brian Foote pointed out later in the morning, this sort of connection is what makes conferences like OOPSLA so valuable and fun -- plunk yourself down into an idea-rich environment, soak in good ideas from good minds, and your own mind has the raw material it needs to make connections. That's a big part of creativity!

John went on to assert that creativity isn't rare, but rather so common that we are oblivious to it. What is rare is for people to act on their inspirations. Why do we not act? We have so low an opinion of our selves that we figure the inspiration isn't good enough or that we can't do it justice in our execution. Another reason: We fear to fail, or to look bad in front of our friends and colleagues. We are self-conscious, and the self gets in the way of the creative act.

Most people, John believes, need permission to act creatively. Most of us need external permission and approval to act, from friends or colleagues, peers or mentors. This struck an immediate chord with me in three different relationships: student and teacher, child and parent, and spouse and spouse. The discussion in our workshop focused on the need to receive permission, but my immediate thought was of my role as potential giver of permission. My students are creative, but most of them need me to give them permission to create. They are afraid of bad grades and of disappointing me as their instructor; they are self-conscious, as going through adolescence and our school systems tend to make them. My young daughters began life unself-conscious, but so much of their lives are about bumping into boundaries and being told "Don't do that." I suspect that children grow up most creative in an environment where they have permission to create. (Note that this is orthogonal to the issue of discipline or structure; more on that later.) Finally, just as I find myself needing my wife's permission to do and act -- not in the henpecked husband caricature, but in the sense of really caring about what she thinks -- she almost certainly feels the need for *my* permission. I don't know why this sense that I need to be a better giver of permission grew up so strong so quickly today, but it seemed like a revelation. Perhaps I can change my own behavior to help those around me feel like they can create what they want and need to create. I suspect that, in loosing the restrictions I project onto others, I will probably free myself to create, too.

When author Donald Ritchie is asked how to start writing, he says, "First, pick up your pencil..." He's not being facetious. If you wait for inspiration to begin, then you'll never begin. Inspiration comes to those already involved in the work.

Creativity can be shaped by constraints. I wrote about this idea six months or so ago in an entry named Patterns as a Source of Freedom. Rebecca suggested that, for her at least, constraints are essential to creativity, that this is why she opted to be a graphic designer instead of a "fine artist". The framework we operate in can change, across projects or even within a project, but the framework can free us to create. Brian recalled a song by the '80s punk-pop band Devo called Freedom Of Choice:

freedom of choice is what you got
then if you got it you don't want it
seems to be the rule of thumb
don't be tricked by what you see
you got two ways to go
freedom from choice is what you want

Richard then gave a couple of examples of how some artists don't exercise their choice at the level of creating a product but rather at the level of selecting from lots of products generated less self-consciously. In one, a photographer for National Geographic put together a pictorial article containing 22 pictures selected from 40,000 photos he snapped. In another, Francis Ford Coppola shot 250 hours of film in order to create the 2-1/2 hour film Apocalypse Now.

John then told a wonderful little story about an etymological expedition he took along the trail of ideas from the word "chutzpah", which he adores, to "effrontery", "presumptuous", and finally "presumption" -- to act as if something were true. This is a great way to free oneself to create -- to presume that one can, that one will, that one should. Chutzpah.

Author William Stafford had a particular angle he took on this idea, what he termed the "path of stealth". He refused to believe in writer's block. He simply lowered his standards. This freed him to write something and, besides, there's always tomorrow to write something better. But as I noted earlier, inspiration comes to those already involved in the work, so writing anything is better than writing nothing.

As editor John Gould once told Stephen King, "Write with the door closed. Revise with the door open." Write for yourself, with no one looking over your shoulder. Revise for readers, with their understanding in mind.

Just as permission is crucial to creativity, so is time. We have to "make time", to "find time". But sometimes the work is on its own time, and will come when and at the rate it wants. Creativity demands that we allow enough time for that to happen! (That's true even for the perhaps relatively uncreative act of writing programs for a CS course... You need time, for understanding to happen and take form in code.)

Just as permission and time are crucial to creativity, John said, so is pressure. I think we all have experienced times when a deadline hanging over our heads seemed to give us the power to create something we would otherwise have procrastinated away. Maybe we need pressure to provide the energy to drive the creative act. This pressure can be external, in the form of a client, boss, or teacher, or internal.

This is one of the reasons I do not accept late work for a grade in my courses; I believe that most students benefit from that external impetus to act, to stop "thinking about it" and commit to code. Some students wait too long and reach a crossover point: the pressure grows quite high, but time is too short. Life is a series of balancing acts. The play between pressure and time is, I think, fundamental. We need pressure to produce, but we need time to revise. The first draft of a paper, talk, or lecture is rarely as good as it can be. Either I need to give myself time to create more and better drafts, or -- which works better for me -- I need to find many opportunities to deliver the work, to create multiple opportunities to create in the small through revision, variation, and natural selection. This is, I think, one of the deep and beautiful truths embedded in extreme programming's cycle "write a test, write code, and refactor".

Ultimately, a professional learns to rely more on internal pressure, pressure applied by the self for the self, to create. I'm not talking about the censoriousness of self-consciousness, discussed earlier, which tells us that what we produce isn't good enough -- that we should not act, at least in the current product. I'm talking about internal demands that we act, in a particular way or time. Accepting the constraints of a form -- say, the line and syllable restrictions of haiku, or the "no side effects" convention of functional programming style -- puts pressure on us to act in a way, whether it's good or bad. John gave us two other kinds of internal pressure, ones he applies to himself: the need to produce work to share at his two weekly writers' workshops, and the self-discipline of submitting work for publication every month. These pressures involve outside agents, but they are self-imposed, and require us to do something we might otherwise not.

John closed with a short inspiration. Pay attention to your wishes and dreams. They are your mind's way of telling you to do something.

We spent the rest of the morning chatting as a group on whatever we were thinking after John's talk. Several folks related an experience well-known to any teacher: someone comes to us asking for help with a problem and, in the act of explaining the problem to us, they discover the answer for themselves. Students do this with me often. Is the listener essential to this experience, or would it work just as well if we only imagined we were speaking to someone? I suspect that another person is essential for this to work for the learner, both because having a real person to talk to makes us explain things (pressure!) and because the listener can force us to explain the problem more clearly ("I don't understand this yet...")

A recurring theme of the morning was the essential interactivity of creativity, even when the creator works fundamentally alone. Poets need readers. Programmers need other folks to bounce ideas off of. Learners need someone to talk to, if only to figure things out for themselves. People can be sources of ideas. They can also be reflectors, bouncing our own ideas back at us, perhaps in a different form or perhaps the same, but with permission to act on them. Creativity usually comes in the novel combination of old ideas, not truly novel ideas.

This morning session was quite rewarding. My notes on the whole workshop are, fittingly, about half over now, but this article has already gotten quite long. So I think I'll save the afternoon sessions for entries to come. These sessions were quite different from the morning, as we created things together and then examined our processes and experiences. They will make fine stand-alone articles that I can write later -- after I break away for a bite at the IBM Eclipse Technology Exchange reception and for some time to create a bit on my own for what should take over my mind for a few hours: tomorrow's Educators' Symposium, which is only twelve hours and one eight- to ten-mile San Diego run away!

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

October 14, 2005 6:11 PM

Rescued by Google

Okay, so I know some people don't like Google. They are getting big and more ambitious. Some folks even have Orwellian nightmares about Google. (If that link fails, try this one.) But, boy, can Google be helpful.

Take today, for instance. I was scping some files from my desktop machine to the department server, into my web space. Through one part sloppiness and one part not understanding how scp handles sub-directories, I managed to overwrite my home page with a different index.html.

What to do now? I don't keep a current back-up of that web space, because the college backs it up regularly. But recovering back-up files is slow, it's Friday morning, I'm leaving for OOPSLA at sunrise tomorrow, and I don't have time for this.

What to do?

I google myself. Following the first hit doesn't help, because it goes to the live page. But clicking on the Cached link takes me to Google's cached copy of my index. The only difference between it and the Real Thing is that they have bolded the search terms Eugene and Wallingford. Within seconds, my web site is as good as new.

Maybe I should be concerned that Google has such an extensive body of data. We as a society need to be vigilant when it comes to privacy in this age of aggregation and big search tools and indexes of God, the universe, and everything. We need to be especially vigilant about civil rights in an age when our governments could conceivably gain access to such data. But the web and Google have changed how we think about data storage and retrieval, search and research. These tools open doors to collective goods we could hardly imagine before. Let's be vigilant, but let's look for paths forward, not paths backward.

Another use of Google data that I am enjoying of late is gVisit, a web-based tool for tracking visitors to web sites. I use a bare-bones blogging client, NanoBlogger, which doesn't come with fancy primitive features like comments and hit counters. (At least the version I use didn't; there are more recent releases.) But gVisit lets me get a sense of at least where people have been reading my blog. Whip up a little Javascript, and I can see the last N unique cities from which people have read Knowing and Doing, where I choose N. I love seeing that someone from Indonesia or Kazakhstan or Finland has read my blog. I also love seeing names of all the US cities in which readers live. Maybe it's voyeurism, but it reminds me that people really do read.

No, I haven't tried Google Reader yet. I'm still pretty happy with NetNewsWire Lite, and then there's always the latest version of Safari...

Posted by Eugene Wallingford | Permalink | Categories: Computing, General

October 13, 2005 6:36 PM

A Good Day

Some days, I walk out of the classroom and feel, "Man, I am good." Today was such a day. It wasn't a perfect class, though I've felt that way some days, too. But today everything seemed to go just right: The problem we solved was challenging, and the ideas we needed to solve it flowed naturally. Students asked questions at the right times, which let me address important issues just in time. Unsolicited, students also made comments that added lightness to our work, and comments that indicated they were seeing the beauty in the approach and the code.

I leave the classroom enough days feeling much less, so this sensation stands out. It's a nice way to end the week before I head off to OOPSLA.

Now, I have no idea whether the students felt the same way leaving class as I did, other than the well-placed questions and comments. For all I know, they left saying "Man, that guy is a @#$%^?." I hope not but, you know, some days it just doesn't matter.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 11, 2005 6:54 PM

Something New Every Day

I'm not a Way Old-Timer, but I have been teaching at my current university for thirteen years. In that time, I have seen a lot of odd student programs: wacky bugs, quirky algorithms, outlandish hacks, and just plain funny code. But today I saw a new one...

One of my colleagues is teaching our intro majors course. In the lab today, he asked students to write some code, including a three-argument Java method to return the minimum of the three values. The goal of this exercise was presumably to test the students' ability to handle multiple decisions, to write compound boolean expressions, and most likely to write nested if statements.

Here is my rendition of one student's extraordinary solution:

    public int min( int x, int y, int z )
    {
        // count up from the smallest possible int; the first of the
        // three arguments we reach is the minimum
        for ( int i = Integer.MIN_VALUE; i <= Integer.MAX_VALUE; i++ )
            if ( i == x )
                return x;
            else if ( i == y )
                return y;
            else if ( i == z )
                return z;
        return x;    // satisfies the compiler; never reached, since the loop visits every int
    }

Sadly, the student didn't get this quite right, at least as right as this approach can be. The program did not use Integer.MIN_VALUE and Integer.MAX_VALUE to control its loop; it used hardcoded negative and positive 2,000,000,000. As a result, it had to rely on a back-up return statement after the for-loop to handle cases where the absolute values of all three numbers were greater than 2,000,000,000. So the solution loses points for correctness, and a bit of luster on style.

But still -- wow. No matter what instructors think their students will think, the students can come up with something out of left field.

I have to admit... In a certain way, I admire this solution. It demonstrates creativity, and even applies a pattern that works well in other contexts. If the student had been asked to write a method to find the smallest value in a three-element unsorted array, then brute force would not only have seemed like a reasonable thing to do; it would have been the only option. Why not try it for ints? (For us XP mavens: Is this algorithm the simplest thing that will work?)
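The array version of brute force is just a linear scan. Here is a minimal sketch of that pattern (the class name and scaffolding are mine, added to make the method runnable):

```java
public class ArrayMin
{
    // linear scan: examine every element, keeping the smallest seen so far
    public static int min( int[] values )
    {
        int smallest = values[0];
        for ( int v : values )
            if ( v < smallest )
                smallest = v;
        return smallest;
    }

    public static void main( String[] args )
    {
        System.out.println( ArrayMin.min( new int[] { 4, 2, 9 } ) );   // prints 2
    }
}
```

This is the pattern the student seems to have transferred: enumerate candidates, return the first (or best) that qualifies.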

One answer to the "Why not?" question comes at run time. This method takes 12 seconds or so on average to find its answer. That's almost 12 seconds more than an if-based solution takes. :-)
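For comparison, here is a sketch of the straightforward comparison-based solution the exercise presumably had in mind; the class wrapper and main method are my scaffolding, not part of the lab:

```java
public class MinDemo
{
    // return the smallest of three ints using at most two comparisons
    public static int min( int x, int y, int z )
    {
        int smallest = x;                  // assume x until shown otherwise
        if ( y < smallest ) smallest = y;
        if ( z < smallest ) smallest = z;
        return smallest;
    }

    public static void main( String[] args )
    {
        System.out.println( MinDemo.min( 3, 1, 2 ) );   // prints 1
    }
}
```

Two comparisons in constant time, versus up to four billion loop iterations for the counting version.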

Students can bring a smile to an instructor's face most every day.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 11, 2005 8:12 AM

Alive Again

Nine days later, I finally felt like a runner again this morning. My legs felt strong, my lungs felt strong, and I was able to pick up my pace and hold it strong for 40+ minutes. Now my legs tingle in that way they always do after a good workout -- not sore, just there.

I am alive again!

Posted by Eugene Wallingford | Permalink | Categories: Running

October 07, 2005 4:34 PM

Teaching and Administration as Running

Over the life of this blog, I have used running as a metaphor for software development, for example, in this entry about pace and expectations. But I recently came across running as a metaphor for another part of my professional life: teaching versus administration. This writer compares teaching to sprinting, and administration to marathoning. On first glance, the analogy is attractive.

A teacher spends hours upon hours preparing for a scant few hours in front of the class, and those hours are high-intensity and quite draining. I've rarely in other situations been as tired as I am at the end of a day in which I teach three 75-minute class sessions.

An administrator has to save up energy for use throughout a week. A meeting here, a phone call from a parent there, encounters with deans and faculty and university staff and students... Administrators have to pace themselves for a longer haul, as they have to be up and ready to go more frequently over most or all of their time on duty.

The real test of an analogy's value is in the questions it helps us ask about what we do. So I'll have to think more about this "sprinting versus marathoning" analogy before I know whether it is a really good one.

I do know one thing, though. If my administrative duties ever make me feel like this, I will return to my full-time faculty gig faster than my dean can say, "Are you sure?"

Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Teaching and Learning

October 06, 2005 6:48 PM

More Mathematics, More Coincidence

When I started writing my recent piece on math requirements for CS degrees, I had intended to go in a different direction than the piece ultimately went. My original plan was to discuss how it is that students who do take courses like algebra in high school still seem so uncomfortable or ill-prepared to do basic arithmetic in their CS courses. But a related thread on the SIGCSE mailing list took my mind, and so the piece, elsewhere. You know what they say... You never know what you will write until you write.

So I figured I would try to write an article on my original topic next. Again before I could write it, another math coincidence occurred, only this time nearer to my intended topic. I read Tall, Dark, and Mysterious's detailed exploration of the same problem. She starts by citing a survey that found 40% of university professors believe that *most* of their students lack "the basic skills for university-level work", explores several possible causes of the problem, and then discusses in some detail what she believes the problem to be: an emphasis in education these days on content over skill. I think that this accounts for at least part of the problem.

Whether or not we overemphasize content at the expense of skill, I think there is another problem at play. Even when we emphasize skill, we don't always require that students master the skills that they learn.

For many years, I had a short article hanging on my office wall that asked the question: What is the difference between a grade of C and a grade of A in a course? Does it mean that a C student has learned less content than the A student? The same content, but not as deeply? Something else?

Several popular jokes play off this open question. Do you want your medical doctor to have been a C student? Your lawyer? The general leading your army into battle?

In my experience as a student and educator, the difference between a C and an A indicates different things depending on the teacher involved and, to a lesser extent, the school involved. But it's not clear to me that even for these teachers and schools the distinction is an intentional one, or that the assigned grades always reflect what is intended.

Learning theory gives us some idea of how we might assign grades that reflect meaningful distinctions between different levels of student performance. For example, Bloom's taxonomy of educational objectives includes six levels of learning: knowledge, comprehension, application, analysis, synthesis, and evaluation. These levels give us a way to talk about increasingly more masterful understanding and ability. Folks in education have written a lot in this sphere of discussion which, sadly, I don't know as well as I probably should. Fortunately, some CS educators have begun to write articles applying the idea to computer science curricula. We are certainly better off if we are thinking explicitly about what satisfactory and unsatisfactory performance means in our courses, and in our curricula overall.

I heard about an interesting approach to this issue this morning at a meeting of my college's department heads. We were discussing the shortcomings of a course-by-course approach to assessing the learning outcomes of our university's liberal arts core, which purports by cumulative effect to create well-rounded, well-educated thinkers. One head described an assessment method she had read about in which the burden was shifted to the student. In this method, each student was asked to offer evidence that they had achieved the goal of being a well-rounded thinker. In effect, the student was required to "prove" that they were, in fact, educated. If we think in terms of the Bloom taxonomy discussed above, each student would have to offer evidence that they had reached each of the six cognitive levels of maturity. Demonstrating knowledge might be straightforward, but what of comprehension, application, analysis, synthesis, and evaluation? Students could assemble a portfolio of projects, the development of which required them to comprehend, apply, analyze, synthesize, and evaluate.

This reminded me very much of how my architecture friends had to demonstrate their readiness to proceed to the next level of the program, and ultimately to graduate: through a series of juried competitions. These projects and their juried evaluation fell outside the confines of any particular course. I think that this would be a marvelous way for computer science students, at least the ones focused on software development as a career path, to demonstrate their proficiency. I have been able to implement the idea only in individual courses, senior-level project courses required of all our majors. The result has been some spectacular projects in the intelligent systems area, including one I've written about before. I've also seen evidence that some of our seniors manage to graduate without having achieved a level of proficiency I consider appropriate. As one of their instructors, I'm at least partly responsible for that.

This explains why I am so excited about one of the sessions to be offered as a part of the upcoming Educators Symposium at OOPSLA 2005. The session is called Apprenticeship Agility in Academia. Dave West and Pam Rostal, formerly of New Mexico Highlands University, will demonstrate "a typical development iteration as practiced by the apprentices of the NMHU Software Development Apprenticeship Program". This apprenticeship program was an attempt to build a whole software development curriculum on the idea that students advance through successive levels of knowledge and skill mastery. Students were able to join projects based on competencies already achieved and the desire to achieve further competencies offered by the projects. Dave and his folks enumerated all of the competencies that students must achieve prior to graduation, and students were then advanced in the program as they achieved them at successive levels of mastery. It solved the whole "C grade versus A grade" problem by ignoring it, focusing instead on what the faculty really wanted students to achieve.

Unfortunately, I have to use the past tense when describing this program, because NMHU canceled the program -- apparently for reasons extraneous to the quality or performance of the program. But I am excited that someone had the gumption and an opportunity to try this approach in practice. I'd love to see more schools and more CS educators try to integrate such ideas into their programs and courses. (That's one of the advantages of chairing an event like the OOPSLA Educators Symposium... I have some ability to shape or direct the discussion that educators have at the conference.)

For you students out there: To what extent do you strive for mastery of the skills you learn in your courses? When do you settle for less, and why? What can you do differently to help yourselves become effective thinkers and practitioners?

For you faculty out there: To what extent do you focus your teaching and grading efforts on student mastery of skills? What techniques work best, and why? When do you settle for less, and why? I myself know that I settle for less all too often, sometimes due to the impediments placed in my way by our curriculum and university structure, but sometimes due to my own lack of understanding or even laziness. Staying in an intellectual arena in which I can learn from others such as West and Rostal is one way I encourage myself to think Big Thoughts and try to do better.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 02, 2005 10:19 PM

Marathon Signage

Marathoners often adorn themselves with signs. Some are pinned on shirts and shorts, others are written on clothes, and still others are written directly on their bodies. Some just give the runners' names, so that members of the crowd can call out personalized encouragement. Others list team names or other affiliations, as a way of acknowledging why they run. Still others broadcast messages of inspiration, for their own benefit and the benefit of others around them.

I wasn't entirely coherent for much of my Twin Cities Marathon, but a couple of inspirational messages caught my eye.

Best inspirational violation of "Once and Only Once":

I may not have a good time,
but I will have a good time.

Emphasize 'time' in the first line, and 'have' in the second.

I know that this quote is trite, but I appreciate the Power of Positive Thinking exhibited by this middle-aged woman. I hope she had two good times and, if not, at least the latter.

Best inspirational use of punctuation:

Heart attack.


Heart attack,

Marvelous. The survivor in that shirt ran strong early, like me, and faded late, like me. I hope he finished, and felt strong doing so. In any case, training for a marathon demonstrates that he wants the tale of his life to feature a comma, not a period. Bravo.

I did not have a good time for me, and all in all I can't say that I had a good time. But in the future I hope to tell this story with a comma. I owe that to people who persevere in the face of what could be much more significant periods.

Posted by Eugene Wallingford | Permalink | Categories: Running

October 02, 2005 9:48 PM

Not a Great Race by Me

Like the week that came before, my Twin Cities Marathon did not go as planned.

For a while, it went as well as I could have hoped. I ran with the 3:30 pace team from the beginning, and I felt rested and strong. At about 8 miles, things felt tougher, but I stayed on pace. We were running slightly sub-8:00 miles in anticipation of several uphill miles in the last eight. And our pace was incredibly even. We had banked about 80 seconds by the 4-mile split, and we then ran mile after mile at an 8:00 pace. At the 15-mile marker, I felt good.

But I don't think I was. I was struggling. By 18 miles, I was thinking positively about the rest of the race, but I think my body was near its end.

Still, at the 20-mile marker, I was nearly on target -- 2:40 and few seconds. But that marker effectively ended my race and began my attempt to survive the remaining distance. I was out of gas.

I'll spare you the full story and leave you with this: I needed to walk 2, 3, or (at least once) 4 minutes at a time. I consumed a lot of fluid. My thighs cramped -- the first time I've ever cramped in a race. When running, I slowed to a crawl.

Finally, at about 24.2 miles, I finished my last stint walking. I really wanted to finish on the move, so I jogged, ever so slowly, to the 26-mile marker, at which point I had enough left to accelerate to a 9:10 pace for the last .2 miles.

My official time was around 3:57, though my chip time will be closer to 3:55. (I haven't seen race results on-line yet.)

Within a few minutes of crossing the finish line, I was ill. I don't often seek out medical attention, but I knew I needed to this time. After an exam on the green, the EMT crew sent me to the medical tent for some real treatment. I spent nearly an hour there, because I wasn't getting any better. They finally discharged me, but even then I wasn't much better, so much so that I decided not to risk the drive back to Cedar Falls tonight. I went back to my hotel for plenty of fluids, some rest, a big meal, and more rest. Tomorrow, I'll give it a go.

The diagnosis: dehydration. I thought I drank more than enough during the race, but I probably hadn't drunk or eaten enough during the week of illness leading up to the race. I've never been dehydrated before or felt that bad after any physical activity in my life. And I don't ever want to again.

Marathoners often say "Respect the distance." Before this race, a marathoner friend told me, "Anything can happen in the last six miles." This race reminded me that anything can happen in the last six miles of a marathon and that I must respect the distance. Given my situation heading into this race, I should have set out more conservatively, even though I felt good early. Respect the distance.

I can say this. The Twin Cities Marathon is a great race. The course is as beautiful as advertised. The organizing team creates a great environment for the runners. The crowds offered great support from the first mile to the last. And the people in the trenches, the race volunteers, more than live up to the reputation most have of folks from the Midwest: friendly and out-of-their-way helpful.

I may have to run this marathon again some day, so that I can enjoy the route and people more from beginning to end. But I don't want to think about another marathon for a while yet.

Posted by Eugene Wallingford | Permalink | Categories: Running