A month hardly deserves its own review, especially when it contains no Running on the Road reports. But after so many months off and on, and a November/December stretch of downtime that killed the momentum gained from running my sixth marathon, I am happy simply to have a month of running to report. Five consecutive weeks of increasing mileage, with steady 27-29 mile weeks the last three. The cold snap last week drove me inside for lap running, which means faster miles and more fatigue. Fortunately, the faster miles have not led to a return of the symptoms behind my months off and on.
My goal for February is steady running; my hope is to increase my mileage slowly but steadily into the mid-30s, where I like to be in the winter. My next big trip is planned for SIGCSE 2010 in Milwaukee, which has a river and a riverwalk. I love to explore new cities on foot.
Over the last week, there has been a long thread on the SIGCSE listserv about writing textbooks. Most interesting to me was Kim Bruce's note, "Thinking about electronic books". Kim noted Apple's announcement of the iPad, which comes with software for reading electronic books. Having written dead-tree books before, he wondered how the evolution of technology might help us to enhance our students' learning experience.
If we can provide books on a general-purpose computer, we have so many options available. Kim mentions one: replacing graphics with animations. Rather than seeing a static picture of the state of some computation, students could watch the computation unfold, with a direct connection to the code that produces it. This offers a huge improvement in the way that students can experience the ideas we want them to learn. You can see this difference in examples Kim posted of his printed textbook and his on-line lecture notes.
Right now, authors face a challenging practical obstacle: the lack of a standard platform. If a book requires features specific, say, to an iPad or to Windows, then its audience is limited. Even if it doesn't, a particular device may not support some near-standard technology, such as Flash on Apple products, leaving users of those devices unable to access the book at all. It would be nice to have an authoring system that runs across platforms, transparently, so that writers can focus on what they want to write, not on compatibility issues.
As Kim points out, we can accomplish some of this already on the web, writing for a browser. This isn't good enough, though. Reading long-ish documents at a desktop computer through a browser changes the reading experience in important ways. Our eyes -- and the rest of our bodies -- need something more.
With the evolution of handheld devices toward providing the full computational power we see on the desktop, our ability to write cross-platform books grows. The folks working on Squeak, Croquet, Sophie, and other spin-off technologies have this in mind. They are creating authoring systems that run across platforms and that rely less and less on underlying OS and application software for support.
As we think about how to expand the book-reading experience using new technologies, we can also see a devolution from the other side. Fifteen years ago, I spent a few years thinking about intelligent tutoring systems (ITS). My work on knowledge-based systems in domains such as engineering and business had begun to drift toward instruction. I hoped that we could use what we'd learned about knowledge representation and generic problem-solving patterns to build programs that could help people learn. These systems would encode knowledge from expert teachers in much the way that our earlier systems encoded knowledge from expert tax accountants, lawyers, and engineers.
Intelligent tutoring systems come at learning from the AI side of things, but the goal is the same as that of textbooks: to help people learn. AI promised something more dynamic than what we could accomplish on the printed page. I have not continued in that line of work, but I keep tabs on the ITS community to see what sort of progress they have been making. As with much of AI, the loftiest goals we had when we started are now grounded better in pragmatics, but the goal remains. I think Mark Guzdial has hit upon the key idea in his article Beat the book, not the teacher. The goal of AI systems should not be (at least immediately) to improve upon the performance of the best human teachers, or even to match it; the goal should be to improve upon the performance of the books we ask our students to read. This idea is the same one that Kim Bruce encourages us to consider.
As our technology evolves in the direction of reasonably compact mobile devices with full computational power and high-fidelity displays, we have the ability to evolve how and what we write toward the dream of a dynabook. We should keep in mind that, with computation and computer programming, we are creating a new medium. Ultimately, how and what we write may not look all that much like a traditional book! It may be something new, something we haven't thought of yet. There is no reason to limit ourselves to producing the page-turning books that have served us so well for the last few centuries. That said, a great way to move forward is to try to evolve our books to see where our new technology can lead us, and to find out where we come up short.
A friend sent me a link to a New York Times book review, Odysseus Engages in Spin, Heroically, by Michiko Kakutani. My friend and I both enjoy the intersection of different disciplines and people who cross boundaries. The article reviews "The Lost Books of the Odyssey", a recent novel Kakutani calls "a series of jazzy, post-modernist variations on 'The Odyssey'" and "an ingeniously Borgesian novel that's witty, playful, moving and tirelessly inventive". Were the book written by a classicist, we might simply add it to our to-read list and move on, but it's not. Its author, Zachary Mason, is a computer scientist specializing in artificial intelligence.
I'm always glad to see examples of fellow computer scientists with interests and accomplishments in the humanities. Just as humanists bring a fresh perspective when they come to computer science, so do computer scientists bring something different when they work in the humanities. Mason's background in AI could well contribute to how he approaches Odysseus's narrative. Writing programs that make it possible for computers to understand or tell stories causes the programmer to think differently about understanding and telling stories more generally. Perhaps this experience is what enabled Mason to "[pose] new questions to the reader about art and originality and the nature of storytelling".
Writing a program to do any task has the potential to teach us about that task at a deeper level. This is true of mundane tasks, for which we often find our algorithmic description is unintentionally ambiguous. (Over the last couple of weeks, I have experienced this while working with a colleague in California who is writing a program to implement a tie-breaking procedure for our university's basketball conference.) It is all the more true for natural human behaviors like telling stories.
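As a hedged illustration (the teams, records, and rule below are hypothetical, not my colleague's actual conference procedure), consider how quickly a plain-English tie-breaker becomes ambiguous once you try to code it:

```python
def break_two_way_tie(a, b, head_to_head):
    """Return the winner of a two-team tie using the rule
    'the team with the better head-to-head record wins'.
    head_to_head maps (winner, loser) pairs to games won."""
    a_wins = head_to_head.get((a, b), 0)
    b_wins = head_to_head.get((b, a), 0)
    if a_wins > b_wins:
        return a
    if b_wins > a_wins:
        return b
    # The English rule says "use the head-to-head record" -- but what if
    # the teams split their games?  Or three teams tie in a cycle, A over
    # B over C over A?  The prose never says.  Writing the program forces
    # the question to the surface.
    raise ValueError("tie-breaker is ambiguous for " + a + " vs " + b)
```

The exception is exactly the kind of case the prose description silently glosses over, and exactly what the programmer must go back and ask about.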
In one of those unusual confluences of ideas, the Times book review came to me the same week that I read Peter Merholz's Why Design Thinking Won't Save You, which is about the value, even necessity, of bringing different kinds of people and thinking to bear on the tough problems we face. Merholz is reacting to a trend in the business world to turn to "design thinking" as an alternative to the spreadsheet-driven analytical thinking that has dominated the world for the last few decades. He argues that "the supposed dichotomy between 'business thinking' and 'design thinking' is foolish", that understanding real problems in the world requires a diversity of perspectives. I agree.
For me, Kakutani's and Merholz's articles intersected in a second way as I applied what they might say about how we build software. Kakutani explicitly connects author Mason's CS background to his consideration of narrative:
["Lost Books" is] a novel that makes us rethink the oral tradition of entertainment that thrived in Homer's day (and which, with its reliance upon familiar formulas, combined with elaboration and improvisation, could be said to resemble software development) ...
When I read Merholz's argument, I was drawn to an analogy with a different kind of writing, journalism:
Two of Adaptive Path's founders, Jesse James Garrett and Jeffrey Veen, were trained in journalism. And much of our company's success has been in utilizing journalistic approaches to gathering information, winnowing it down, finding the core narrative, and telling it concisely. So business can definitely benefit from such "journalism thinking."
So can software development. This passage reminded me of a panel I sat on at OOPSLA several years ago, about the engineering metaphor in software development. The moderator of the panel asked folks in the audience to offer alternative metaphors for software, and Ward Cunningham suggested journalism. I don't recall all the connections he made, but they included working on tight deadlines, having work product reviewed by an editor, and highly stylized forms of writing. That metaphor struck me as interesting then, and I have since written about the relationship between software development and writing, for example here. I have also expressed reservations about engineering as a metaphor for building software, such as here and here.
I have long been coming to believe that we can learn a lot about how to build software better by studying intensely almost every other discipline, especially disciplines in which people make things -- even, say, maps! When students and their parents ask me to recommend minors and double majors that go well with computer science, I often mention the usual suspects but always make a pitch for broadening how we think, for studying something new, or studying intensely an area that really interests them. Good will come from almost any discipline.
These days, I think that making software is like so many things and unlike them all. It's something new, and we are left to find our own way home. That is indeed part of the fun.
William Caputo channels the pragmatists:
These days, I believe the key difference between practice, value and principle (something much debated at one time in the XP community and elsewhere) is simply how likely we are to adjust them if things are going wrong for us (i.e., practices change a lot, principles rarely). But none should be immune from our consideration when our actions result in negative outcomes.
To the list of practice, value, and principle, pragmatists like Peirce, James, Dewey, and Mead would add knowledge. When we focus on their instrumental view of knowledge, it is easy to forget one of the critical implications of the view: that knowledge is contingent on experience and context. What we call "knowledge" is not unchanging truth about the universe; it is only less likely to change in the face of new experience than other elements of our belief system.
Caputo reminds us to be humble when we work to help others to become better software developers. The old pragmatists would concur, whether in asking us to focus on behavior over belief or to be open to continual adaptation to our environment. This guidance applies to teaching more than just software development.
Time to blog has been scarce, with the beginning of an unusual semester. I am teaching two courses instead of one, and administrative surprises seem to be arriving daily, both inside the department and out. Teaching gives me energy, but most days I leave for home feeling a little humbler than I started -- or a little less satisfied with the state of affairs.
Perhaps this is why a particular passage from an entry on urban planning policy at The Urbanophile keeps coming to mind. It offers a lesson for urban policy based on the author's reading of Dietrich Dörner's The Logic of Failure (a new addition to my must-read list):
The first [lesson] is simply to approach urban policy and urban planning with humility and rich understanding of the limits of what we can accomplish. This I think is desperately needed. There are so many policies out there that are promoted with almost messianic zeal by their advocates.
One person's messianic zeal, unfettered from reality, is a dangerous force. It can wear out even a resolute team; when coupled with normal human frailty, the results can destroy opportunities for progress.
Another passage from the same blog has had a more personal hold on me of late:
People with talent, with big dreams and ambitions, want to live in a place where the civic aspiration matches their personal aspirations.
Sense of place and sense of self are hard to separate. This is true for cities -- the great ones capitalize on the coalescence of individual and communal aspiration -- and for academic departments.
I've written occasionally here about programming as a new communications medium and the need to empower as many people as possible with the ability to write little programs for themselves. So it's probably not surprising that I read Clay Shirky's The Shock of Inclusion, which appears in Edge's How Has The Internet Changed The Way You Think?, with a thought about programming. Shirky reminds us that the revolution in thought created by the Internet has hardly begun. We don't have a good idea of how the Internet will ultimately change how we think because the most important change -- to the "cultural milieu of thought" -- has not happened yet. This sounds a lot like Alan Kay on the computer revolution, and like Kay, Shirky makes an analogy to the creation of the printing press.
When we consider the full effect of the Internet, as Shirky does in his essay, we think of its effect on the ability of individuals to share their ideas widely and to connect those ideas to the words of others. From the perspective of a computer scientist, I think of programming as a form of writing, as a medium both for accomplishing tasks and for communicating ideas. Just as the Internet has lowered the barriers to publishing and enables 8-year-olds to become "global publishers of video", it lowers the barriers to creating and sharing code. We don't yet have majority participation in writing code, but the tools we need are being developed and communities of amateur and professional programmers are growing up around languages, tools, and applications. I can certainly imagine a YouTube-like community for programmers -- amateurs, people we should probably call non-programmers who are simply writing for themselves and their friends.
Our open-source software communities have taught us not only that "collaboration between loosely joined parties can work at scales and over timeframes previously unimagined", as Shirky notes, but also several of his other lessons from the Internet: that sharing is possible in ways far beyond the 20th-century model of publishing, that "post-hoc peer review can support astonishing creations of shared value", that whole areas of human exploration "are now best taken on by groups", that "groups of amateurs can sometimes replace single experts", and that the involvement of users accelerates the progress of research and development. The open-source software world is a microcosm of the Internet. In its own way, with some conscious intent by its founders, it is contributing to the creation of the sort of Invisible College that Shirky rightly points out is vital to capitalizing on this 500-year advance in man's ability to communicate. The OSS model is not perfect and has much room for improvement, but it is a viable step in the right direction.
All I know is, if we can put the power of programming into more people's hands and minds, then we can help more people to have the feeling that led Dan Meyer to write Put THAT On The Fridge:
... rather than grind the solution out over several hours of pointing, clicking, and transcribing, for the first time ever, I wrote twenty lines of code that solved the problem in several minutes.
I created something from nothing. And that something did something else, which is such a weird, superhuman feeling. I've got to chase this.
We have tools and ideas that make people feel superhuman. We have to share them!
This semester, I am teaching my programming languages course, in which students and I program using Scheme.
Due to an unexpected but welcome uptick in enrollment across the department, I will also be team-teaching a 10-week course on Cobol. We are one of the few CS programs that still try to offer Cobol, and when our attempt this time resulted in a class large enough to run, we didn't want to cancel it.
    01  SWITCHES.
        05  EOF-SWITCH          PIC X   VALUE "F".
            88  AT-END-OF-FILE          VALUE "T", "t".
It's been a long time since I have spanned two such different languages in the same semester. I'll have to resist the urge to implement curried paragraphs in Cobol, though trying to replicate Cobol's Data Division magic in Scheme might be fun. It will certainly underscore just how different a couple of Cobol's features are from what students encounter in modern languages. It will be interesting to see how my time thinking about Cobol will affect what I say and do in Programming Languages.
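For readers who haven't met currying, here is a minimal sketch of the idea in Python (my own toy example, not course material; Scheme's version uses nested lambdas the same way):

```python
def curry2(f):
    """Turn a two-argument function into a chain of one-argument functions."""
    return lambda x: lambda y: f(x, y)

add = curry2(lambda x, y: x + y)
add_five = add(5)          # partially applied: waits for the second argument
result = add_five(3)       # result == 8
```

Nothing like this exists in Cobol, whose paragraphs take no arguments at all -- which is exactly the contrast that makes teaching the two languages side by side so interesting.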
Last spring, a colleague commented that he didn't think our department spent enough time trying to be great. This made me sad, but it struck me as true. At the time, I wasn't sure how to respond.
All groups have their internal politics. Some political situations are short-lived; others are persistent, endemic. We are no different, and maybe even above average. (Someone has to be!) Political struggles take time and energy. They steal focus.
I think everyone in our group desires to be great. Unfortunately, that's the easy part. For a group to achieve greatness, individuals must work together in a common direction. In our group, it is hard to build consensus on a shared vision. I don't pretend that once we share a vision greatness will come easily, but it's hard to get anywhere unless everyone is trying to go to the same place -- or at least is using the same criteria for progress.
As for me, in my role as department head, I have not always found -- or created -- the will, the energy, or the tools I need to help us move confidently in the direction of greatness. So, at times, we seem to settle, working locally but not globally.
This train of thought reminds me of a couple of comments James Shore made about stumbling through mediocrity in the context of agile software development:
The emphasis [in the software world] has shifted from "be great" to "be Agile." And that's too bad. As much as I like it, there's really no point in Agile for the sake of Agile.
The point is to be great, or perhaps more accurately, to do great things. Agile approaches are a path, not a destination.
I want to work with people who want to be great. People who aren't satisfied just fitting in. People who are willing to take risks, rock the boat, and change their environment to maximize their productivity, throughput, and value.
One of the things that has surprised me so much about group dynamics since I joined a faculty and perhaps more so since I've been in the position of head is the enormous role that fear plays in how individuals work and interact with one another. It takes courage to take risks, to rock the boat, and to change the environment in which we live and work. It takes courage to be honest. It takes courage to take an action that may make a colleague or supervisor unhappy.
Without courage, especially at key moments, opportunities pass, sometimes before they are even recognized.
I have experienced this in how I interact with others, and occasionally I observe it in how colleagues interact with me and others. I never thought that this would be a major obstacle on my path to greatness, or my department's.
(For what it's worth, Shore's second passage also describes the kind of students I like to work with, too. If it is hard for experienced adults to have this sort of gumption, imagine how much tougher an expectation it is to have of young people who are just learning how to step out into the world. Fortunately, as teachers, we have an opportunity to help students grow in this way.)
As I prepare for a new semester of teaching programming languages, I've been enjoying getting back into functional programming. Over break, someone somewhere pointed me toward a set of blog entries on why functional programming doesn't work. My first thought as I read the root entry was, "Just use Scheme or Lisp", or for that matter any functional language that supports mutation. But the author explicitly disallows this, because he is talking about the weakness of pure functional programming.
This is common but always seems odd to me. Many of the arguments one sees against FP are against "pure" functional programming: all FP, all the time. No one ever seems to talk in the same way about stateful imperative programming in, say, C. No one seems to place a purity test on stateful programming: "Try writing that without any functions!". Instead, we use state and sequencing and mutation throughout programs, and then selectively use functional style in the parts of the program where it makes sense. Why should FP be any different? We can use functional style throughout, and then selectively use state where it makes sense.
Mixing state and functions is the norm in imperative programming. The same should be true when we discuss functional programming. In the Lisp world, it is. I have occasionally read Lispers say that their big programs are about 90% functional and 10% imperative. That ratio seems a reasonable estimate for the large functional programs I have written, give or take a few percent either way.
Once we get to the point of acknowledging the desirability of mixing styles, the question becomes which proportion will serve us best in a particular environment. In game programming, the domain used as an example in the set of blog entries I read, perhaps statefulness plays a larger role than 10%. My own experience tells me that whenever I can emphasize functional style (or tightly-constrained stateful style, à la objects), I am usually better off. If I have to choose, I'll take 90:10 functional over 90:10 imperative any day.
If we allow ourselves to mix styles, then solving the author's opening problem -- making two completely unrelated functions interdependent -- becomes straightforward in a functional program: define the functions (or doppelgangers for them) in a closure and 'export' only the functions. To me, this is an improvement over the "pure" stateful approach, as it gives us state and dependent behavior without global variables mucking up the namespace of the program or the mindshare of the programmer.
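A minimal sketch of that closure idiom, in Python rather than Scheme (the counter functions are illustrative, not from the original article):

```python
def make_counter_pair():
    """Two otherwise unrelated functions made interdependent through
    state hidden in a closure -- a Python analogue of Scheme's
    let-over-lambda idiom."""
    count = 0  # state visible only to the two functions below

    def record_event():
        nonlocal count
        count += 1

    def events_so_far():
        return count

    # 'Export' only the functions; the state itself stays private.
    return record_event, events_so_far

record, total = make_counter_pair()
record()
record()
# total() now reports 2, yet no global variable carries the count.
```

The two functions communicate through `count`, but the rest of the program sees only their behavior, not the variable.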
Maybe part of the problem lies in how proponents of functional programming pitch things. Some are surely overzealous about the virtues of a pure style. But I think just as much of the problem lies in how limited people with vast experience and deep understanding of one way of thinking feel when they move outside their preferred style. Many programmers still struggle with object-oriented programming in much the same way.
Long ago, I learned from Ralph Johnson to encourage people to think in terms of programming style rather than programming paradigm. Style implies choice and freedom of thought, whereas paradigm implies rigidity and single-mindedness. I like to encourage students to develop facility with multiple styles, so that they will feel comfortable moving seamlessly in and out of styles, crossing borders whenever that suits the program they are writing. It is better for what we build to be defined by what we need, not by our limitations.
I do take to heart one piece of advice derived from another article in the author's set of articles on FP. People who would like to see functional programming adopted more widely could help the cause by providing more guidance to people who want to learn. What happens if we ask a professional programmer to rewrite a video game (the author's specialty) in pure FP, or
... just about any large, complex C++ program for that matter[?] It's doable, but requires techniques that aren't well documented, and it's not like there are many large functional programs that can be used as examples ...
First, both sides of the discussion should step away from the call for pure FP and allow a suitable mix of functional and stateful programming. Meeting in the middle better reflects how real programmers work. It also broadens considerably the set of FP-style programs available as examples, as well as the set of good instructional materials.
But let's also give credence to the author's plea. We should provide better and more examples, and do a better job of documenting the functional programming patterns that professional programmers need. How to Design Programs is great, but it is written for novices. Maybe Structure and Interpretation of Computer Programs is part of the answer, and I've been excited to see so many people in industry turning to it as a source of professional development. But I still think we can do better helping non-FP software developers make the move toward a functional style from what they do now. What we really need is the functional programming equivalent of the Gang of Four book.