For your daily reminder of what computation can do and what the future of art might be like, check out Jared Tarbell's Gallery of Computation. The artwork generated by Tarbell's code is complex yet often quite alluring. So many artists use computation as medium these days that this site is "nothing new", but I was struck by the beauty of the images produced.
I was also struck by the fact that all of the code for producing these works is available on-line:
I believe all code is dead unless executing within the computer. For this reason I distribute the source code of my programs in modifiable form to encourage life and spread love. Opening one's code is a beneficial practice for both the programmer and the community. I appreciate modifications and extensions of these algorithms. Please send me your experiences.
Eugene sez: Two thumbs up!
I've had one student do work of this sort. A few years ago, I had a student named David Schmudde. Dave is one of those guys who mixes technical skills and interests with artistic skills and interests, in both music and visual art. For his course project in Intelligent Systems, he created a program called Ardis. This program acts like a set of sophisticated Adobe Photoshop filters. It consists of a set of rules about the features of paintings done in certain artistic styles, such as German Expressionism. Given an image of any sort, it applies the rules of the styles selected by the user to the image in funky ways, as if to say "How would a German expressionist have made this picture?" In the case of German Expressionism, it finds lines that mark objects and exaggerates them. The program uses a bit of randomness in its filtering, which means that you can use Ardis to create a set of images all of a theme.
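Ardis's own code isn't reproduced here, but the flavor of a rule-plus-randomness filter can be sketched in a few lines of Python. The function name, parameters, and the particular edge rule below are all my invention for illustration, not Dave's actual rules:

```python
import random

def edge_exaggeration_filter(image, threshold=30, boost=2.0, jitter=0.2, seed=None):
    # Toy sketch of one Ardis-style rule: find strong luminance edges and
    # exaggerate them, with a dash of randomness so each run of the filter
    # yields a slightly different image in the same "style".
    # `image` is a list of rows of 0-255 grayscale values.
    rng = random.Random(seed)
    out = [row[:] for row in image]
    for y in range(len(image)):
        for x in range(1, len(image[y])):
            edge = image[y][x] - image[y][x - 1]
            if abs(edge) > threshold:  # the rule fires: this pixel marks an edge
                factor = boost * (1 + rng.uniform(-jitter, jitter))
                out[y][x] = max(0, min(255, int(image[y][x] + edge * factor)))
    return out
```

Seeding the random generator makes a run reproducible, while different seeds give the "set of images all of a theme" effect the entry describes.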
As his instructor, I was most impressed that Dave wrote almost all of the code that makes up Ardis. At the time, there wasn't all that much in the way of image processing packages in Java, so he went off and learned what he needed to implement and did it himself. The program isn't perfect or polished, not nearly as much so as Tarbell's work on-line, but it was a great result for a semester's work. Dave went on to do a master's degree in music and technology at Northwestern, which he just completed last spring. I'll have to dig out Ardis and see if I can't package it up for folks to play with and extend.
[ Update: I found an old pointer to a description of Ardis on line. Check out David's page http://www.davidshino.com/ardis.html for a bit about his program. ]
Sometimes, a computer scientist can produce a beautiful picture without intending to. One of my current M.S. students, Nathan Labelle, is working on a project involving power laws and open-source software. In the course of displaying a particular relationship among 100 randomly selected Linux packages, he produced the image to the right: a graph that appears to be a wonderful line drawing of a book whose pages are being riffled. I think it's quite beautiful.
I reached my last "round number" milestone of the year running this morning. The sixth mile of my track work-out was my 1900th mile of 2004. To celebrate, I burned it as fast as I could and ran the fastest mile of my life, 6:32. That number seems unreal to me... When I began training for my Chicago Marathon in the spring of 2003, I doubt I could have run a mile under 8 minutes. Practice and persistence pay off. So does the good fortune of staying healthy.
I'm at 1902 miles now. Tomorrow is New Year's Eve, so I'll get one more run in for the year, an easy 5-miler outside. I don't remember how many miles I ran last year, but I think it was in the 1500-1600 range. Don't expect me to increase my mileage by the same amount in 2005. :-) I may well increase a bit, but I'd like to continue to get faster. If I can make 8:00 miles feel like a walk in the park, I will be able to reach my next goal for the marathon, circa October of next year: 3 hours, 30 minutes. Wish me luck!
Update: I checked my log for 2003. My mileage that year was 1281.8. I'd forgotten that I didn't run much the first three months of that year, due to weaker habits and a persistent cold. You can be certain that I won't run 2500 miles next year!
Owen Astrachan pointed me in the direction of Jonathan Edwards's new blog. I first encountered Edwards's work at OOPSLA 2004, where he gave a talk in the Onward! track on his example-driven programming tool.
In his initial blog entries, Edwards introduces readers to the programming revolution in which he would like to participate, away from more powerful and more elegant languages and toward a programming for the Everyman, where we build tools and models that fit the way the real programmer's mind works. His Onward! talk demoed an initial attempt at a tool of this sort, in which the programmer gives examples of the desired computation and the examples become the program.
Edwards's current position stands in stark contrast to his earlier views as a more traditional researcher in programming languages. As many of us are coming to see, though, programming is "about learning to work effectively in the face of overwhelming complexity" more than it is about ever more clever programming languages and compiler tricks. When it comes to taming the complexity inherent in large software, "simplicity, flexibility, and usability are more effective than cleverness and elegance."
The recent trend toward agile development methodologies and programming support tools such as JUnit and Eclipse also draw their inspiration from a desire for simpler and more flexible programs. Most programmers -- and for Edwards, this includes even the brightest -- don't work very well with abstractions. We have to spend a lot of brain power managing the models we have of our software, models that range from execution on a standard von Neumann architecture up to the most abstract designs our languages will allow. Agile methods such as XP aim to keep programmers' minds on the concrete embodiment of a program, with a focus on building supple code that adapts to changes in our understanding of the problem as we code. Edwards even uses one of Kent Beck's old metaphors that is now fundamental to the agile mindset: Listen carefully to what our code is telling us.
But agile methods don't go quite as far as Edwards seems to encourage. They don't preclude the use of abstract language mechanisms such as closures or higher-order procedures, or the use of a language such as Haskell, with its "excessive mathematical abstraction". I can certainly use agile methods when programming in Lisp or Smalltalk or even Haskell, and in those languages closures and higher-order procedures and type inference would be natural linguistic constructs to use. I don't think that Edwards is saying such things are in and of themselves bad, but only that they are a symptom of a mindset prone to drowning programmers in the sort of abstractions that distract them from what they really need in order to address complexity. Abstraction is a siren to programmers, especially to us academic types, and one that is ultimately ill-suited as a universal tool for tackling complexity. Richard Gabriel told us that years ago in Patterns of Software (pdf).
I am sympathetic to Edwards's goals and rationale. And, while I may well be the sort of person he could recruit into the revolution, I'm still in the midst of my own evolution from language maven to tool maven. Oliver Steele coined those terms, as near as I can tell, in his article The IDE Divide. Like many academics, I've always been prone to learn yet another cool language rather than "go deep" with a tool like emacs or Eclipse. But then it's been a long time since slogging code was my full-time job, when using a relatively fixed base of language to construct a large body of software was my primary concern. I still love to learn a Scheme or a Haskell or a Ruby or a Groovy (or maybe Steele's own Laszlo) to see what new elegant ideas I can find there. Usually I then look to see how those ideas can inform my programming in the language where I do most of my work, these days Java, or in the courses where I do most of my work.
I don't know where I'll ultimately end up on the continuum between language and tool mavens, though I think the shift I've been undergoing for the last few years has taken me to an interesting place and I don't think I'm done yet. A year spent in the trenches might have a big effect on me.
As I read Edwards's stuff, and re-read Steele's, a few other thoughts struck me:
I retain a romantic belief in the potential of scientific revolution ... that there is a "Calculus of Programming" waiting to be discovered, which will ... revolutionize the way we program....
(The analogy is to the invention of the calculus, which revolutionized the discipline of physics.) I share this romantic view, though my thoughts have been with the idea of a pattern language of programs. This is a different sort of 'language' than Edwards means when he speaks of a calculus of programs, but both types of language would provide a new vocabulary for talking about -- and building -- software.
Copy & paste is ubiquitous, despite universal condemnation. ... I propose to decriminalize copy & paste, and even to elevate it into the central mechanism of programming.
Contrary to standard pedagogy, I tell my students that it's okay to copy and paste. Indeed, I encourage it -- so long as they take the time after "making it work" to make it right. This means refactoring to eliminate duplication, among other things. Some students find this to be heresy, or nearly so, which speaks to how well some of their previous programming instructors have drilled this wonderful little practice out of them. Others take to the notion quite nicely but, under the time pressures that school creates for them and that their own programming practices exacerbate, have a hard time devoting sufficient energy to the refactoring part of the process. The result is just what makes copy and paste so dangerous: a big ball of mud with all sorts of duplicated code.
Certainly, copy and paste is a central mechanism of doing the simplest thing that could possibly work. The agile methods generally suggest that we then look for ways to eliminate duplication. Perhaps Edwards would suggest that we look for ways to leave the new code as our next example.
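As a small, invented illustration of that cycle: first make it work by pasting, then make it right by refactoring the duplication into one place. Something like:

```python
# Step 1, "make it work": the second function was copied and pasted from
# the first, then tweaked.  The duplication is obvious -- and dangerous,
# because a future change must now be made in two places.
def total_owed(invoices):
    return sum(inv["amount"] for inv in invoices if not inv["paid"])

def total_paid(invoices):
    return sum(inv["amount"] for inv in invoices if inv["paid"])

# Step 2, "make it right": refactor to eliminate the duplication.  The
# shared shape is now expressed exactly once.
def total_where(invoices, paid):
    return sum(inv["amount"] for inv in invoices if inv["paid"] == paid)
```

The pasted versions are fine as a starting point; it's skipping the second step that leaves behind the big ball of mud.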
Back when I was a freshman architecture major, I saw more advanced students go out on charrette. This exercise had the class go on site, say, a road trip to a small town, to work as a group to design a solution to a problem facing the folks there, say, a new public activity center, under the supervision of their instructors, who were themselves licensed architects. Charrette was a way for students to gain experience working on a real problem for real clients, who might then use the solution in lieu of paying a professional firm for a solution that wasn't likely to be a whole lot better.
Software engineering courses often play a similar role in undergraduate computer science programs. But they usually miss out on a couple of features of a charrette, not the least of which is the context provided by going to the client site and immersing the team in the problem.
A software institute that worked like a teaching hospital could provide a more authentic experience for students and researchers exploring new ways to build software. Clients would come to the institute, rather than instructors drumming up projects that are often created (or simplified) for students. Clients would pay for the software and use it, meaning that the product would actually have to work and be usable by real people. Students would work with researchers and teachers -- who should be the same people! -- in a model more like apprenticeship than anything our typical courses can offer.
The Software Engineering Institute at Carnegie Mellon may have some programs that work like this, but it's an idea that is broader than the SEI's view of software engineering, one that could put our CS programs in much closer touch with the world of software than many are right now.
There seems to be a wealth of revolutionary projects swirling in the computer science world these days: test-driven development, agile software methods, Croquet and eToys .... That's not all that unusual, but perhaps unique to our time is the confluence of so many of these movements in the notion of simplicity, of pulling back from abstraction toward the more concrete expression of computational ideas. This more general trend is perhaps a clue for us all, and especially for educators. One irony of this "back to simplicity" trend is that it is predicated on increasingly complex tools such as Eclipse and Croquet, tools that manage complexity for us so that we can focus our limited powers on the things that matter most to us.
In two recent articles (here and here), I participated in a conversation with John Mitchell and Erik Meade about the role of speed in developing software development skills. While the three of us agreed and disagreed in some measure, we all seemed to agree that "getting faster" is a valuable part of the early learning process.
Now comes an article from cognitive psychologists that may help us understand better the role of pressure. My old cognitive psychology professor, Tom Carr, and one of his students, Sian Beilock, have written an article, "Why High Powered People Fail: Working Memory and 'Choking Under Pressure' in Math", to appear in the journal Psychological Science. This article reports that strong students may respond worse under the pressure of hard exams than less gifted students. This form of choking seems to result from pressure-induced impairment of working memory, which is one of the primary advantages that stronger students have over others. You can read a short article from the NY Times on the study, or a pre-print of the journal article for full details.
The new study is a continuation of Beilock's doctoral research, which seems to have drawn a lot of interest in the sports psychology world. An earlier study by Beilock and Carr, On the Fragility of Skilled Performance: What Governs Choking Under Pressure? from the Journal of Experimental Psychology: General supports the popular notion that conscious attention to internalized skills hurts performance. Apparently, putting pressure on a strong student increases their self-consciousness about their performance, thus leading to what we armchair quarterbacks call choking.
I'm going to study these articles to see what they might teach me about how not to evaluate students. I am somewhat notorious among my students for having difficult tests, in both content and length. I've always thought that the challenge of a tough exam was a good way for students to find out how far they've advanced their understanding -- especially the stronger students. But I see that I need to be careful that my exams not become tests primarily of handling pressure well and only secondarily of understanding course content.
By the way, Tom Carr was one of my all-time favorite profs. I enjoyed his graduate course in cognitive psychology as much as any course I ever took. He taught me a lot about the human mind, but more importantly about how a scientist goes about the business of trying to understand it better. I still have the short papers I wrote for that course on disk (the virtues of plaintext and nroff!). I was glad to see some of his new work.
Last summer, I read E.L. Doctorow's dandy little essay book, Reporting the Universe. The book is a memoir of Doctorow's life as a writer, from heading off to Kenyon College to his approach to writing novels and academic prose. However, it opens with an essay on Ralph Waldo Emerson that I liked so much I had to jot down lots of great quotes. While cleaning up some files today, I ran across this selection:
Emerson's idea of the writer goes right to the heart of the American metaphysic. He is saying we don't have the answers yet. It is a pragmatic thing to say. He knows he is at a culminant point in literary history, where the right of authorship has devolved from gods and their prophets and their priests to everyone. ... A true democracy endows itself with a multiplicity of voices and assures the creation of a self-revising consensual reality that inches forward over the generations to a dream of truth.
Maybe I'm sensitized by recent rumination, but that sounds very much like the state of the world these days, with the explosion of the blogosphere as a universal medium for publishing. The right of authorship has devolved from a select few to a much larger population, and the rich interactions among blogs and authors fosters a consensual reality of sorts.
Brian Marick has written a bit about Emerson and how his pragmatic epistemology seems to be in sync with agile practices. I certainly recommend Doctorow's essay to interested readers. If you like to read what writers have to say about writing, as I do, then I can also recommend the entire book.
From Tall, Dark, and Mysterious' professional self-assessment, here's a wonderful nugget:
My Master's thesis was in many ways the serendipitous culmination of three years of near-paralyzing apathy on my part for the academic path I had chosen for myself: a Maple program that may have the worst runtime of any program ever written (O(4^n*n^8n) - it crashed at n=5), thirty-five pages of painstakingly-formatted LaTeX, and a competent but tepid distillation of a subject that fully twenty people in the world give half a crap about.
For you students of algorithms: O(4^n * n^(8n)). Of course, that works out to n^Θ(n), but still -- ouch! That's some serious computational complexity, my friends.
For you graduate students: Don't be too disturbed. I think that many people feel this way about their research when they get done, especially those in the more abstract disciplines. Research in mathematics and theoretical CS is especially prone to such sentiments. This feeling says as much about how academia works and the state of knowledge these days as it does about any particular research. The key is to sustain your passion for your research area in the face of the external demands your advisor and institution place on you. Tall, Dark, and Mysterious seems to have lost hers long before finishing.
I know the intellectual fraud feeling she describes, too. During the semester, classes and conference travel and conference committees and administrative duties combine to leave me much less time to do computer science than I want. But the feeling passes when I remind myself that I'm doing CS in the small all the time; it's just the sustained periods of doing CS that I miss out on. That's what makes summers and those too-infrequent research leaves so valuable.
If you like to think about teaching, especially at the university level, you should read more of Tall, Dark, and Mysterious. She's a reflective teacher and an engaging writer. Her recent post I like most of my students tickled my fancy. I only wish I'd written such a post first myself. That may be a great topic for grid blogging among university profs...
Education blogger Jenny D. calls for a language of the practice of educators:
We have to start naming the things we do, with real names that mean the same thing to all practitioners.
My intent in what I do is to move the practice of teaching into an environment that resembles medicine. Where practices and procedures are shared, and the language to discuss them is also shared. Where all practices and procedures can and will be tailored to meet specific situations, even though practitioners work hard to maintain high standards and transparency in what they do.
This sounds kind of silly, I know. But what we've got now is a hodgepodge of ideas and practices, teachers working in isolation in individual classrooms, and very little way to begin to straighten out and share the best practices and language to describe it.
This doesn't sound silly to me. When I first started as a university professor, I was surprised at the extent to which educators tend to work solo. There's plenty of technical vocabulary for the content of my teaching but little or no vocabulary for the teaching itself. The education establishment seems quite content that educators, their classrooms, and their students are all so different that we cannot develop a standard way to describe or develop what we do in a classroom. I think that much of this reluctance stems from the history of teacher education, but I also suspect that now there is an element of political agenda involved.
Part of what I've done in my years as a faculty member is to work with others to develop the vocabulary we need to talk about teaching computer science. On the technical side, I have long been interested in documenting the elementary patterns that novices must learn to become effective programmers. Elementary patterns give students a vocabulary for talking about the programs they are writing, and a process for attacking problems. But they also give instructors a vocabulary for talking about the content of their courses.
On the teaching side, I have been involved in the Pedagogical Patterns community, which aims to draw on the patterns ideas of architect Christopher Alexander to document best practices in the practice of teaching and learning. Patterns such as Active Student, Spiral, and Differentiated Feedback capture those nuggets of practice that good teachers use all the time, the approaches that separate effective instruction from hit-or-miss time in the classroom. Relating these patterns in a pattern language builds a rich context around the patterns and gives instructors a way to think about designing their instruction and implementing their ideas.
Pedagogical patterns are the sort of thing that Jenny D. is looking for in educational practice more generally.
In a follow-up post, she extends her analysis of how professionals in other disciplines practice by applying skills to implement protocols. You'll notice in these blog entries the good use of metaphor I blogged about last time. Jenny uses the practices in other professions to raise the question, "Why don't educators have a vocabulary for discussing what they do?" She hasn't suggested that the vocabulary of educators should be like medicine's vocabulary or any other, at least not yet.
I wish her luck in convincing other educators of the critical value of her research program. Every teacher and every student can benefit from laying a more scientific foundation for the everyday practice of education.
Martin Fowler has a dandy blog entry on the good and the bad of playing the "metaphor game" with software development. This was the take-home point for me:
... it all comes down to how you use the metaphor. Comparing to another activity is useful if it helps you formulate questions; it's dangerous when you use it to justify answers.
I've always had an uneasy relationship with metaphors for software development, though I'm a sucker for the new ones I run across. But what do I gain from the metaphors, and how do I avoid being burned by them? Martin's article captures it.
Maybe this is something I should have known all along. But I didn't. Thanks, Martin.
Earlier, I blogged about Alan Kay's talks at OOPSLA 2004. Now, courtesy of John Maxwell, a historian at the University of British Columbia, I can offer a transcript of Alan's Turing Award lecture, as plain text and rich text. Enjoy!
Update: And now, thanks to Darius Bacon, we have a lightweight HTML version.
I often read about how blogging will change the world in some significant way. For example, some folks claim that blogs will revolutionize journalism, creating an alternative medium that empowers the masses and devalues the money-driven media through which most of the world sees its world. Certainly, the blogosphere offers a remarkable level of distribution and immediacy of feedback; see the Chronicle of Higher Education's Scholars Who Blog, which chronicles this phenomenon.
As I mentioned last time, I'm not often a great judge of the effect a new idea like blogging will have on the future. I'm skeptical of claims of revolutionary effect, if only because I respect the Power Law. But occasionally I get a glimpse of how blogging is changing the world in small ways, and I have a sense that something non-trivial is going on.
I had one such glimpse this morning, when I took to reading a blog written by one of my students. First of all, that seems like a big change in the academic order: a student publishes his thoughts on a regular basis, and his professors can read them. Chuck's blog is a mostly personal take on life, but he is the kind of guy who experiences his academic life deeply, too, so academics show up on occasion. It's good for teachers to be reminded that students can and sometimes do think deeply about what they do in class.
Second change: apparently he reads my blog, too. Students read plenty of formal material that their instructors write, but blogs open a new door on the instructor's mind. My blog isn't one of those confessional, LiveJournal-style diaries, but I do blog less formally and about less formal thoughts than I ordinarily write in academic material. Besides, a student reading my blog gets to see that I have interests beyond computer science, and even a little whimsy. It's good for students to be reminded occasionally that teachers are people, too.
Third, and this is what struck me most forcefully while reading this morning, these blogs make possible a new channel of learning for both students and teachers. Chuck blogged at some length about a program that he wrote for a data structures assignment. In the course of thinking through the merits of his implementation relative to another student's, he had an epiphany about how to write more efficient multi-way selection statements -- and "noticed that no one is trying particularly hard to teach me" about writing efficient code.
This sort of discovery happens rarely enough for students, and when it does happen it's likely to evanesce for lack of opportunity to take root in a conversation. Yet here I am privy to this discovery, six weeks after it happened. It would have been nice to talk about it when it happened, but I wasn't there. But through the blog I was able to respond to some of the points in the entry by e-mail. That I can have this peek into a student's mind (in this case, one of my own students') and maybe carry on a conversation about an idea of importance to both of us -- that is a remarkable consequence of the blogosphere.
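Chuck's entry doesn't say exactly what he discovered, but one classic efficiency win with multi-way selection is replacing a chain of comparisons with a table lookup. A hypothetical sketch in Python (the operation names and dispatch table are invented for illustration):

```python
# Chained multi-way selection: each call may test every branch in turn,
# so the cost grows with the number of cases.
def command_chain(op, x, y):
    if op == "add":
        return x + y
    elif op == "sub":
        return x - y
    elif op == "mul":
        return x * y
    else:
        raise ValueError(op)

# Table-driven selection: one dictionary lookup, no matter how many
# cases there are -- and adding a case means adding a table entry,
# not another branch.
DISPATCH = {"add": lambda x, y: x + y,
            "sub": lambda x, y: x - y,
            "mul": lambda x, y: x * y}

def command_table(op, x, y):
    try:
        return DISPATCH[op](x, y)
    except KeyError:
        raise ValueError(op)
```

The two functions behave identically; the difference is in how the work of selecting a case scales as cases are added.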
I'm old enough to remember when Usenet newsgroups were the place to be. Indeed, I have a token prize from our Linux users group commemorating my claim to the oldest Google-archived Usenet post among our local crew. New communities, academic and personal, grew up in the Usenet news culture. (I still participate in an e-mail community spun off from rec.sport.basketball, and we gather once a year in person to watch NCAA tournament games.) So the ability of the Internet to support community building long predates the blog. But the culture of blogging -- personal, frequent posts sharing ideas on any topic; comments and trackbacks; the weaving of individual writers into a complex network of publication -- adds something new. And those personal reflections sometimes evolve into something more over the course of an entry, as in Chuck's programming reflection example.
I do hope that there isn't some sort of Heisenberg thing going on here, though. I'd hate to think that students would be less willing to write honestly if they know their professors might be reading. (Feeling some pressure to write fairly and thoughtfully is okay. The world doesn't need any more whiny ranting blogs.) I know that, when I blog, at the back of my mind is the thought that my students might read what I'm about to say. So far, I haven't exercised any prior restraint on myself, at least any more than any thoughtful writer must exercise. But students are in a different position in the power structure than I am, so they may feel differently.
Some people may worry about the fact that blogs lower or erase barriers of formality between students and professors, but I think they can help us get back to the sort of education that a university should offer -- a Church of Reason, to quote Robert Pirsig:
[The real University is] a state of mind which is regenerated throughout the centuries by a body of people who traditionally carry the title of professor, but even that title is not part of the real University. The real University is nothing less than the continuing body of reason itself.
The university is to be a place where ideas are created, evaluated, and applied, a place of dialogue among students and teachers. If the blogosphere becomes a place where such dialogue can occur with less friction -- and where others outside the walls of the church building itself can also join in the conversation -- then the blogosphere may become a very powerful medium in our world after all. Maybe even revolutionary.
I added "google" to my OS X spell-checker's dictionary yesterday morning. I'm surprised that it's taken me this long. I'm also reminded of a couple of cool Google services I've been playing with of late.
Google's on-line sample shows Google Sets finding the names of many automobile manufacturers from an initial set of three. Of course, the quality of the sets it can find depends on the existence of web pages containing your terms with rich connections to one another. For example, when I typed in the names of several chess players, including "Bobby Fischer", which gives nearly 800,000 matches, Google couldn't find a set for me.
To be honest, I'm not sure how much I'll use these services after my initial playing phase. I've never been a big fan of auto-completion, except when I request it explicitly through, say, emacs's tab key. I've read that the dynamic HTML implementation beneath the hood of Suggest is a valuable attempt to extend the diversity and quality of web app interfaces, but that's outside my domain of expertise.
Indeed, I'm not certain that these particular services will be the ultimate wins that arise from the techniques used. For example, Joel Spolsky says this about Google Suggest:
It's important not for searching, but because it's going to teach web users to expect highly responsive user interfaces.
My thoughts about the things you find at Google Labs were focused more on Google and its vitality as a leading corporation. The idea is that Google can use its massive databases and computing power to gain leverage beyond traditional web search.
I'm not much of a visionary when it comes to predicting what emerging goods, services, and technologies will win big in the future. If I were, I could do better than a professor's salary! But services like Suggest and Sets and Scholar are innovative ways for Google to explore the horizon of the services it offers, and ultimately to push the boundaries of its technology -- and the boundaries of what we can do as users.
While my students are taking their object-oriented programming final exam, I'm listening to Kent Beck's talk on developer testing at the recent Developer Testing Forum held at PARC. The big theme of the talk is that developer testing is a way that an individual programmer can take control of his own accountability -- whether or not he adopts other agile practices, whether or not anyone else in his organization goes agile. Kent's been talking a lot about accountability lately, and I think it captures one of those very human values that underlie the agile methods, one of the non-techie reasons that I am drawn to the agile community.
My favorite idea in the talk is one that Kent introduces right away: the distinction between quality (an instantaneous measure of a system's goodness) and health (a measure over time). I'm not sure I like the use of "quality" to mean an instantaneous measure, but I love the distinction. Many developers, including students, mistake "there are no bugs in my code" for "this is good code". In one sense, I suppose this is true. The code runs. It performs as desired. That is a Good Thing.
But in another sense the implication is just wrong. Can people extend the program? Change it? Use it? Port it? Kent turns to his health metaphor to explain. In my words: A person may have a good heart rate and normal blood pressure, but if he can't walk around the block without keeling over then he's probably not all that healthy.
Refactoring mercilessly is a practice that recognizes the importance of the distinction between quality and health. Just because my code passes all the tests does not mean that the code is healthy. At least it's not as healthy as it can be. Refactoring is an exercise regimen for my system. It seeks to improve the long-term health of my program by drawing on its strength at the moment.
Rigorous developer testing also recognizes the importance of this distinction. Having tests means that I can extend and change my code -- work akin to walking around the block -- with some confidence that I won't break the system. And, if I do, the system helps me recover from any errors I introduce. The tests are the immune system of my program!
I really like this "system health" metaphor and think it extends quite nicely to many of the principles and practices of the agile methods. Continuous feedback and sustainable pace spring to mind immediately. Others, such as continuous integration and pair programming, require some thought. That will make for a future blog entry!
Oh, and one other thing I liked from the talk... In the Q-n-A session after the talk, someone asked Kent how we know that our tests are correct. They are, after all, software, too. Kent said that you can't ever know for certain, but you can be more or less confident, depending on how thoroughly you test and refactor. It's like cross-checking sums in arithmetic or accounting. Over the long run, the chance that you make offsetting mistakes in both the code and the tests is vanishingly small. He made an analogy to mathematical proof, saying something to this effect:
There's no such thing as proof in software. Proof of correctness isn't proof of correctness; it's proof of equivalence. "Here is one expression of what I'm trying to compute, and here's another expression of what I'm trying to compute, and they match." That's what you do with a proof of correctness. Tests and code are the same way. You're saying these two expressions are equivalent in some sense. ... That means my confidence in the answer is much, much higher.
Well said. Scientific reasoning, even in the artificial world of mathematics, is about confidence, not certainty; evidence, not proof. We should not expect more of the messy, human enterprise of building software.
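Kent's "two expressions" idea is easy to see in code. Here is a minimal sketch of my own (not from the talk): a closed-form formula cross-checked against an independently written brute-force expression of the same computation, much like cross-checking sums in accounting. Neither expression is proven correct; we only gain confidence that they are equivalent.

```java
// A test doesn't prove sumTo correct; it shows that two independent
// expressions of "sum of 1..n" agree on every case we check.
public class Equivalence {
    // Expression 1: closed form.
    static int sumTo(int n) {
        return n * (n + 1) / 2;
    }

    // Expression 2: brute force, written independently.
    static int sumToBrute(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 100; n++) {
            if (sumTo(n) != sumToBrute(n))
                throw new AssertionError("mismatch at n = " + n);
        }
        System.out.println("all cases agree");
    }
}
```

The offsetting-mistakes point shows up here, too: for both expressions to be wrong and still agree on all 101 cases would take a remarkable coincidence.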
Brian Marick's latest post discusses the latest developments in his ongoing exploration of agile methods and the philosophy of science. As always, there's plenty of food for thought there. In particular, he links to an article on behavioral economics that reminds us that people aren't rational in the classical sense.
Most people are more strongly affected in their decision-making by vivid examples than by abstract information, no matter how much more accurate the abstract information is.
This reminds me of my previous post. Most often, a good example will help students grok an idea better than any abstractions I give them. The abstractions will work better for them after they have a strong foundation of experiences and examples.
For most people, the possibility of a loss greatly outweighs the chance of a win. "People really discriminate sharply between gaining and losing and they don't like losing." ....
I think this principle accounts for why students will work from first principles to solve a problem rather than use a more abstract idea they aren't yet comfortable with. They perceive that the risk of failure is smaller by working from small ideas they understand than by working from a bigger idea they don't. I need to think more about risk when I teach.
For most people, first impressions play a remarkably strong role in shaping subsequent judgments.
This reminds me not only of my most recent post but even more so of something Alan Kay said in his OOPSLA Educators Symposium keynote:
Kay reminded us that what students learn first will have a huge effect on what they think, on how they think about computing. He likened the effect to that of duck imprinting, the process in which ducklings latch onto whatever object interacts with them in the first two days of their lives -- even if the object is a human. Teaching university students as we do today, we imprint in them that computing is about arcane syntax, data types, tricky little algorithms, and endless hours spent in front of a text editor and compiler. It's a wonder that anyone wants to learn computing.
The first three quotes above are drawn from the work of psychologist Daniel Kahneman, whose work I first encountered in one of my favorite grad school courses, Cognitive Psychology. Kahneman won a Nobel Prize in Economics for his work with colleague Amos Tversky that showed how humans reason in the face of uncertainty and risk. This work has tremendous implications for the field of artificial intelligence, where my first computing passions resided, but also for how we teach.
As I write final exams and begin to think ahead to next semester, I've been thinking about how I teach programming and software development. Sometimes, I get so busy with all of the extras that can make programming interesting and challenging and more "real-world" -- objects and design and refactoring and GUIs and unit tests and frameworks and ... -- that I lose sight of the fact that my students are just trying to learn to write programs. When the abstractions and extra practices get in the way of learning, they have become counterproductive.
I'd like to streamline my approach to programming courses a bit. First, I'll make some choices about which extras are more distraction than contribution, and eliminate them. Second, I'll make a conscious effort to introduce abstractions when students can best appreciate them: after having concrete experience with problems and techniques for solving them.
My colleagues and students need not fear that I am going back to the Dark Ages with my teaching style. My friend Stuart Reges (more information at his old web page) isn't going quite that far, but he is in the process of redesigning his introductory courses on the model of his mid-1980s approach. He seems to be motivated by similar feelings, that many of the ideas we've added to our intro courses in the last 20 years have gotten in the way of teaching programming. Where Stuart and I differ is that I don't think there is anything more "fundamental" about what we did in Pascal in the '80s than what we should do with objects and messages in, say, Java. The vocabulary and mindset are simply different. We just haven't quite figured out the best way to teach programming in the new mindset.
I wish Stuart well in his course design and hope to learn again from what he does. But I want to find the right balance between the old stuff -- what my colleagues call "just writing programs" -- and the important new ideas that can make us better programmers. For me, the first step is a renewed commitment to having students write programs to solve problems before having them talk about writing programs.
This train of thought was set off by a quote I read over at 43 Folders. The article is about "hacking your way out of writer's block" but, as with much advice about writing, it applies at some level to programming. After offering a few gimmicks, the writer says:
On the other hand, remember Laurence Olivier.
One day on the set of Marathon Man, Dustin Hoffman showed up looking like shit. Totally exhausted and practically delirious. Asked what the problem was, Hoffman said that at this point in the movie, his character will have been awake for 24 hours, so he wanted to make sure that he had been, too. Laurence Olivier shook his head and said, "Oh, Dusty, why don't you just try acting?"
I don't want all the extras that I care about -- and that I want students to care about -- to be affectations. I don't want to be Dustin Hoffman to Stuart's Olivier. Actually that's not quite true... I admire Hoffman's work, have always thought him to be among the handful of best actors in Hollywood. I just don't want my ideas to become merely affectations, to be distractions to students as they learn the challenging and wonderful skill of programming. If I am to be a method teacher, I want the method to contribute to the goal. Ultimately, I want to be as comfortable in what I'm doing in the classroom as Hoffman is with his approach to acting. (Though who could blame him if he felt a little less cocksure when chided by the great Olivier that day?)
The bottom line for now is that there are times when my students and I should just write programs. Programs that solve real problems. Programs that work. We can worry about the abstract stuff after.
I accomplished one short-term goal yesterday morning: I PRed my final race of the year, the Snow Shuffle 5K. My previous best had been a few seconds over 22 minutes, and I was aiming to break 21:42, a 7:00/mile pace. Despite being bundled up on a chilly day, I ran a 21:25. Hurray!
After my marathon last year, I found that this is a great time to go for a personal record in a shorter race. Marathon training helps you build up aerobic fitness and plenty of muscle, and that's a great base for anaerobic training. So, since the Des Moines Marathon, I've been working on my short speed. It paid off. The good news is, I think I can run faster... The weather and clothing certainly weren't ideal for racing, and I even ran the first mile too fast. So watch out for me next April -- I'm going to try to do myself one better!
This was my first race in the 40-49 age group (ack!), and my PR was good for only fifth place in the category. In my neighborhood, the 40-49s are fast. So I have plenty of goals to shoot for.
I went out this morning for an easy 12-miler, but a weather front is moving through the region right now and it threw some serious gusts of wind my way. The most remarkable one came seemingly out of nowhere at what must have been 40-45mph, caught me headlong in mid-stride, and nearly knocked me on my behind. I've never felt such a belt! The universe is reminding me to be humble, I guess. A good thing to be reminded of every now and then.
Recently, jaybaz made an interesting analogy between garbage collection and, well, garbage collection:
What if Garbage Collection was like Garbage Collection?
Every Thursday morning at 6:00am, the garbage truck stops in front of your house. A scruffy man in an orange jumpsuit steps down, walks up to your front door, and lets himself in.
He walks around the house, picking up each item you own, and asks, "Are you still using this?" If you don't say "yes", he carts it away.
What if we turned the analogy around? Be sure to check out the reader comments to his article.
When I teach design patterns, I like to make analogies to real-world examples, like television remote controls (iterator) and various kinds of adapter. I think a set of analogies for all the garbage collection techniques we use in programming language implementation would make a fun teaching tool.
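As a down payment on that teaching tool, here is a toy sketch of my own of jaybaz's analogy in code: a "garbage man" who starts from what you are still holding (the roots), follows what those things refer to, and carts away everything else. This is mark-and-sweep in miniature, with household objects standing in for heap objects.

```java
import java.util.*;

// Toy mark-and-sweep: the "garbage man" keeps everything reachable
// from the roots and carts away the rest.
public class ToyMarkSweep {
    // The "house": each object names the objects it refers to.
    static Map<String, List<String>> heap = new HashMap<>();

    // Mark phase: walk outward from the roots, noting every live object.
    static Set<String> mark(List<String> roots) {
        Set<String> live = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>(roots);
        while (!todo.isEmpty()) {
            String obj = todo.pop();
            if (live.add(obj))
                todo.addAll(heap.getOrDefault(obj, List.of()));
        }
        return live;
    }

    // Sweep phase: anything not marked goes on the truck.
    static void sweep(Set<String> live) {
        heap.keySet().retainAll(live);
    }

    public static void main(String[] args) {
        heap.put("couch", List.of("cushion"));
        heap.put("cushion", List.of());
        heap.put("oldTv", List.of("remote"));  // nobody uses these anymore
        heap.put("remote", List.of());
        sweep(mark(List.of("couch")));
        System.out.println(heap.keySet());  // only couch and cushion survive
    }
}
```

The analogy even suggests where it breaks down: a real garbage man asks "are you still using this?", while a collector never asks -- it infers the answer from reachability alone.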
I have a couple of things on my blogging radar, but this quote from Nat Pryce just launched a new train of thought. Maybe that's because I gave my last lecture of the semester an hour and a half ago (hurray!) and am in an introspective mood about teaching now that the pressure is off for a while.
One of the many interesting topics that Dave [Snowden] touched on [in his talks at XP Day 4] was how people naturally turn to raw, personal stories rather than collated research when they want to learn how to solve a problem. Furthermore, people prefer negative stories that show what to avoid, rather than positive stories that describe best or good practices.
This "struck a chord" with Nat, whose blog is called Mistaeks I Hav Made [sic]. It also struck a chord with me because I spend a lot of my time trying to help students learn to design programs and algorithms.
When I teach patterns, whether of design or code, I often try to motivate the pattern with an example that helps students see why the pattern matters. The decorator pattern is cool and all, but until you feel what not using it is like in some situations, it's sometimes hard to see the point. I think of this teaching approach as "failure driven", as the state of mind for learning the new idea arises out of the failure of using the tools already available. It's especially useful for teaching patterns, whose descriptions typically include the forces that make the pattern a Good Thing in some context. Even when the pattern description doesn't include a motivating example, the forces give you clues on how to design one.
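To make the decorator case concrete, here is the kind of minimal example I might show in class (my own illustration, not from Nat's post). Without the pattern, each combination of behaviors needs its own subclass -- BoldText, ItalicText, BoldItalicText, and so on, a combinatorial explosion students feel quickly. With it, behaviors compose by wrapping.

```java
// Minimal decorator sketch: behaviors compose at runtime by wrapping,
// instead of one subclass per combination of behaviors.
interface Text {
    String render();
}

class PlainText implements Text {
    private final String s;
    PlainText(String s) { this.s = s; }
    public String render() { return s; }
}

// Each decorator holds a Text and adds one behavior around it.
class Bold implements Text {
    private final Text inner;
    Bold(Text inner) { this.inner = inner; }
    public String render() { return "<b>" + inner.render() + "</b>"; }
}

class Italic implements Text {
    private final Text inner;
    Italic(Text inner) { this.inner = inner; }
    public String render() { return "<i>" + inner.render() + "</i>"; }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        Text t = new Bold(new Italic(new PlainText("hello")));
        System.out.println(t.render());  // <b><i>hello</i></b>
    }
}
```

The failure-driven move is to have students build the subclass explosion first; the wrapping version then arrives as relief rather than as ceremony.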
I taught the standard junior/senior-level algorithms course for the first time in a long, long time this semester, and I tried to bring a patterns perspective to algorithm design. This was helped in no small part by discussions with David Ginat, who is interested in documenting algorithm patterns. (One of my favorites of his is the Sliding Delta pattern.) But I think some of the most effective work in the algorithms class this semester -- and also some of the most unstructured -- was done by giving the students a game to play or a puzzle to solve. After they tried to figure out the nugget at the heart of the winning strategy, we'd discuss. We had lots of failures and ultimately, I think, some success at seeing invariants and finding efficient solutions. The students were sometimes bummed by their inability to see solutions immediately, but I assured them that the experience of trying and failing was what would give rise to the ability to solve problems.
(I still have more work to do finding the patterns in the algorithms we designed this semester, and then writing them up. Busy, busy!)
Negative stories -- especially the student's own stories -- do seem to be the primary catalyst for learning. But I have to be careful to throw in some opportunities to create positive stories, too, or students can become discouraged. The "after" picture of the pattern isn't enough; they need to face a new problem and feel good because they can solve it. I suppose that one of the big challenges we teachers face is striking the right balance between the failures that drive learners forward and the successes that keep them wanting to be on the road.
I've been reading about XML a bit lately, and I can't help but be reminded of a wonderful T-shirt I saw Philip Wadler wearing at the 2002 International Conference on Functional Programming. I know that this is an oldie, but it's still a goodie.
(LAMBDA (X) (* 2 X)) (McCarthy)
<?xml version="1.0"?> (W3C)
<LAMBDA-TERM>
  <VAR-LIST>
    <VAR> X </VAR>
  </VAR-LIST>
  <EXPR>
    <APPLICATION>
      <EXPR>
        <CONST> * </CONST>
      </EXPR>
      <ARGUMENT-LIST>
        <EXPR>
          <CONST> 2 </CONST>
        </EXPR>
        <EXPR>
          <VAR> X </VAR>
        </EXPR>
      </ARGUMENT-LIST>
    </APPLICATION>
  </EXPR>
</LAMBDA-TERM>
Philip has a PDF version on line.
We are in the last week of classes here. For the last couple of weeks, I've noticed a group of students who have started working out in the mornings. If they are working out as a part of a lifestyle change, I applaud them! However, from the timing of the new workouts, I suspect that these folks are trying to get ready for an upcoming final exam in their physical fitness classes.
Folks, here's a small hint from someone who's been there: You can't cram for physical fitness. Your bodies don't work that way. Slow and steady wins this race.
Our brains don't work that way, either, though in this season of cramming for final exams you wouldn't think that anyone knows it. Sadly, for many academic courses, cramming seems to work in the short term. If a final exam emphasizes facts and definitions, then you may be able to study them all in a massive all-nighter and remember them long enough to get through the exam. But the best-case scenario is that you do well enough on the exam, only in a year or so to find that you have not mastered any of the ideas or skills from the course. For a CS professor, there are few things sadder than encountering a senior, about to graduate, who did well enough in all his courses but who can't seem to program or drive a command line.
Learning, like training, is an enterprise best done over time. Slow and steady wins the race.
Yesterday, someone reminded me of my friend Rich Pattis's Quotations for Learning Programming. I jumped, randomly as near as I can tell, to the Os and had not scanned too far down the page when I saw this quote:
As a rule, software systems do not work well until they have been used, and have failed repeatedly, in real applications.
- D. Parnas
Of course, David Parnas is a famous computer scientist, well known for his work on modularity and software design. Many give him credit for best explaining encapsulation as a design technique. He is revered as an icon of traditional software engineering.
Yet, when I read this quote, I couldn't help but think, "Maybe Parnas is a closet agile developer." :-) Frequent readers may recall that I do this occasionally... See my discussion of Dijkstra as a test-driven developer.
Whether Parnas is sympathetic to the methodological side of agile software development or not, this quote points out a truth of software development that is central to agile approaches: There is benefit in getting to working code sooner. The motive usually given for this is so that developers can learn about the problem and the domain from writing the code. But perhaps a secondary value is that it can begin to fail sooner -- but in the lab, under test-driven conditions, where developers can learn from those *failures* and begin to fix them sooner.
I have to be careful not to misinterpret a person's position by projecting my own views onto a single quote taken out of context. For example, I can surely find a Parnas quote that could easily be interpreted as condemning agile software methodologies (in particular, I'm guessing, their emphasis on not doing big design up front).
I don't worry about such contradictions; I prefer to use them as learning opportunities. When I see a quote like the one above, I like to take a moment to think... Under what conditions can this be right? When does it fail, or not hold? What can I learn from this idea that will make me a better programmer or developer? I don't think I'd ever consciously considered the idea that continuous feedback in XP might consist of the same sort of failure that occurs when we deploy a live system. Parnas's quote has me thinking about it now, and I think that I may learn something as a result.
More subconsciously agile quotes?
As I scanned Rich's list in my agile state of mind, a few others caught my eye...
I have made this letter longer than usual, only because I have not had the time to make it shorter.
The fast approach to software development: Ready, fire, aim.
The slow approach to software development: Ready, aim, aim, aim, aim ...
Microsoft, where quality is job 1.1.
We are what we repeatedly do.
Excellence, then, is not an act, but a habit.
This quote isn't a statement about agile practice, or even more generally about pragmatic practice. It is a hallmark of the reflective professional. It's also a pretty good guide for living life.
Last evening, I commented on the idea of speed training for software developers, raised in Erik Meade's blog. John Mitchell also commented on this idea. Check out what John has to say. I think he makes a useful distinction between pace and rhythm. You'll hear lots of folks these days talk about rhythm in software development; much of the value of test-driven development and refactoring lies in the productive rhythm they support. John points out that speed work isn't a good idea for developers, because that sort of stress doesn't work in the same way that physical stress works on the muscles. He leaves open the value of intensity in learning situations, more like speed play, which I think is where the idea of software fartleks can be most valuable.
Be sure to check out the comments to John's article, too. There, he and Erik hash out the differences between their positions. Seeing that comment thread makes me want to add comments to my blog again!
In making analogies between software development and running, I've occasionally commented on sustainable pace, the notion from XP that teams should work at a pace they can sustain over time rather than at breakneck paces that lead to bad software and burn-out. In one entry, I discuss the value of continuous feedback in monitoring pace. In another, I describe what can happen when one doesn't maintain a sustainable pace, in the short term and over the longer term.
Not unexpectedly, I'm not alone in this analogy. Erik Meade recently blogged on sustainable pace and business practice. I was initially drawn to his article by its title reference to increasing your sustainable pace via the fartlek. While I liked his use of the analogy to comment on standard business practice, I was surprised that he didn't delve deeper into the title's idea.
Fartlek is Swedish for "speed play" and refers to an unstructured way for increasing one's speed while running: occasionally speed up and run faster for a while, then slow down and recover. We can contrast this approach to more structured speed work-outs such as Yasso 800s, which specify speeds and durations for fast intervals and recoveries. In a fartlek, one simply has fun speeding up and slowing down. This works especially well when working out with a friend or friends, because partners can take turns choosing distances and speeds and rewards. In the case of both fartleks and structured interval training, though, the idea is the same: By running faster, you can train your body to run faster better.
Can this work for software development? Can we train ourselves to develop software faster better?
It is certainly the case that we can learn to work faster with practice when at the stage of internalizing knowledge. I encourage students to work on their speed in in-class exercises, as a way to prepare for the time constraints of exams. If you are in the habit of working leisurely on every programming task you face, then an exam of ten problems in seventy-five minutes can seem like a marathon. By practicing -- solving lots of problems, and trying to solve them quickly -- students can improve their speed. This works because the practice helps them to internalize facts and skills. You don't want forever to be in the position of having to look up in the Java documentation whether Vectors respond to length() or size().
I sometimes wonder whether working faster actually helps students get faster or not, but even if it doesn't I am certain that it helps them assess how well they've internalized basic facts and standard tasks.
But fartleks for a software development team? Again, working on speed may well help teams that are at the beginning of their learning curves: learning to pair program, learning to program test-first, learning to use JUnit, ... All benefit from lots of practice, and I do believe that trying to work efficiently, rather than lollygagging as if time were free, is a great way to internalize knowledge and practice. I see the results in the teams that make up my agile software development course this semester. The teams that worked with the intention of getting better, of attempting to master agile practices in good faith, became more skilled developers. The teams that treated project assignments as mostly a hurdle to surmount still struggle with tools and practices. But how much could speed work have helped them?
The bigger question in my mind involves mature development teams. Will occasional speed workouts, whether from deadline pressure on live jobs or on contrived exercises in the studio, help a team perform faster the next time they face time pressure? I'm unsure. I'd love to hear what you think.
If it does work, we agile developers have a not-so-secret advantage... Pair programming is like always training with a friend!
When a runner prefers to run alone rather than with others, she can still do a structured work-out (your stopwatch is your friend) or even run fartleks. Running alone leaves the whole motivational burden on the solo runner's shoulders, but the self-motivated can succeed. I have run nearly every training run the last two years alone, more out of circumstance than preference. (I run early in the morning and, at least when beginning, was unsuitable as a training partner for just about any runner I knew.) I can remember only two group runs: a 7-miler with two faster friends about two weeks before my first marathon last fall, and an 8-mile track workout the day before Thanksgiving two weeks ago. As for the latter, I *know* that I trained better and harder with a friend at my side, because I lacked the mental drive to go all out alone that morning.
Now I'm wondering about how pair programming might play this motivational role sometimes when writing software. But that blog entry will have to wait until another day.
This is why I love the blogosphere so much. Somehow, I stumble across a link to Leonardo, an open-source blogging and wiki engine written in Python. I follow the link and start reading the blog of Leonardo's author, James Tauber. It's a well-written and thoughtful set of articles on an interesting mix of topics, including Python, extreme programming, mathematics, linguistics, New Testament Greek, music theory and composition, record producing and engineering, filmmaking, and general relativity. For example, my reading there has taught me some of the mathematics that underlie recent work on proving the Poincaré Conjecture.
But the topic that attracted my greatest attention is the confluence of personal information management, digital lifestyle aggregation, wiki, blogging, comments and trackbacks, and information hosting. I've only recently begun to learn more deeply about the issue of aggregation and its role in information sharing. This blog argues for an especially tight intellectual connection among all of these technologies and cultures. For example, Tauber argues that wiki entries are essentially the same as blog trackbacks, and that trackbacks could be used to share information about software projects among bosses and teammates, using RSS feeds, and to integrate requests with one's PIM. But I'm not limited to reading Tauber's ideas, as he links to other blogs and web pages that present alternative viewpoints on this topic.
Following all these threads will take time, but that I can follow them at all is a tribute to the blogosphere. Certainly, all of this could have been done in the olden days of the web, and indeed many people were publishing their diverse ideas about diverse topics back then. But the advent of RSS feeds and blogging software and wikis has made the conversation much richer, with more power in the hands of both readers and writers. Furthermore, the blogging culture encourages folks to prepare their ideas sooner for public consumption, to link ideas in a way that enables scientific inquiry, to begin a conversation rather than just publish a tract.
The world of ideas is alive and well.
Recently, I ran across an interesting article by David Roundy called The Theory of Patches, which is actually part of the manual for Darcs, a version-control alternative to CVS and the like. Darcs uses an inductive definition to define the state of a software artifact in the repository:
So how does one define a tree, or the context of a patch? The simplest way to define a tree is as the result of a series of patches applied to the empty tree. Thus, the context of a patch consists of the set of patches that precede it.
Starting with this definition, Roundy goes on to develop a set of light theorems about patches, their composition, and their merging. Apparently these theorems guide the design and implementation of Darcs. (I haven't read that far.)
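Roundy's inductive definition translates directly into code. Here is a toy sketch of my own -- not Darcs's implementation -- that models a patch as a function from tree to tree, so that a repository state is simply a fold of the patch history over the empty tree, and a patch's context is the sequence of patches applied before it.

```java
import java.util.*;
import java.util.function.UnaryOperator;

// Sketch of the inductive definition: a tree is the result of applying
// a sequence of patches to the empty tree.
public class PatchFold {
    // Model a "tree" as filename -> contents, and a patch as a function
    // from tree to tree. (A real Darcs patch is also invertible, which
    // this sketch ignores.)
    static Map<String, String> apply(List<UnaryOperator<Map<String, String>>> patches) {
        Map<String, String> tree = new HashMap<>();  // the empty tree
        for (UnaryOperator<Map<String, String>> p : patches) {
            tree = p.apply(new HashMap<>(tree));     // fold each patch in turn
        }
        return tree;
    }

    public static void main(String[] args) {
        List<UnaryOperator<Map<String, String>>> history = List.of(
            t -> { t.put("hello.txt", "hi"); return t; },        // create file
            t -> { t.put("hello.txt", "hi there"); return t; }   // amend it
        );
        System.out.println(apply(history));  // {hello.txt=hi there}
    }
}
```

Note that the second patch only makes sense after the first -- exactly the sense in which its context is the set of patches preceding it.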
The reason I found this interesting was the parallel between repeated application of patches to an empty tree and the way a program grows through a long sequence of small changes.
This led me to think about how an IDE such as Eclipse might have version control built right into it. Each time the programmer adds new behavior or applies a refactoring, the change can be recorded as a patch, a diff to the previous system. The theory of patches needed for a version control system like Darcs would be directly useful in implementing such a system.
I used to program a lot in Smalltalk, and Smalltalk's .changes file served as a lightweight, if incomplete, version control system. I suppose that a Smalltalker could implement something more complete rather easily. Someone probably already has!
Thanks to Chad Fowler's bookmarks for the link.