I finally got around to reading Glenn Vanderburg's Buried Treasure. The theme of his article is "our present is our past, and there's more past in our future". Among his conclusions, Glenn offers some wisdom about programming languages and software education that we all should keep in mind:
What I've concluded is that you can't keep a weak team out of trouble by limiting the power of their tools. The way forward is not figuring out how to achieve acceptable results with weak teams; rather, it's understanding how to build strong teams and how to train programmers to be part of such teams.
Let's stop telling ourselves that potential software developers can't learn to use powerful tools and instead work to figure out how to help them learn. Besides, there is a lot more fun in using powerful tools.
Glenn closes his article with a nascent thought of his, an example of how knowing the breadth of our discipline and its history might help a programmer solve a thorny problem. His current thorny problem involves database migration in Rails, and how that interacts with version control. We usually think of version control as tracking static snapshots of a system, but a database migration subsystem is itself a tracking of snapshots of an evolving database schema -- so your version control system ends up tracking snapshots of what is in effect a little version control system! Glenn figures that maybe he can learn something about solving this problem from Smalltalkers, who deal with this sort of thing all the time -- because their programs are themselves persistent objects in an evolving image. If he didn't know anything about Smalltalk or the history of programming languages, he might have missed a useful connection.
Speaking of Smalltalk, veteran Smalltalker Blaine Buxton wrote recently on a theme you've seen here: better examples. All I can say is what Blaine himself might say: Rock on, Blaine! I think I've found a textbook for my CS 1 course this fall that will help my students see lots of examples more interesting than "Hello, world!" and Fibonacci numbers.
That said, my Inner Geek thoroughly enjoyed a little Scheme programming episode motivated by one of the comments on this article, which taught me about a cool feature of Fibonacci numbers:
This property lends itself to computing Fib very efficiently using binary decomposition and memoizing (caching previously computed values). Great fun to watch an interminably slow function become a brisk sprinter!
As the commenter writes, simple problems often hide gems of this sort. The example is still artificial, but it gives us a cool way to learn some neat ideas. When used tactically and sparingly, toy examples open interesting doors.
Short answer: Because sometimes I am too cocky for my own good.
If you read the recent entry on my latest half marathon, you may have noticed that my mile 3 time stands out as slower than the rest. What happened? After running two comfortable miles at exactly 7:32, about a half minute faster than planned, I started daydreaming about what a great race I would soon have run. "Let's see, that's 91... plus six-and-a-half, which is... Wow!" While patting myself on the back in advance, I forgot to keep running. Too cocky.
Last summer, when I took on the responsibilities of department head, I managed to convince myself that I was the best person for the job. Maybe even the only person. With this attitude, it is all too easy to fall into habits of thought and action where I forget that I have to do the hard work of the job. Why aren't things coming more easily? Too cocky.
So it is with programming. It's quite easy to attack a problem before we fully understand our customer's needs, to start pumping out code before we know where we are going. Get cocky, and pretty soon the problem and the program step up to humble me. Unfortunately, by then, I too often have a big mess on my hands.
I work better when I'm a little nervous, when I'm a bit unsure of whether I can do what I'm trying to do. Maybe that's a trait peculiar to me, but I think not. I've had good friends who thrived on an element of tension, where just enough uncertainty heightens their senses. I am more aware then. When I get cocky, I stop paying attention.
One reason that I like agile approaches to software development is that they encourage me not to get cocky. They tell me to take small steps, so I can't run ahead of my understanding. They tell me to find simple solutions, so that I have a better chance of succeeding (and, when I don't, I won't have erred too badly). They tell me to seek continuous feedback, so that I can't fool myself into thinking that all is going smoothly. The red bar cannot be denied! They tell me to integrate my work continuously, so that I can't fool myself about the system at large. They tell me to interact with other developers and with my customer as frequently as I can, so that others can help me, and keep me honest. The whole culture is one of humility and honesty.
My plan for the Sturgis Falls half marathon yesterday was conservative: 8:00 minutes/mile. Last year I was shooting for 7:00 minutes/mile and fought some humid weather on the way to a finish of 1:34:11, a pace of 7:11. This year, after being under the weather persistently for a couple of months and working through hamstring soreness, I scaled back my expectations to my target marathon pace.
The weather this year was perfect for a race: cloudy, not too breezy, and temperatures just under 60 degrees Fahrenheit. Just before the starting gun, one of my friends said, "I hate when the weather takes away all of my excuses." Yep, because then it's just me out there on the course.
It can be tough to start a big race "on pace" because the adrenalin and the presence of the big crowd usually encourage a fast start. After 1 mile, I had run 7:32 -- a half-minute too fast. But I felt good, and the pack had thinned out, so I kept on. My second mile: 7:32. Still feeling good. The third mile took me 7:44, but I still felt good and actually thought I'd been a little cocky over the last mile. So I kept on running. And I found a very nice rhythm.
This may have been the steadiest ten miles I have ever run:
7:32 - 7:32 - 7:44 - 7:35 -
7:35 - 7:33 - 7:30 - 7:34 -
7:36 - 7:29
There was a glitch in the course's 11th mile due to road construction, so my time for that "mile" (6:13) is certainly inaccurate. Then I ran back-to-back 7:21 miles and sprinted home the last 1/10 mile in 37 seconds, for a finishing time of 1:37:15 -- only three (or maybe four) minutes slower than last year.
I felt great the whole way, though when I was done I felt as if I had run a marathon. I'm beginning to learn that pain and soreness are not a function of one's time or one's fitness, but a function of the combination of time and fitness. On this day, I ran about as fast as I possibly could have, given my fitness level, and my body told me so afterwards.
The only bad news of the day was that I finished off the medal stand for my age group -- by two seconds! To be honest, though, I don't know if I had two seconds left in me at the end of the race, so I have no regrets.
Now, I'll spend a couple of days doing easy runs to recover and then proceed with a training plan for the Twin Cities Marathon on October 2. I don't have a lot of time, and I have more work to do than I had this time last year. But yesterday's race makes me eager to face the challenge.
I ran across the concept of an "h number" again over at Computational Complexity. In case you've never heard of this number, an author has an h number of h if h of her Np papers have ≥ h citations each, and the rest of her (Np - h) papers have ≤ h citations each.
It's a fun little idea with a serious idea behind it: Simply counting publications or the maximum number of citations to an author's paper can give a misleading picture of a scientist's contribution. The h number aims to give a better indication of an author's cumulative effect and relevance.
Of course, as Lance points out, the h number can mislead, too. This number is dependent on the research community, as some communities tend to publish more or less, and cite more or less frequently, than others. It can reward a "clique" of authors who generously cite each other's work. Older authors have written more papers and so will tend to be cited more often than younger authors. Still, it does give us different information than raw counts, and it has the enjoyability of a good baseball statistic.
Now someone has written an h number calculator that uses Google Scholar to track down papers for a specific researcher and then compute the researcher's index. (Of course, this introduces yet another sort of problem... How accurate is Scholar? And do self-citations count?)
The h-number of Eugene Wallingford is 5 (max citations = 22)
You can put that into perspective by checking out some folks with much larger numbers. (Seventy?) I'm just surprised that I have a paper with 22 citations.
I also liked one of the comments to Lance's post. It suggests another potentially useful index -- (h * maxC)/1000, where maxC is the number of citations to the author's most cited paper -- which seems to combine breadth of contribution with depth. For the baseball fans among you, this number reminds me of OPS, which adds on-base percentage to slugging percentage. The analogy even feels right. h, like on-base percentage, reflects how the performer contributes broadly to the community (team); maxC, like slugging percentage, reflects the raw "power" of the author (batter).
The commenter then considers a philosophical question:
Lastly, it is not so clear that a person who has published a thousand little theorems is truly a worse scientist than one who has tackled two large conjectures. You don't agree? Paul Erdos was accused of this for most of his life, yet for the last two decades of his life it became very clear that many of those "little theorems" were gateways to entire areas of research.
Alan Kay doesn't publish a huge number of papers, but his work has certainly had a great effect on computing over the last forty years.
Baseball has lots of different statistics for comparing the performance of players and teams. Having a large set of tools can both be fun and give a more complete picture of the world.
I suppose that I should get back to work beefing up my h number, or at least to doing something administrative...
... courtesy of quite different triggers.
"The plan is more important than the ego."
I finally got back on the track last week for some faster running. Not fast, just faster than I've been able to manage the last couple of months. This Sunday I run a half-marathon, so I didn't want to run a speed workout this week, but I did want to get back into the habit of a weekly trek to the track, so I decided to go this morning for eight miles at my conservative half-marathon goal pace, 8:00 minutes/mile.
Everything went great for a couple of miles, until a college student joined me on the track. Then one of my personal weaknesses came forward: I wanted to run past him. He wasn't running all that fast, and a couple of laps of fast stuff would have put him away. But it may also have cost me, either later in this run or, worse, during my race, when I discovered that I'd burned up my legs too much this week.
Fortunately, I summoned up some uncharacteristic patience. Fulfilling my plan for this morning was more important than stroking my ego for a couple of minutes. What would passing this guy have done for me? It wouldn't have proven anything to me or to him (or his cute girlfriend). In fact, my ego is better stroked by sticking to the plan and having the extra strength in my legs for Sunday morning.
In the end, things went well. I ran 7.5 miles pretty much on pace -- 7:55 or 7:56 per mile -- and then let myself kick home for a fast half mile to end. Can I do 13.1 miles at that pace this weekend? We'll see.
"It changes your life, the pursuit of truth."
I heard Ben Bradlee, former editor of the Washington Post, say this last night in an interview with Jim Lehrer. Bradlee is a throwback to a different era, and his comments were an interesting mix of principle and pragmatism. But this particular sentence stopped me in my tracks. It expresses a truth much bigger than journalism, and the scientist in me felt suddenly in the presence of a compatriot.
The pursuit of truth does change your life. It moves the focus off of oneself and out into the world. It makes hypotheticals and counterfactuals a natural part of one's being. It makes finding out you're wrong not only acceptable but desirable, because then you are closer to something you couldn't see before. It helps you to separate yourself -- your ego -- from your hypothesis about the world, which depersonalizes many interactions with other people and with the world. Note that it doesn't erase you or your ego; it simply helps you think of the world independent from them.
I'm sure every scientist knows just what Bradlee meant.
I had a new experience this morning and learned a new piece of vocabulary to boot. The regional chambers of commerce are recruiting a "software" company to the area, and they asked me and the head of career services here to participate in the initial contact meeting with the company's founder and CFO. I was expecting a software development company, but it turns out that the company does CRM consulting for large corporations, working as a partner with the corporation that produces the particular software platform.
First, in case you aren't familiar with the term, CRM stands for "customer relationship management". It is the customer-facing side of a corporation's technical infrastructure, as contrasted to ERP -- "enterprise resource planning" -- on the back end. As far as I know, Siebel, recently purchased by Oracle, and SAP are the big CRM software companies in the US. With the push on everywhere to squeeze the last penny out of every part of the business, I expect that companies like these, and their consulting partners, are likely to do well in the near future. In any case, our local economy doesn't participate in this part of the software-and-services ecosystem right now, so attracting a firm of this sort would open up a new local opportunity for our students and strengthen the IT infrastructure of the region.
"Selling" our CS department to this company turned out to be pretty easy. They have had great success with IT students from the Midwest and are eager to locate in a place that produces that kind of graduate. I had figured that they might be looking for particular skills or experiences in our students, but beyond knowing either Java or C++, and having access to a course in databases, they asked for nothing. This was refreshing, considering that some companies seem to want university programs to do job training for them. These folks want good students that they can train. That we can do.
Not having participated in many recruiting meetings of this sort before, I was prepared to give the standard pitch: We have great students; they learn new skills with ease; they become model employees; etc. But the company founder short-circuited a lot of that by reminding me that almost every college and university says the same thing. I adapted my pitch to focus on harder facts, such as enrollments, graduation rates, and curriculum. My best chance to "sell" came when answering their questions, because it was only then that we got a good sense of what they are looking for.
Not being a big-time consumer or salesman, I have to remind myself that the things the other guys are saying are meant to sell them and so need to be examined with a critical eye. These folks seemed pretty straightforward, though they did make some claims about the salaries our graduates earn that seemed calculated to enhance their position in negotiating with the cities. But again, I was surprised -- pleasantly -- to find that this company does not seek financial support until after it has its operation in place and has reached an initial employment goal. Rather than trying to extort incentives out of the city upfront, they contribute first. That seems like both a great way to do business and a great way to sell your company to the locals.
During the meeting, it occurred to me just how hard it is to "sell" the quality of life of our area. Just as every university says that it produces great students, every town, city, and metro area touts the fine quality of life enjoyed by its residents. If we think we offer more or better -- and in many ways, I think we do -- how can you get that across in a three-hour meeting or a 10-minute DVD? I lived here for many years before I fully appreciated our recreational trail system, which doubles quite nicely as a commuting mechanism for those who are so inclined. (Now that I spend 7 or 8 hours a week running on our roads and trails, I appreciate them!)
This was the first meeting, but things will move fast. For the next month or so, both sides of the deal will perform their due diligence, and if things work out a deal will be in place by fall. I expect that the university is done with its part, and so the next I hear -- if anything -- will be a public announcement of the development. Like the Halting Problem, no answer doesn't mean that the answer is 'no', though the longer I wait for an answer the less likely that the answer will be 'yes'.
Oh, the new vocabulary: value proposition. Not being tuned in to the latest marketing terminology, I don't think I'd ever heard this phrase before today, but our founder used it several times. He was otherwise light on jargon, at least on jargon that a CS guy would find unusual, so that was okay. Google tells me that "a value proposition is a clear statement of the tangible results a customer gets from using your products or services". The founder spoke of the company's value proposition to the community, to the city and state, and to our graduates. He was clear on what he thinks his company offers all three of these groups -- also a good way to sell yourself.
Three-hour business meetings are not usually at the top of my list of Best Ways to Spend a Beautiful 80-degree Day, but this was pleasurable. I still have a lot to learn about the world our students work in.
I see that Ralph Johnson is giving the Friday keynote talk at ECOOP 2006 this year. His talk is called "The Closing of the Frontier", and the abstract shows that it will relate to an idea that Ralph has blogged about before: software development is program transformation. This is a powerful idea that has emerged in our industry over the last decade or so, and I think that there are a lot of computer scientists who have yet to learn it. I have CS colleagues who argue that most programs are developed essentially from scratch, or at least that the skills our students most need to learn most closely relate to the ability to develop from scratch.
I'm a big believer in learning "basic" programming skills (most recently discussed here), but I'd like for my students to learn many different ways to think about problems and solutions. It's essential they learn that, in a great many contexts, "Although user requirements are important, version N+1 depends more on version N than it does on the latest requests from the users."
Seeing Ralph's abstract brought to mind a paper I read and blogged about a few months back, Rich Pattis's "A Philosophy and Example of CS-1 Programming Projects". That paper suggested that we teach students to reduce program specs to a minimum and then evolve successive versions of a program which converges on the program that satisfies all of the requirements. Agile programming for CS1 back in 1990 -- and a great implementation of the notion that software development is program transformation.
I hope to make this idea a cornerstone of my CS1 course this fall, with as little jargon and philosophizing as possible. If I can help students to develop good habits of programming, then their thoughts and minds will follow. And this mindset helps prepare students for a host of powerful ideas that they will encounter in later courses, including programming languages, compilers, theory, and software verification and validation.
I also wish that I could attend ECOOP this year!
I've come to realize something while preparing for my fall CS1 course.
I don't like textbooks.
That's what some people call a "sweeping generalization", but the exceptions are so few that I'm happy to make one.
For one thing, textbooks these days are expensive. I sympathize with the plight of authors, most of whom put in many more hours than book sales will ever pay them for. I even sympathize with the publishers and bookstores, who find themselves living in a world with an increasingly frictionless used-book market, low-cost Internet-based dealers, and overseas sellers such as Amazon India. But none of this sympathy changes the fact that $100 or more for a one-semester textbook -- one that was written specifically not to serve as a useful reference book for later -- is a lot. Textbook prices probably have not risen any faster than the rate of tuition and room and board, but still.
Price isn't my real problem. My real problem is that I do not like the books themselves. I want to teach my course, and more and more the books just seem to get in the way. I don't like the style of the code shown to students. I don't like many of the design ideas they show students. I don't like all the extra words.
I suppose that some may say these complaints say more about me than about the books, and that would be partly true. I have some specific ideas about how students should learn to program and think like a computer scientist, and it's not surprising that there aren't many books that fit my idiosyncrasy. Sticking to the textbook may have its value, but it is hard to do when I am unhappy at the thought of turning another page.
But this is not just me. By and large, these books aren't about anything. They are about Java or C++ or Ada. Sure, they may be about how to develop software, too, but that's an inward-looking something. It's only interesting if you are already interested in the technical trivia of our discipline.
This issue seems more acute for CS 1, for a couple of reasons. First, one of the goals of that course is to teach students how to program so that they can use that skill in later courses, and so these books tend toward teaching the language. More important is the demand side of the equation, where the stakes are so high. I can usually live with one of the standard algorithms books or compilers books, if it gives students a reasonable point of view and me the freedom to do my own thing. In those cases, the book is almost a bonus for the students. (Of course, then the price of the book becomes more distasteful to students!)
Why use a text at all? For some courses, I reach a point of not requiring a book. Over the last decade or more, I have evolved a way of teaching Programming Languages that no longer requires the textbook with which I started. (The textbook also evolved away from our course.) Now, I require only The Little Schemer, which makes a fun, small, relatively inexpensive contribution to how my students learn functional programming. After a few times teaching Algorithms, I am getting close to not needing a textbook in that course, either.
I haven't taught CS 1 in a decade, so the support of a strong text would be useful. Besides, I think that most beginning students find comfort at least occasionally in a text, as something to read when today's lecture just didn't click, something to define vocabulary and give examples.
So, what was the verdict? After repressing my true desires for a few months in the putative interest of political harmony within the department, yesterday I finally threw off my shackles and chose Guzdial and Ericson's Introduction to Computing and Programming with Java: A Multimedia Approach. It is relatively small and straightforward, though still a bit expensive -- ~ $90. But I think it will "stay out of my way" in the best sense, teaching programming and computing through concrete tasks that give students a chance to see and learn abstractions. Perhaps most important, it is about something, a something that students may actually care about. Students may even want to program. This book passes what I call the Mark Jacobson Test, after a colleague who is a big believer in motivation and fun in learning: a student's roommate might look over her shoulder one night while she's doing some programming and say, "Hey, that looks cool. Whatcha doing?"
Let's see how it goes.
The most recent issue of the Ballast Quarterly Review, on which I've commented before, came out a month or so ago. I had set it aside for the right time to read and only came back around to it yesterday. Once again, I am pleasantly surprised by the interconnectedness of the world.
In this issue, editor Roy Behrens reviews John Willats's book Making Sense Of Children's Drawings. (The review is available on-line at Leonardo On-Line.) Some researchers have claimed that children draw what they know and that adults draw what they see, and that what we adults think we see interferes with our ability to create authentic art. Willats presents evidence that young children draw what they see, too, but that at that stage of neural development they see in an object-centered manner, not a viewer-centered manner. It is this subjectivity of perspective that accounts for the freedom children have in creating, not their bypassing of vision.
The surprising connection for me came in the form of David Marr. A vision researcher at MIT, Marr had proposed the notion that we "see by processing phenomena in two very distinct ways", which he termed viewer-centered and object-centered. Our visual system gathers data in a viewer-centered way and then computes from that data more objective descriptions from which we can reason.
Where's the connection to computer science and my experience? Marr also wrote one of the seminal papers in my development as an artificial intelligence researcher, his "Artificial Intelligence: A Personal View". You can find this paper as Chapter 4 in John Haugeland's well-known collection Mind Design and on-line as a PDF at Elsevier.
In this paper, Marr suggested that the human brain may permit "no general theories except ones so unspecific as to have only descriptive and not predictive powers". This is, of course, not a pleasant prospect for a scientist who wishes to understand the mind, as it limits the advance of science as a method. To the extent that the human mind is our best existence proof of intelligence, such a limitation would also impinge on the field of artificial intelligence.
I was greatly influenced by Marr's response to this possibility. He argued strongly that we should not settle for incomplete theories at the implementation level of intelligence, such as neural network theory, and should instead strive to develop theories that operate at the computational and algorithmic levels. A theory at the computational level captures the insight into the nature of the information processing problem being addressed, and a theory at the algorithmic level captures insight into the different forms that solutions to this information processing problem can take. Marr's argument served as an inspiration for the work of the knowledge-based systems lab in which I did my graduate work, founded on the earlier work on the generic task model of Chandrasekaran.
Though I don't do research in that area any more, Marr's ideas still guide how I think about problems, solutions, and implementations. What a refreshing reminder of Marr to encounter in light reading over the weekend.
Behrens was likely motivated to review Willats's book for the potential effect that his theories might have on the "day-to-day practice of teaching art". As you might guess, I am now left to wonder what the implications might be for teaching children and adults to write programs. Direct visual perception has less to do with the programs an adult writes, given the cultural context and levels of abstraction that our minds impose on problems, but children may be able to connect more closely with the programs they write if we place them in environments that get out of the way of their object-centered view of the world.
I've been meaning to write all week, but it turned out to be busy. First, my wife and daughters returned from Italy, which meant plenty of opportunity for family time. Then, I spent much of my office week writing content for our new department website. We were due for a change after many years of the same look, and we'd like to use the web as a part of attracting new students and faculty. The new site is very much an early first release, in the agile development sense, because I still have a lot of work to do. But it fills some of our needs well enough now, and I can use bits and pieces of time this summer to augment the site. My blogging urge was most satisfied this week by the material I assembled and wrote for the prospective students section of the site. (Thanks to the Lord of the Webs for his design efforts on the system.)
I did get a chance to thumb through the May issue of ACM Queue magazine, where I read with some interest the interview with Werner Vogels, CTO of Amazon. Only recently I had been discussing Vogels as a potential speaker for OOPSLA this or some year soon. I've read enough of Vogels's blog to know that he has interesting things to say.
At the end of the interview, Vogels comments on recruiting students and more generally on the relationship of today's frontier IT firm to academia. First, on what kind of person Amazon seeks:
The Amazon development environment requires engineers and architects to be very independent creative thinkers. We are building things that nobody else has done before, so you need to be able to think outside the box. You need to have a strong sense of ownership, because in the small teams in which you will work at Amazon, your colleagues will count on you to pull your weight -- especially when it comes to operating the service that you have built. Can you take responsibility for making this the best it can be?
Many students these days hear so much about teamwork and "people" skills that they sometimes forget that every team member has to be able to contribute. No one wants a teammate who can't produce. Vogels stresses this upfront. To be able to contribute effectively, each of us needs to develop a set of skills that we can use right now, as well as the ability to pick up new skills with some facility.
I'd apply the same advice to another part of Vogels's answer. In order to "think outside the box", you have to start with a box.
Vogels then goes on to emphasize how important it is for candidates to "think the right way about customers and technology. Technology is useless if not used for the greater good of serving the customer." Sometimes, I think that cutting edge companies have an easier time cultivating this mindset than more mundane IT companies. A company selling a new kind of technology or lifestyle has to develop its customer base, and so thinks a lot about customers. It will be interesting to see how companies like Yahoo!, Amazon, and Google change as they make the transition into the established, mainstream companies of 2020.
On the relationship between academia and industry, Vogels says that faculty and Ph.D. students need to get out into industry in order to come into contact with "the very exciting decentralized computing work that has rocked the operating systems and distributed systems world in the past few years". Academics have always sought access to data sets large enough for them to test their theories. This era of open source and open APIs has created a lot of new opportunities for research, but open data would do even more. Of course, the data is the real asset that the big internet companies hold, so it won't be open in the same way for a while. Internships and sabbaticals are the best avenue open for academics interested in this kind of research these days.
Yesterday evening, I was thinking about map-making, which I had discussed briefly in my blog on Programming as Discovery and Expression. Dick Gabriel discussed map-making as an activity like writing and programming, an act blending discovery and expression. I've long considered the analogy between programming and writing, but the analogy between programming and map-making was new to me. It jumped back into the forefront of my mind while I was out walking.
Programming is a lot like map-making. Cartographers start with the layout of the physical world, the earth and its features, and produce a model for a specific purpose. We programmers start with a snapshot of some part of the world, too, and produce a model for a specific purpose. Like map-makers, we leave out some details, the ones that don't serve our purpose. When we write business software, we create little models of people as objects or entries in a database table, but we leave out all sorts of features. I rarely see a "hair color" attribute on my business objects. Like map-makers, we accentuate other details of interest, such as wage rate and years of service.
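To make the analogy concrete, here is a minimal sketch in Ruby of the kind of business object I have in mind. The Employee class, its attributes, and the 1%-per-year longevity bonus are all hypothetical, invented just for illustration; the point is only that the model, like a map, keeps the features that serve its purpose and leaves the rest out.

```ruby
# A hypothetical Employee model for a payroll program. Like a map,
# it keeps only the details that serve its purpose -- wage rate and
# years of service -- and omits the rest (hair color, shoe size, ...).
class Employee
  attr_reader :name, :wage_rate, :years_of_service

  def initialize(name, wage_rate, years_of_service)
    @name = name
    @wage_rate = wage_rate
    @years_of_service = years_of_service
  end

  # Weekly pay with a made-up 1% longevity bonus per year of service.
  def weekly_pay(hours = 40)
    wage_rate * hours * (1 + 0.01 * years_of_service)
  end
end

e = Employee.new("Ada", 25.0, 10)
puts e.weekly_pay   # 25.0 * 40 hours * 1.10 bonus = 1100.0
```

A cartographer drawing a road map makes the same kind of choice when she leaves off the elevation contours: not wrong, just out of scope.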
Is there anything for us to learn about programming from this analogy? Have you ever heard of this metaphor before, or seen it discussed somewhere? I'm not thinking of conceptual analogs like "mind maps" and generic modeling, but honest-to-goodness maps, with Mercator projections and isobars and roads. I think I'll have to read a bit on the craft of cartography to see what value there might be in the analogy.
Recently I pointed you to Dick Gabriel's The Art of Lisp & Writing, which I found when I looked for a Gabriel essay that discussed triggers and Richard Hugo. I must confess that I recommended Dick's essay despite never having read it; then again, I've never been disappointed by one of his essays and figured you wouldn't be, either.
I read the essay over tortellini last night, and I wasn't disappointed. I learned from his discussion of the inextricable partners in creation, discovery and presentation. I learned about mapmakers and how their job is a lot like an engineer's -- and a lot like a writer's.
Most of exploration is in the nature of the locally expected: What is on the other side of that hill is likely to be a lot like what's on this side. Only occasionally is the explorer taken totally by surprise, and it is for these times that many explorers live. Similarly for writers: What a writer thinks up in the next minute is likely to be a lot like what is being thought this minute -- but not always: Sometimes an idea so initially apparently unrelated pops up that the writer is as surprised as anyone. And that's why writers write.
As anyone who has ever written a program worth writing will tell you, that is also why programmers program. But then that is Dick's point. Further, he reminds us why languages such as Lisp and Smalltalk never seem to die: because programmers want them, need them.
Gabriel has been writing about programming-and-writing for many years now, and I think that his metaphor can help us to understand our discipline better. For example, by explaining writing as "two acts put together: the act of discovery and the act of perfecting the presentation", the boundaries of which blur for each writer and for each work, we see in relief one way in which "software engineering" and the methodologists who drive it have gone astray. I love how Dick disdains terms such as "software developer" and "software design and implementation". For him, it's all programming, and to call it something else simply obscures a lot of what makes programming programming in the first place.
Reading this essay crystallized in my mind another reason that I think Java, Ada, and C++ are not the best (or even okay) choices for CS 1: They are not languages for discovery. They are not languages that encourage play, trying to solve a hard problem and coming to understand the problem in the process of writing the program that solves it. That's the great joy of programming, and it is exactly what novice programmers need to experience. To do so, they need a language that lets them -- helps them? -- both to discover and to express. Java, Ada, and C++ are Serious Languages that optimize on presentation. That's not what novice programmers need, and probably not what the pros need, either.
This essay also may explain the recent rise of Ruby as a Programmer's Programming Language. It is a language for both discovery and expression.
As usual, Gabriel has explored a new bit of landscape for me, discovered something of value, and given us a map of the land.