Philip Greenspun recently posted a provocative blog entry called Why do high school kids keep signing up to be undergrads at research universities? If you've never read any of Philip's stuff, this might seem like an odd and perhaps even naive piece. His claim is pretty straightforward: "Research universities do not bother to disguise the fact that promotion, status, salary, and tenure for faculty are all based on research accomplishments," so why don't our brightest, most ambitious high school students figure out that these institutions aren't really about teaching undergraduates? This claim might seem odd considering that Philip himself went to MIT and now teaches as an adjunct prof there. But he has an established track record of writing about how schools like Harvard, MIT, the Ivies, and their ilk could do a better job of educating undergrads, and at a lower cost.
My thoughts on this issue are mixed, though at a certain level I agree with his premise. More on how I agree below.
As an undergraduate, I went to a so-called regional university, one that grants Ph.D.s in many fields but which is not typical of the big research schools Philip considers. I chose the school for its relatively strong architecture school, which ranked in the top 15 or 20 programs nationally despite being at a school that overall catered largely to a regional student population. There I was part of a good honors college and was able to work closely with published scholars in a way that seems unlikely at a Research U. However, I eventually changed my major and studied computer science and accounting. The accounting program had a good reputation, but the computer science department was average at best. It had a standard curriculum, and I was a good enough student and had enough good profs that I was able to receive a decent education and to have my mind opened to the excitement of doing computer science as an academic career. But when I arrived at grad school I was probably behind most of my peers in terms of academic preparation.
I went to a research school for my graduate study, though not one in the top tier of CS schools. It was at that time, I think, making an effort to broaden, deepen, and strengthen its CS program (something I think it has done). The department gave me great financial support and opportunities to teach several courses and do research with a couple of different groups. The undergrad students I taught and TAed sometimes commented that they felt like they were getting a better deal out of my courses than they got out of other courses at the university, but I was often surprised by how committed some of the very best researchers in the department were to their undergrad courses. Some of the more ambitious undergrads worked in labs with the grad students and got to know the research profs pretty well. At least one of those students is now a tenured prof in a strong CS program down south.
Now I teach at a so-called comprehensive university, one of those medium-sized state schools that offers neither the prestige of the big research school nor the prestige of an elite liberal arts school. We are in a no-man's land in other ways as well -- our faculty are expected to do research, but our teaching expectations and resources place an upper bound on what most faculty can do; our admissions standards grant access to a wider variety of students, but such folks tend to require a more active, more personal teaching effort.
What Greenspun says contains an essential truth in a couple of ways. The first is that a lot of our best students think that they can only get a good education at one of the big research schools. That is almost certainly not true. The variation in quality among the programs at the less elite schools is greater, which requires students and their parents to be more careful in selecting programs. It also requires the schools themselves to do a better job communicating where their quality programs lie, because otherwise people won't know.
But a university such as mine can assemble a faculty that is current in the discipline, does research that contributes value (even basic knowledge), and cares enough about its mission to teach to devote serious energy to the classroom. I don't think that a comprehensive's teaching mission in any way speaks ill of a research school faculty's desire to teach well but, as Greenspun points out, those faculty face strong institutional pressure to excel in other areas. The comprehensive school's lower admission standards mean that weaker students have a chance that they couldn't get elsewhere. Its faculty's orientation means that stronger students have a chance to excel in collaboration with faculty who combine interest and perhaps talent in both teaching and research.
If the MITs and Harvards don't excel in teaching undergrads, what value do they offer to bright, ambitious high school students? Commenters on the article answered in a way that sometimes struck me as cynical or mercenary, but I finally realized that perhaps they were simply being practical. Going to Research U. or Ivy C. buys you connections. For example:
Seems pretty plain that he's not looking to buy the educational experience, he's looking to buy the peers and the prestige of the university.
And in my experience of what school is good for, he's making the right decision.
You wanna learn? Set up a book budget and talk your way into or build your own facilities to play with the subject you're interested in. Lectures are a lousy way to learn anyway.
But you don't go to college to learn, you go to college to make the friends who are going to be on a similar arc as you go through your own career, and to build your reputation by association....
You will meet and make friends with rich kids with good manners who will provide critical angel funding and business connections for your startups.
Who cares if the undergrad instruction is subpar? Students admitted to these schools are strong academically and likely capable of fending for themselves when it comes to content. What these students really need is a frat brother who will soon be an investment banker in a major NYC brokerage.
It's really unfair to focus on this side of the connection connection. As many commenters also pointed out, these schools attract lots of smart people, from undergrads to grad students to research staff to faculty. And the assiduous undergrad gets to hang around with them, learning from them all. Paul Graham would say that these folks make a great pool of candidates to be partners in the start-up that will make you wealthy. And if a strong undergrad can fend for him- or herself, why not do it at Harvard or MIT, in a more intellectual climate? Good points.
But Greenspun offers one potential obstacle, one that seems to grow each year: price. Is the education an undergrad receives at an Ivy League or research school, intellectual and business connections included, really worth $200,000? In one of his own comments, he writes:
Economists who've studied the question of whether or not an Ivy League education is worth it generally have concluded that students who were accepted to Ivy League schools and chose not to attend (saving money by going to a state university, for example) ended up with the same lifetime income. Being the kind of person who gets admitted to Harvard has a lot of economic value. Attending Harvard turned out not to have any economic value.
I'm guessing, though, that most of these students went to a state research university, not to a comprehensive. I'd be curious to see how the few students who did opt for the less prestigious but more teaching-oriented school fared. I'm guessing that most still managed to excel in their careers and amass comparable wealth -- at least wealth enough to live comfortably.
I'm not sure Greenspun thinks that everyone should agree with his answer so much as that they should at least be asking themselves the question, and not just assuming that prestige trumps educational experience.
This whole discussion leads me to want to borrow a phrase from Richard Gabriel that he applies to talent and performance as a writer. The perceived quality of your undergraduate institution does not determine how good you can get, only how fast you can get good.
I read Greenspun's article just as I was finishing reading the book Teaching at the People's University, by Bruce Henderson. This book describes the history and culture of the state comprehensive universities, paying special attention to the competing forces that on the one hand push their faculty to teach and serve an academically diverse student body and on the other expect research and the other trappings of the more prestigious research schools. Having taught at a comprehensive for fifteen years now, I can't say that the book has taught me much I didn't already know about the conflicting culture of these schools, but it paints a reasonably accurate picture of what the culture is like. It can be a difficult environment in which to balance the desire to pursue basic research that has a significant effect in the world and the desire to teach a broad variety of students well.
There is no doubt that many of the students who enroll in this sort of school are served well, because otherwise they would have little opportunity to receive a solid university education; the major research schools and elite liberal arts schools wouldn't admit them. That's a noble motivation and it provides a valuable service to the state, but what about the better students who choose a comprehensive? And what of the aspirations of faculty who are trained in a research-school environment to value their careers by the intellectual contribution they make to their discipline? Henderson does a nice job laying these issues out for people to consider explicitly, rather than to back into them when their expectations are unmet. This is not unlike what Greenspun does in his blog entry, laying an important question on the line that too often goes unasked until the answer is too late to matter.
All this said, I'm not sure that Greenspun was thinking of the comprehensives at all when he wrote his article. The only school he mentions as an alternative to MIT, Harvard, and the other Ivies is the Olin College of Engineering, which is a much different sort of institution than a mid-level state school. I wonder whether he would suggest that his young relative attend one of the many teacher-oriented schools in his home state of Massachusetts?
After having experienced two or three different kinds of university, would I choose a different path for myself in retrospect? This sort of guessing game is always difficult to play, because I have experienced them all under different conditions, and they have all shaped me in different ways. I sometimes think of the undergraduates who worked in our research lab while I was in grad school; they certainly had broader and deeper intellectual experiences than I had as an undergraduate. But as a first-generation university attendee I grew quite a bit as an undergraduate and had a lot of fun doing it. Had I been destined for a high-flying academic research career, I think I would have had one. Some of my undergrad friends have done well on that path. My ambition, goals, and inclinations are well suited for where I've landed; that's the best explanation for why I've landed here. Would my effect on the world have been greater had I started at a Harvard? That's hard to say, but I see lots of opportunities to contribute to the world from this perch. Would I be happier, or a better citizen, or a better father and husband? Unlikely.
I wish Greenspun's young relative luck in his academic career. And I hope that I can prepare my daughters to choose paths that allow them to grow and learn and contribute.
As a department head, I am occasionally invited to attend an event as a "university leader". This morning I had the chance to attend a breakfast reception thrown by the university for our six local state legislators. They had all been part of a strong funding year for state universities, and this meeting was a chance for us to say "thank you" and to tell them all some of the things we are doing. This may not sound like all that much fun to some of you; it's certainly unlike a morning spent cutting code. But I find this sort of meeting to be a good way to put a face on our programs to the people who hold our purse strings, and I admit to enjoying the experience of being an "insider".
I found our delegation to consist of good people who had done their homework and who have good intentions regarding higher education. Two or three of them seem to be well-connected in the legislature and so able to exercise some leadership. One in particular has the look, bearing, speaking ability, and mind that bode well should he decide to seek higher elected office.
I can always tell when I am in the presence of folks who have to market the university or themselves, as nearly every person in the room must. I hear sound bites about "windows of opportunity" and "dynamic personalities in the leadership". My favorite sound bite of the morning bears directly on a computer science department: "The jobs of the future haven't been invented yet."
This post involves computing in an even more immediate way. Upon seeing my name tag, two legislators volunteered that the toughest course they took in college was their computer programming class, and the course in which they received their lowest grades (a B in Cobol and a C in Pascal, for what it's worth). These admissions came in separate conversations, completely independent from one another. The way they spoke of their experiences let me know that the feeling is still visceral for them. I'm not sure that this is the sort of impression we want to make on the folks who pay our bills! Fortunately, they both spoke in good nature and let us know that they understand how important strong CS programs are for the economic development of our region and state. So I left the meeting with a good feeling.
I remember back in my early years teaching (*) I had a student who came in to ask a question about a particular error message she had received from our Pascal compiler. She had some idea of what caused it, and she wanted to know what it meant. It was a pretty advanced error, one we hadn't reached the point of making in class, so I asked her how she had managed to bump into it. Easy, she said; she was intentionally creating errors in her program so that she could figure out what sort of error messages would result.
If you teach programming for very long, you are bound to encounter such a student. She was doing fine in class but was afraid that she was having life too easy and so decided to use her time productively -- creating errors that she could learn to debug and maybe even anticipate.
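Her exercise translates directly to any modern toolchain. Here is a minimal sketch in Python of the same idea (the broken snippets and their flaws are my own hypothetical examples, not hers): deliberately feed bad code to the compiler and study the messages that come back.

```python
# Deliberately compile broken snippets to learn the compiler's error
# messages before meeting them by accident in a real program.
snippets = [
    "x = (1 + 2",    # unclosed parenthesis
    "return 5",      # 'return' outside a function
    "def f(:",       # malformed parameter list
]

for src in snippets:
    try:
        compile(src, "<exercise>", "exec")
    except SyntaxError as err:
        print(f"{src!r:16} -> {err.msg}")
```

The exact wording of each message varies by Python version, which is itself part of the lesson: you learn what your compiler actually says, not what the textbook paraphrases.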
I've written many times before about practice, including practice for practice's sake. That entry was about the idea of creating "valueless software", which one writes as a learning exercise, not for anyone's consumption. But my forward-thinking student was working more in the vein of fixin' what's broke, in which one practices in an area of known weakness, with the express purpose of making that part of one's game strong. My student didn't know many of the compiler error messages that she was going to face in the coming weeks, so she set out to learn them.
I think that she was actually practicing a simple form of an even more specific learning pattern: consciously seeking out, even creating, challenges to conquer. Make a Mess, Clean it Up! is a neat story about an example of this pattern in the history of the Macintosh team. There, Donn Denman talks about Burrell Smith's surprising way of getting better at Defender, a video game played by the Mac programmers as a way to relax or pump themselves up:
Instead of avoiding the tough situations, he'd immediately create them, and immediately start learning how to handle the worst situation imaginable. Pretty soon he would routinely handle anything the machine could throw at him.
He'd lose a few games along the way, but soon he was strong in areas of the game that his competitors may not have even encountered before -- because they had spent time avoiding difficult times! Denman saw in this game-playing behavior something he recognized in Smith's life as a programmer: he
... likes challenges so much that he actually seeks them out and consciously creates them. In the long run, this approach makes sense. He seems to aggressively set up challenging situations throughout his life. Then, when life throws him a curve ball, he'll swing hard, and knock it out of the park.
The article uses two metaphors for this pattern: make a mess so that you can clean it up, and choose to face tough situations so that you are ready for the curve balls life throws you. (I guess those tough situations must be akin to the nasty breaking stuff of a major-league pitcher.) My title for this entry offers a third metaphor: digging yourself into a hole so that you can learn how to get out of one. As much of a baseball fan as I was growing up, digging oneself into a hole was the metaphor I heard more often. Whatever its name, the idea is the same.
I find that I'm more likely to apply this pattern in some parts of my life than others. In programming, I can recover from bad situations by re-compiling, re-booting, or at worst reinstalling. In running, I can lose all the races I want, or come up short in a training session all I want -- so long as I don't put my body at undue risk. The rest of life, the parts that deal with other people, require some care. It's hard to create an environment in which I can screw up my interpersonal relationships just so that I can learn how to get out of the mess. There's a different metaphor for such behavior -- burning bridges -- that connotes its irreversibility. Besides, it's not right to treat people as props in my game. I suppose that this is a place in which role play can help, though artificial situations can go only so far.
Where games, machines, and tools are concerned, though, digging a deep hole just for the challenge of getting out of it can be a powerful way to learn. Pretty soon, you can find yourself as master of the machine.
(*) Yikes, how old does that make me sound?
I haven't written about running lately. There hasn't been much to say as I slowly rebuilt my mileage, starting over yet again. My first milestone came yesterday morning, at the Sturgis Falls half marathon.
The short description: I did not run a personal best, yet my race was a surprising success. Today, I am sore, and happily so.
The long description: The race went much better than planned. I went into the day with relatively light training, consecutive weeks of 28, 30, 30, and 32 miles. My longest runs were 11 miles two weeks ago and 10 last week. I had done a couple of runs that pass for fast, but only 4-5 miles each. So, my plan for the race was conservative: try to run 8-10 miles at an 8:30/mile pace and then see how I felt. If I felt weak, I'd just try to maintain that pace; if I felt strong, I would see whether I could speed up a bit.
I ran miles 1-5 right at an 8:30/mile average. Unintentionally, I ran the sixth mile in 8:20 or so, and it felt okay so I held that pace through the ninth mile. I was fully prepared for the chance that this would burn me out. But it didn't. Miles 10-12 took us along a trail into downtown and back, with a small loop on the end. This meant that there were a lot of runners all along the course in both directions. The energy of competition kicked in... I ran my Mile 10 in 8:07, and then Mile 11 in 8:03. The race was on. I took the twelfth mile in 7:41, finally passing a young, strong-looking runner whom I'd been tracking for several miles. I ran the last full mile in 7:30 and sprinted home to finish in 1:47. The young guy finished even stronger and beat me by 7 seconds. No matter. Though this was my second worst time ever in a half marathon, it was among the most satisfying, given my expectations. Context matters.
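The splits add up, for anyone who wants to check the arithmetic. A quick sketch (the mile splits are from the account above; the time for the final 0.1 mile is my assumption, a roughly 45-second sprint):

```python
# Mile splits, in minutes: five at 8:30, four at 8:20, then
# 8:07, 8:03, 7:41, and 7:30 for the last full mile.
splits = [8.5] * 5 + [8 + 20/60] * 4 + [8 + 7/60, 8 + 3/60, 7 + 41/60, 7.5]

total = sum(splits) + 45/60   # assumed ~45-second finishing sprint
print(f"{int(total // 60)}:{int(total % 60):02d}")  # 1:47
```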
I'm still tired from the race and a bit stiff, but that's to be expected. I have not run this far or this fast for this long in a long time. My body has a right to register its reaction.
With yesterday's race, my last five weeks have been in the 28-30 mile range. That's a far cry from the regular 38-41 mile weeks I ran throughout 2005-2006 but also my best stretch since December. I still tire more easily than in the past, and I do not have much speed yet. But I can now embark on training for Marine Corps Marathon with some confidence. I'm wondering how aggressive I should be in training. I'm even thinking ahead to the race itself -- maybe I should set a goal of 8:30/mile for the first 15 miles or so and then see if I can finish strong? There is a lot to be said for balancing high ambition with a dose of realism that increases the probability of success -- and fun.
Several folks have already recommended Gerard Meszaros's new book, xUnit Test Patterns. I was fortunate to have a chance to review early drafts of Gerard's pattern language on the web and then at PLoP 2004, where Gerard and I were in a writers' workshop together. By that time I felt I knew a little about writing tests and using JUnit, but reading Gerard's papers that fall taught me just how much more there was for me to learn. I learned a lot that month and can only hope that my participation in the workshop helped Gerard a small fraction as much as his book has helped me. I strongly echo Michael Feathers's recommendation: "XUnit Patterns is a great all around reference." (The same can be said for Michael's book, though my involvement reviewing early versions of it was not nearly as deep.)
As I grow older, I have a growing preference for short books. Maybe I am getting lazy, or maybe I've come to realize that most of the reasons for which I read don't require 400 or 600 pages. Gerard's book weighs in at a hefty 883 pages -- what gives? Well, as Martin Fowler writes in his post Duplex Book, XUnit Test Patterns is really more than one book. Martin says two, but I think of it as really three:
So in a book like this, I have the best of two worlds: a relatively short, concise, well-written story that shows me the landscape of automated unit testing and gets me started writing tests, plus a complete reference book to which I can turn as I need to learn a particular technique in greater detail. I can read the story straight through and then jump into and out of the catalogs as needed. The only downside is the actual weight of the book... It's no pocket reference! But that's a price I am happy to pay.
One of my longstanding goals has been to write an introductory programming textbook, say for CS1, in the duplex style. I'm thinking something like the dual The Timeless Way of Building/A Pattern Language, only shorter and less mystical. I had always hoped to be the first to do this, to demonstrate what I think is a better future for instructional books. But at this increasingly late date, I'd be happy if anyone could succeed with the idea.
Another coincidence in time... The day after I post a note on Alan Kay's thoughts on teaching math and science to kids, I run across (via physics blogger and fellow basketball crazy Chad Orzel) Sean Carroll's lament about a particularly striking example of what Kay wants to avoid.
Carroll's article points one step further to his source, Eli Lansey's The sad state of science education, which describes a physics club's visit to a local elementary school to do cool demos. The fifth graders loved the demos and were curious and engaged; the sixth graders were uninterested and going through the motions of school. From this one data point, Carroll and Lansey hypothesize that there might be a connection between this bit flip and what passed for science instruction at the school. Be sure to visit Lansey's article if only to see the pictures of the posters these kids made showing their "scientific procedure" on a particular project. It's really sad, and it goes on in schools everywhere. I've seen similar examples in our local schools, and I've also noticed this odd change in stance toward science -- and loss in curiosity -- that seems to happen to students around fifth or sixth grade. Especially among the girls in my daughters' classes. (My older daughter seemed to go through a similar transition about that time but also seems to have rediscovered her interest in the last year as an eighth grader. My hope abounds...)
Let's hope that the students' loss of interest isn't the result of some unavoidable developmental process and does follow primarily from non-science or anti-science educational practices. If it's the latter, then the sort of things that Alan Kay's group are doing can help.
I haven't written about it here yet, but Iowa's public universities have been charged by the state Board of Regents with making a fundamental change in how we teach science and math in the K-12 school system. My university, which is the home of the state's primary education college, is leading the charge, in collaboration with our bigger R-1 sisters. I'll write more later as the project develops, but for now I can point you to a web page that outlines the initiative. Education reform is often sought, often started, and rarely consummated to anyone's satisfaction. We hope that this can be different. I'd feel a lot more confident if these folks would take work like Kay's as its starting point. I fear that too much business-as-usual will doom this exercise.
As I type this, I realize that I will have to get more involved if I want what computer scientists are doing to have any chance of being in the conversation. More to do, but a good use of time and energy.
After commenting on Alan Kay's thesis, I decided to read a more recent paper by Alan that was already in my stack, Thoughts About Teaching Science and Mathematics To Young Children. This paper is pretty informal, written in a conversational voice and marked by occasional typos. In some ways, it felt like a long blog entry, in which Kay could speak to a larger audience about some of the ideas that motivate his current work at the Viewpoints Research Institute. It's short -- barely more than four pages -- so you should read it yourself, but I'll share a few thoughts that came to mind as I read this morning in between bouts of advising incoming CS freshmen.
Kay describes one of the key challenges to teaching children to become scientists: we must help students to distinguish between empiricism and modeling on one hand and belief-based acceptance of dogma on the other. This is difficult for at least three reasons:
The last of these is a problem because most of us don't understand very well how children think, and most of us are prone to organize instruction in a way that conforms with how we think. As a parent who has watched one daughter pass through middle school and who has another just entering, I have seen children grok some ideas much better than older students when the children have an opportunity to engage the concepts in a fortuitous way. I wish that I had gleaned from my experience some ideas that would enable me to create just the right opportunities for children to learn, but I'm still in the hit-or-miss phase.
This brings out a second-order effect of understanding how children think, which Kay points out: "the younger the children, the more adept need to be their mentors (and the opposite is more often the case)". To help someone learn to think and act like a scientist, it is at least valuable and more likely essential for the teacher (to be able) to think and act like a scientist. Sadly, this is all too rare among elementary-school and even middle-school teachers.
I also see this issue operating at the level of university CS education. Being a good CS1 teacher requires both knowing a lot about how students' minds work and being an active computer scientist (or software developer). Whatever drawbacks you may find in a university system that emphasizes research even for teaching faculty, I think that this phenomenon speaks to the value of the teacher-scholar. And by "scholar", I mean someone who is actively engaged in doing the discipline, not the fluffy smokescreen that the term sometimes signifies for faculty who have decided to "focus on their teaching".
For Kay, it is essential that children encounter "real science". He uses the phrase "above the threshold" to emphasize that what students do must be authentic, and not circumscribed in a way that cripples asking questions and building testable models. At the end of this paper, he singles out for criticism Interactive Physics and SimCity:
Both of these packages have won many "educational awards" from the pop culture, but in many ways they are anti-real-education because they miss what modern knowledge and thinking and epistemology are all about. This is why being "above threshold" and really understanding what this means is the deep key to making modern curricula and computer environments that will really help children lift themselves.
I found particularly useful Kay's summary of Papert's seminal contribution to this enterprise and of his own contribution. Papert combined an understanding of science and math "with important insights of Piaget to realize that children could learn certain kinds of powerful math quite readily, whereas other forms of mathematics would be quite difficult." In particular, Papert showed that children could understand in a powerful way the differential geometry of vectors and that the computer could play an essential role in abetting this understanding by doing the integral calculus that is beyond them -- a computation that is not necessary for a first-order understanding of the science. Kay claims himself to have made only two small contributions:
What must the design of these tools be like? It must hide gratuitous complexity while exposing essential complexity, doing "the best job possible to make all difficulties be important ones whose overcoming is the whole point of the educational process". Learning involves overcoming difficulties, but we want learners to overcome difficulties that matter, not defects in the tools or pedagogy that we design for them. This is a common theme in the never-ending discussion of which language to use to teach CS majors to write programs -- if, say, C introduces too many unnecessary or inconsistent difficulties, should we use it to teach people to program? Certainly not children, would say Kay, and he says the same thing about most of the languages we use in our universities. Unfortunately, the set of languages that are usually part of the CS1 discussion don't really differ in ways that matter... we are discussing something that matters a lot but not in a way that matters at all.
Getting the environment and language right matters, because students who encounter unnecessary difficulties will usually blame themselves for their failure, and even when they don't they are turned off to the discipline. Kay says it this way:
In programming language design in a UI, especially for beginners, this is especially crucial.... Many users will interpret [failure] as "I am stupid and can't do this" rather than the more correct "The UI and language designers are stupid and they can't do this".
This echoes a piece of advice by Paul Graham from an entirely different context, described here recently: "So when you get rejected by investors, don't think 'we suck,' but instead ask 'do we suck?' Rejection is a question, not an answer." Questions, not answers.
Kay spends some time talking about how language design can provide the right sort of scaffolding for learning. As students learn, we need to be able to open up the black boxes that are primitive processes and primitive language constructs in their learning to expose a new level of learning that is continuous with the previous. As Kay once wrote elsewhere, one of the beautiful things about how children learn natural language is that the language learned by two-year-olds and elementary school students is fundamentally the same language used by our great authors. The language we use to teach children science and math, and the language they use to build their models, should have the same feature.
But designing these languages is a challenge, because we have to strike a balance between matching how learners think and providing avenues to greater expressiveness:
Finding the balance between these is critical, because it governs how much brain is left to the learner to think about content rather than form. And for most learners, it is the initial experiences that make the difference for whether they want to dive in or try to avoid future encounters.
Kay is writing about children, but he could just as well be describing the problem we face at the university level.
Of course, we may well have been handicapped by an education system that has already lost most students to the sciences by teaching math and science as rules and routine and dogma not to be questioned. That is ultimately what drives Kay and his team to discover something better.
If you enjoy this paper -- and there is more there than I've discussed here, including a neat paragraph on how children understand variables and parameters -- check out some more of his team's recent work on VPRI's Writings page.
Tim Ottinger recently posted a blog entry on a problem that we all face: how to know what the simplest thing is when trying to do the simplest thing. Tim points out that what he finds simple may not match at all what others find simple, and vice versa. This is a problem whenever we are working collaboratively, because our decision becomes part of the common code base that everyone works with. But I think it's also a problem for solo programmers who want to remain true to the spirit of YAGNI and reap the benefits offered by growing a program organically in small steps.
When I face this decision in my individual programming, I try to make the choice between two potential implementations based on the sheer effort I have to make today to make my program run with the new feature in it. This means ignoring the voice in my head that says, "But you know that later you'll have to change that." Well, okay then, I'll change it later. The funny thing is that sometimes, I don't have to change it later, at least not in the way I thought back then.
Below a certain threshold of time and energy, I treat all effort as roughly the same. Often, one approach uses a base data type and the other uses a simple object that hides the base data type. I can often implement the former a small bit faster, but I can usually implement both quickly enough to have my feature running now. In such cases, I will usually opt for the object. Maybe this violates the spirit of doing the simplest thing that could possibly work, but I don't find that to be the case in practice. Even when I am wrong and make a change later, it is almost never to retract my object but to change the object's implementation. I almost always want my program to speak in the language of the problem domain, not the underlying programming language, and the object enables my program to do that. In this sense, my experience jibes with that of Kevin Lawrence, who coined an eponymous maxim to address a similar case:
If you ever feel yourself drawn toward writing a static method, obey Kevin's Maxim: "in an object-oriented language the simplest thing that could possibly work is an object."
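As a hypothetical illustration (my example, not Kevin's or Tim's), here is the kind of choice I mean: a grade could live in the program as a bare number, or as a small object that speaks the language of the problem domain. The object costs only a few extra minutes now, and later changes tend to alter its implementation rather than retract it:

```python
# A bare int would work, but a small object lets the program speak
# in domain terms. Hypothetical example; not from Tim's entry.

class Grade:
    """Wraps a raw point total so callers deal in domain language."""

    def __init__(self, points):
        self.points = points

    def is_passing(self):
        # The domain rule lives in one place, not scattered through the code.
        return self.points >= 60

    def letter(self):
        for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
            if self.points >= cutoff:
                return letter
        return "F"

g = Grade(85)
print(g.letter())      # B
print(g.is_passing())  # True
```

If the grading scale changes later, the change happens inside `Grade`; the rest of the program, which speaks only of grades and letters, is untouched.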
The key is that we seek to defer non-trivial programming effort until the time spent making it will prove valuable in today's version of the system.
Whenever pair programming is involved, the desire to do the simplest thing becomes the subject of a pairwise conversation. And as pairs form and dissolve over time, the group's collective wisdom can become part of the decision-making process. The goal of focusing the time spent on delivering value today remains the same, but now we can draw on more experience in making the decision.
Ultimately, I think the value in having YAGNI and Do the Simplest Thing that Could Possibly Work as goals comes back to something that came up in my last post. The value of these guidelines comes not from the specific answers we come up with but from the fact that we are asking the questions at all. At least we are thinking about giving our customer fair value for the work we are doing today, and trying to keep our program simple enough that we can grow it honestly over time. With those goals in mind, we will usually be doing right by our customers and ourselves. We will grow wiser over time as to what is simplest in our problem domain, in our programming milieu, and for us as developers. As a result, we ought to be able to give even better answers in the future.
My recent article on Alan Kay's thesis incidentally intersected with one of those blog themes (er, memes) that make the rounds. Kay brought out two essential concepts of computing: syntax and abstraction. Abstraction and the distinction between syntax and semantics are certainly two of the most important concepts in computing.
Charles Miller takes a shot at the identifying The Two Things about Computer Programming:
That first one, decomposition, is closely related to abstraction.
When I followed the link to the source of The Two Things phenomenon, I found that my favorites were not about computers or science but from the humanities, history to be precise. These are attributed to Jonathan Dresner. Here are Dresner's The Two Things about History:
Excellent! Of course, these apply to the empirical side of science, too, and even to the empirical side of understanding large software systems. Consider #1. That Big Ball of Mud we are stuck with has antecedents, and understanding the forces that lead to such systems is important both if we want to understand the architectures of real systems and if we seek a better way to design. All patterns we notice have their antecedents, and we need to understand them. As for #2, if we changed 'sources' to 'source', most programmers would nod knowingly. Source code often lies -- hides its true intentions, masks the program's larger structure, misleads us with unnecessary complexity or embellishment. Even when we do our best to make it speak truth, code can sometimes lie.
As a CS instructor, I also liked his The Two Things about Teaching History:
This pair really nails what it's like to teach in any academic discipline. I've already written about the first in All About Stories. As to the second, helping students make the transition from answers to questions -- not turning away from seeking answers, but turning one's focus to asking questions -- is one of the goals of education. By the time students reach the university these days, the challenge seems to have grown, because they have grown up in a system that focuses on answers, implicitly even when not explicitly.
I'm not sure any of the entries on computing at The Two Things site nails our discipline as well as the two things about history above. It seems like a fun little exercise to keep thinking on what I'd say if asked the question...
I don't run into Basic and Cobol all that often these days, but lately they seem to pop up all over. Once recently I even ran into them together in an article by Tim Bray on trends in programming language publishing:
Are there any here that might go away? The only one that feels threatened at all is VB, wounded perhaps fatally in the ungraceful transition to .NET. I suppose it's unlikely that many people would pick VB for significant new applications. Perhaps it's the closest to being this millennium's COBOL; still being used a whole lot, but not creatively.
Those are harsh words, but I suppose it's true that Cobol is no longer used "creatively". But we still see strong demand for Cobol instruction from industry, both from companies that typically recruit our students and from companies in the larger region -- Minneapolis, Kansas City, etc. -- that have learned that we have a Cobol course on the books. Even with industry involvement, there is effectively no student demand for the course. Whether VB is traveling the same path, I don't know. Right now, there is still decent demand for VB from both students and industry.
Yesterday, I ran into both languages again, in a cool way... A reader and former student pointed out that I had "hit the big leagues" when my recent post on Alan Kay started scoring points at programming.reddit.com. When I went there for a vanity stroke, I ran into something even better, a Sudoku solver written in Cobol! Programmers are a rare and wonderful breed. Thanks to Bill Price for sharing it with us. 
While looking for a Cobol compiler for my Intel Mac, I ran instead into Chipmunk Basic, "an old-fashioned Basic interpreter" for Mac OS. This brings back great memories, especially in light of my upcoming 25th high school reunion. (I learned Basic as a junior, in the fall of 1980.) Chipmunk Basic doesn't seem to handle my old graphics-enabled programs, but it runs most of the programs my students wrote back in the early 1990s. Nice.
I've been considering a Basic-like language as a possible source language for my compiler students this fall. I first began having such thoughts when I read a special section on lightweight languages in a 2005 issue of Dr. Dobbs' Journal and found Tom Pitman's article The Return of Tiny Basic. Basic has certain limitations for teaching compilers, but it would be simple enough to tackle in full within a semester. It might also be nice for historical reasons, to expose today's students to something that opened the door to so many CS students for so many years.
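To suggest why a Basic-like language fits within a semester, here is a minimal sketch (my own, not from Pitman's article) of the expression core of such a language: a tokenizer and a recursive-descent evaluator. A compilers course would grow this into statements, line numbers, and code generation:

```python
import re

# Minimal sketch: tokenize and evaluate Tiny-Basic-style integer
# expressions like "2 + 3 * (10 - 4)". Hypothetical illustration.

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(src):
    tokens = []
    for number, op in TOKEN.findall(src):
        tokens.append(int(number) if number else op)
    return tokens

def evaluate(src):
    tokens = tokenize(src)
    pos = [0]

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def advance():
        pos[0] += 1

    def expression():          # expression := term (("+"|"-") term)*
        value = term()
        while peek() in ("+", "-"):
            op = peek(); advance()
            value = value + term() if op == "+" else value - term()
        return value

    def term():                # term := factor (("*"|"/") factor)*
        value = factor()
        while peek() in ("*", "/"):
            op = peek(); advance()
            value = value * factor() if op == "*" else value // factor()
        return value

    def factor():              # factor := number | "(" expression ")"
        tok = peek()
        if tok == "(":
            advance()
            value = expression()
            advance()          # consume ")"
            return value
        advance()
        return tok             # an integer token

    return expression()

print(evaluate("2 + 3 * (10 - 4)"))  # 20
```

The whole grammar of a Tiny Basic is only a page or so longer than this, which is exactly what makes it tractable for students to implement end to end.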
I spent a few minutes poking around Mr. Price's website. In some sort of cosmic coincidence, it seems that Mr. Price took his undergraduate degree at the university where I teach (he's an Iowa native) and is an avid chessplayer -- not to mention a computer programmer! That's a lot of intersection with my life.
 I couldn't find a binary for a Mac OS X Cobol, only sources for OpenCOBOL. Building this requires building some extension packages that don't compile without a bunch of tinkering, and I ran out of time. If anyone knows of a decent binary package somewhere, please drop me a line.
Philip Windley recently wrote about how he observed a queue in action at a local sandwich shop. Then he steps back to note:
The world is full of these kinds of patterns. There's a great write-up of why Starbucks doesn't use a two-phase commit. The fact that these kinds of process issues occur in everyday life would lead the cynic to say that there is nothing new in Computer Science -- people have always known these things.
But there's a big difference between someone figuring out to put a queue in between their order taking station and their sandwich making station and understanding why, when, and how it works in enough detail that the technique can be analyzed and applied generally.
These observations tell us that Windley is a real computer scientist. They also lead me to think that he is probably an effective teacher of computer science, observing algorithms and representation in the world and relating them to concepts in the discipline.
To say that because "these kinds of process issues occur in everyday life" there is nothing new in Computer Science would be like saying that because mass and light and energy are everywhere there is nothing new in Physics. It is the purpose of our discipline to recognize these patterns and tell their story -- and to put them into our service in systems we build.
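The sandwich shop maps directly onto a producer/consumer queue. This toy sketch (the names are mine, not Windley's) shows the general technique: the order taker enqueues work and moves on, while the sandwich maker drains the queue at its own pace, so neither blocks the other:

```python
import queue
import threading

# Toy producer/consumer: the order taker enqueues orders and
# immediately serves the next customer; the sandwich maker works
# through the queue independently. Hypothetical illustration.

orders = queue.Queue()
made = []

def take_orders(customers):
    for name in customers:
        orders.put(name)        # take the order, move on at once
    orders.put(None)            # sentinel: no more orders today

def make_sandwiches():
    while True:
        order = orders.get()    # blocks until an order arrives
        if order is None:
            break
        made.append(f"sandwich for {order}")

maker = threading.Thread(target=make_sandwiches)
maker.start()
take_orders(["Ann", "Ben", "Cal"])
maker.join()
print(made)  # ['sandwich for Ann', 'sandwich for Ben', 'sandwich for Cal']
```

That is Windley's point in miniature: anyone can put a counter between two stations, but knowing the queue as a general, analyzable technique is what lets us apply it everywhere from sandwich shops to message brokers.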
Windley's comment on big ideas from computing showing up in the world came to mind when I was thinking about Alan Kay's thesis, in particular his relating syntax and abstraction back to pre-literate man's recognition of fruitful patterns in their use of grunts to make a point. These big ideas -- the distinction between form and meaning; abstraction; the interplay between data and process, ... -- these are not "big ideas in computing". They are big ideas. These ideas are central to how we understand the universe, how it is and how it works. This is why we need computing for more than the construction of the next enterprise architecture-cum-web framework. It's why computer science is an essential discipline.
Without having comments enabled on my blog, I miss out on most of the feedback that readers might like to give. It seems like a bigger deal to send an e-mail message with comments. Fortunately for me, a few readers go out of their way to send me comments. Unfortunately for the rest of my readers, those comments don't make it back into the blog the way on-line comments do, and so we all miss out on the sort of conversation that a blog can generate. It's time to upgrade my blogging software, I think...
Contrary to Weinberg, I use the exact opposite evaluation of a critic's comments: I assume that anybody, however naive and unschooled, has a valid opinion. No matter what they say, how outrageous, how seemingly ill-founded, someone thought it true, and therefore it is my job to examine it from every presupposition, to discover how to improve the <whatever it is>. I couldn't imagine reducing valid criticism to only those who have what I choose to call "credentials". Just among other things, the <whatever it is> improves a lot faster using my test for validity.
This raises an important point. I suspect that Weinberg developed his advice while thinking about one's inner critic, the four-year-old inside our heads. When he expressed it as applying to outer critics, he may well still have been in the mode of protecting the writer from premature censorship. But that's not what he said.
I agree with Alistair's idea that we should be open to learning from everyone, which was part of the reason I suggested that students not use this as an opportunity to dismiss critique from professors. When students are receiving more criticism than they are used to, it's too easy to fall into the trap of blaming the messenger rather than considering how to improve. I think that most of us, in most situations, are much better served by adopting the stance, "What can I learn from this?" Alistair said it better.
But in the border cases I think that Alistair's position places a heavy and probably unreasonable burden on the writer: "... my job to examine it from every presupposition, to discover how to improve the <whatever it is>." That is a big order. Some criticism is ill-founded, or given with ill will. When it is, the writer is better off to turn her attention to more constructive pursuits. The goal is to make the work better and to become a better writer. Critics who don't start in good faith or who lie too far from the target audience in level of understanding may not be able to help much.
Last week I somehow came across a pointer to Matthias Müller-Prove's excerpt of "The Reactive Engine", Alan Kay's 1969 Ph.D. thesis at the University of Utah. What a neat find! The page lists the full table of contents and then gives the abstract and a few short passages from the first section on FLEX, his hardware-interpreted interactive language that foreshadows Smalltalk.
Those of you who have read here for a while know that I am a big fan of Kay's work and often cite his ideas about programming as a medium for expressing and creating thought. One of my most popular entries is a summary of his Turing Award talks at the 2004 OOPSLA Educators' Symposium. It is neat to see the roots of his recent work in a thesis he wrote nearly forty years ago, a work whose ambition, breadth, and depth seem shocking in a day when advances in computing tend toward the narrow and the technical. Even then, though, he observed this phenomenon: "Machines which do one thing only are boring, yet exert a terrible fascination." His goal was to construct a system that would serve as a medium for expression, not just be a special-purpose calculator of sorts.
The excerpt that jarred me most when I read it was this statement of the basic principles of his thesis:
Probably the two greatest discoveries to be made [by pre-literate man] were the importance of position in a series of grunts, and that one grunt could abbreviate a series of grunts. Those two principles, called syntax (or form) and abstraction, are the essence of this work.
In this passage Kay ties the essential nature of computing back to its source in man's discovery of language.
In these short excerpts, one sees the Alan Kay whose manner of talking about computing is delightfully his own. For example, on the need for structure in language:
The initial delights of the strongly interactive languages JOSS and CAL have hidden edges to them. Any problem not involving reasonably simple arithmetic calculations quickly developed unlimited amounts of "hair".
I think we all know just what he means by this colorful phrase! Or consider his comments on LISP and TRAC. These languages were notable exceptions to the sort of interactive language in existence at the time, which left the user fundamentally outside of the models they expressed. LISP and TRAC were "'homoiconic', in that their internal and external representations are essentially the same". (*) Yet these languages were not sufficient for Kay's goal of making programming acceptable to any person interested in entering a dialog with his system:
Their only great drawback is that programs written in them look like King Burniburiach's letter to the Sumerians done in Babylonian cun[e]iform!
Near the end of his introduction to FLEX Kay describes the goals for the work documented in the thesis (bolded text is my emphasis):
The summer of 1969 sees another FLEX (and another machine). The goals have not changed. The desire is still to design an interactive tool which can aid in the visualization and realization of provocative notions. It must be simple enough so that one does not have to become a systems programmer (one who understands the arcane rites) to use it. ... It must do more than just be able to realize computable functions; it has to be able to form the abstractions in which the user deals.
The "visualization and realization of provocative notions"... not just by those of us who have been admitted to the guild of programmers, but everyone. That is the ultimate promise -- and responsibility -- of computing.
Kay reported then that "These goals have not been reached." Sadly, forty years later, we still haven't reached them, though he and his team continue to work in this vein. His lament back in 2004 was that too few of us had joined in the search, choosing instead to focus on what will in a few decades -- or maybe even five years -- be forgotten as minutiae. Even folks who thought they were on the path had succumbed to locking in to a vision of Smalltalk that is now over twenty-five years old, and which Kay himself knows to be just a stepping stone early on the journey.
In some ways, this web page is only a tease. I really should obtain a copy of his full thesis and read it. But Matthias has done a nice job pulling out some of the highlights of the thesis and giving us a glimpse of what Alan Kay was thinking back before our computers could implement even a small bit of his vision. Reading the excerpt was at once a history lesson and a motivating experience.
(*) Ah, that's where I ran across the link to this thesis, in a mailing-list message that used the term "homoiconic" and linked to the excerpt.
I felt my agile tendencies come through yesterday.
First of all, let me say that by and large I am a rule follower. If a rule exists, I like for it to be applied, and applied consistently. Because I tend to follow rules without much question, even when the rule is inconvenient, it often disturbs me to learn that someone else has gotten out of the rule, either because they asked for an exception or because the relevant authority chose not to enforce the rule when it was skirted. This tendency is almost certainly a direct result of my upbringing. That said, I also know that some rules simply get in the way of getting things done.
Now for a story. Several years ago, we had a particular student in our undergraduate program. His work habits weren't very good, and his grades showed that. Before finishing his coursework, he left school and got a job. Fast forward a few years. Perhaps the student has grown up, but whatever the reason he is ready to finish his degree. He comes back to school part time and does well. He completes the coursework for his degree but comes up a bit short of the graduation requirements, due to the grades he earned during his first stint as a student. (The rule in question is rather picayune, but, hey, it was written by faculty.)
The student asks if the department will consider waiving the rule in question. His request doesn't seem like some students' requests, which are often demands masquerading as questions, or which presume that the department really should say yes. This request seems sincere and not to presume that the department owes him anything. If we say no, he will re-take another course in the fall and try to satisfy the requirement. However, a waiver would enable him to move on professionally with a sense of closure.
Now, I am a rule follower, but... A colleague expressed well how I felt about this case: There is a reason that we have this rule, but this case isn't that reason.
Part of my job is to hear and decide on such requests, but I prefer not to make such decisions without input from the whole faculty. So I make it a practice to poll faculty whenever new requests come in. The purpose isn't to take a vote that decides for me but rather to get a sense of what the group thinks. I get to hear pros and cons, maybe learn to think about the request in a way I hadn't considered before. I still have to make the decision, but I do so in a state of greater knowledge -- and a state of transparency. I'm also willing to bow to a consensus that differs from my instinct, unless there is a really good reason not to.
Faculty responses rolled in, without a consensus. More agreed with the idea of granting the request than disagreed, and a few offered suggestions for resolving the issue in another way. One person suggested that we should not waive the requirement without having a much more detailed policy in place. The more detailed policy would address the many dimensions of the case. His proposal was quite complete and well thought out. But it seemed like overkill.
This is a one-off case. We haven't had a case like this in my memory, and I don't expect that we'll have many like it in the future. Perhaps students don't ask for this kind of waiver because they don't expect it to be granted, but I think it's simpler than that: there aren't many students in this situation. It seems unnecessary and perhaps even detrimental for us to specify rules that govern all possible combinations of features when we don't know the specifics of future cases yet -- and may never face them at all. In the realm of policy, this feels like a prototypical case of YAGNI. If we see a second case (soon), we will at least have reason to believe that the effort of designing a detailed exception policy will be worth it for both faculty and students. There is one other fact that makes this increasingly unlikely as time passes: the particular set of requirements under discussion is no longer in effect, and applies only to students who began their CS majors under an older catalog that has been superseded. In any case, I'd like to give us the opportunity to learn from the next request before we try to get a detailed policy just right.
I do like to create policy where it is useful, such as for recurring decisions and for decisions that involve well-understood criteria. An example of this sort of policy is a set of guidelines for awarding a scholarship each year. I also think policy helps when it eliminates ambiguity that makes people's lives harder. An example here is a set of guidelines for faculty applying for and being awarded a course release for scholarly work. Without such a policy, the process looks like a free-for-all and is prone to unfairness, or the appearance of the same; the result would be an inefficient market of projects that would hurt the faculty's collective work.
Otherwise, I think that policy works best when it reflects practice, rather than prescribing practice. This reminds me of a discussion that America's founding fathers had in creating the U.S. Constitution, which I mentioned in an earlier entry on Ken Alder's The Measure of All Things.
Ultimately, I trust my judgment and the judgment of my colleagues. In situations where I don't think we know enough to define good rules, I'd rather encourage a conversation that helps us reach a reasonable decision in light of current facts. When the conversation turns from giving value to wasting time, then a policy is in order. To me that is better than forecasting a policy for experiences we've not had and then facing the prospect of tinkering at its edges to get it right later. That creates just the sort of uncertainty, both in the folks applying the policy and in the folks to whom the policy applies, that a good policy should eliminate.
Trusting judgment means trusting people. I'm comfortable with that, even as a law-and-order kind of guy.
Last night my wife and I went to see a presentation by Ken Blanchard, author of many bestselling books on leadership, starting with The One Minute Manager. I previously blogged on his Leadership and the One-Minute Manager, and I very much enjoyed his Raving Fans. His talk was as good as advertised, full of great stories that made memorable a focused message: if you wish to excel, people and energy matter.
If you have a chance to hear Blanchard speak, I encourage you to do so, with one small caveat. If you are averse to spiritual talk of any sort or prefer your professional talks not to mention God even in passing, then you may be turned off. I thought he did a pretty good job keeping faith out of the talk and focusing on how to lead and why. Certainly the books of his on management and leadership that I've read stay on topic, though I can't speak to his latest book. If you can look past an occasional bit of spiritual talk, then Blanchard can probably psych you up to be a better leader. And you are a leader if you see any part of your life to be about influencing others, whether you have an organizational role as leader or not. Teachers fit the definition, and I think that agile software developers do, too.
I won't try to give a full accounting of the talk. You would be better served by reading Blanchard's books, which tend to be slim, fast reads. Here are a few takeaway points that seem applicable to my roles as software developer, teacher, and head of a university CS department:
Right now my thoughts are on moments of truth. What are the moments of truth that my students experience? My faculty? My external customers?
In the current issue of the Chronicle of Higher Education, the article You Will be Tested on This (behind a paywall) tells us something researchers have known for decades but which too few teachers act on: People learn better when they are required to actively recall and use knowledge soon after they learn it.
This idea was first documented by Herbert Spitzer, who did a study in the late 1930s with Iowa sixth-graders. Students who were quizzed about a reading assignment within twenty-four hours of their first reading the article scored much better than students who had been quizzed later or not at all. The results did not follow from different study habits or from extra preparation, as "students did not know when they would be quizzed, and they did not keep the article, so they had no chance to study on their own."
This has come to be called the Testing Effect. Spitzer concluded:
"Immediate recall in the form of a test is an effective method of aiding the retention of learning and should, therefore, be employed more frequently in the elementary school."
As the Chronicle piece points out, the Testing Effect runs counter to conventional wisdom:
"The testing effect cuts against the lay understanding of memory," says Jeffrey D. Karpicke, who recently completed a doctorate at Washington University and will become an assistant professor of psychology at Purdue University this fall. "People usually imagine memory as a storage space, as a space where we put things, as if they were books in a library. But the act of retrieval is not neutral. It affects the system."
This is another case where we rely on a metaphor beyond its range of applicability. Knowing where it fails and why can help us do our job better. In the case of human memory, instructors can help students improve their learning simply by giving a quiz promptly after teaching a new idea. Giving feedback promptly is even better, because it allows students to correct misconceptions before they become too firmly implanted.
Note that the Testing Effect does not get its benefit from getting students to do more or different studying:
The purpose of this quizzing is not to motivate students to pay attention and to study more; if those things happen, the researchers say, they are nice side effects. The real point is that quizzing, if done correctly, is a uniquely powerful method for implanting facts in students' memory.
The value of prompt quizzing isn't from students studying for the quiz. It is from the act of taking the quiz itself, making an effort to retrieve items from memory. As a psychology professor from Washington University in St. Louis is quoted in the article, "every time you test someone, you change what they know."
There are a lot of open questions about how the Testing Effect works and the conditions under which it is maximized, such as the role of feedback, immediate or otherwise. One of the major objections raised by some university professors I know is that such frequent, short-answer testing favors the memorization of isolated facts at the expense of broader conceptual learning. Current research is trying to answer some of these questions.
Many professors also balk at the idea of writing and grading all these quizzes. There are technological solutions to part of this problem. Many folks use Blackboard to give and grade simple quizzes. For writing code, we might try something like Nick Parlante's JavaBat.com tool. Because the Testing Effect does not depend on motivating students to study more, I don't think that grading the quizzes is all that important. The key is simply to get the students to do active recall and retrieval.
My teaching may already benefit from the Testing Effect. I do not give quizzes, but I do begin nearly every class session with an Opening Exercise that asks students to use some ideas we learned the previous session to solve a problem. In courses that teach programming, these exercises almost always involve writing code. In an algorithms or compiler course, the exercise might be a more general problem to solve. But in all cases the exercises require students to produce something, not select true/false or a multiple-choice answer. After students have had time to work on the problem, we debrief answers and discuss possibilities. This is the sort of immediate feedback that seems valuable to learners -- and which I have a hard time providing when I have to grade items. Later in the period, I may ask students to solve another exercise as well. Doing things always seems like a better idea than listening to me yammer on for 75 uninterrupted minutes.
I have only one concern that my approach may not deliver the Testing Effect. Because I don't grade the quiz, I fear that some students choose not to exert much effort -- and effort is the key! I'm not yet concerned enough to collect and grade the exercises myself, but maybe I should be. One thing that doesn't concern me is memorization of isolated facts at the expense of broader conceptual learning. My exercises ask students to use the knowledge, not parrot it back, and good exercises cause students to integrate new knowledge into their larger understanding.
As summer begins for me, I get to think more about programming. For me, that will be Ruby and my compilers over the next few months.
From Ruby vs. Java Myth #3:
In what serious discipline is "It's too hard" a legitimate excuse? I have never seen a bank that eschews multiplication: "We use repeated addition here--multiplication was too hard for our junior staffers." And I would be uncomfortable if my surgeon said, "I refuse to perform procedures developed in the last 10 years--it is just too hard for me to learn new techniques."
Priceless. This retort applies to many of our great high-level languages, such as Scheme or Haskell, as anyone who has taught these languages will attest.
The problem we in software have is this conundrum: The level of hardness -- usually, abstraction -- we find in some programming languages narrows our target population much more than the level of hardness that we find in multiplication. At the same time, our demand for software developers far outstrips our demand for surgeons. Finding ways to counteract these competing forces is a major challenge for the software industry and for computing programs.
For what it's worth, I strongly second Stuart's comments in Ruby vs. Java Myth #1, on big and small projects. This is a case where conventional wisdom gets things backwards, at a great cost to many teams.
A Programmer's Programmer
I recently ran across a link to this interview with Don Knuth from last year. It's worth a read. You gotta love Knuth as much as you respect his work:
In retirement, he still writes several programs a week.
Programmers love to program and just have to do it. But even with 40+ years of experience, Knuth admits a weakness:
"If I had been good at making estimates of how long something was going to take, I never would have started."
If you've studied AI or search algorithms, you know from A* that underestimates are better than overestimates, for almost exactly the reason that they helped Knuth. There are computational reasons this is true for A*, but with people it is mostly a matter of psychology -- humans are more likely to begin a big job if they start with a cocky underestimate. "Sure, no problem!"
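For readers who haven't seen A* in a while, here is a minimal sketch of the idea. The graph, costs, and heuristic below are hypothetical, made up just for illustration: the point is that the heuristic never overestimates the true remaining cost (it is "admissible"), which is what guarantees A* finds the cheapest path.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search over graph = {node: [(neighbor, edge_cost), ...]}.
    h(node) estimates the remaining cost to goal. Returns the cost
    of the best path found, or None if goal is unreachable."""
    frontier = [(h(start), 0, start)]   # entries are (f = g + h, g, node)
    best_g = {start: 0}                 # cheapest known cost to each node
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for neighbor, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = g2
                heapq.heappush(frontier, (g2 + h(neighbor), g2, neighbor))
    return None

# A toy graph: best path A -> B -> C -> D costs 1 + 1 + 2 = 4.
graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 1), ('D', 5)],
    'C': [('D', 2)],
    'D': [],
}

# An admissible heuristic: always at or below the true remaining cost.
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}

print(a_star(graph, h.get, 'A', 'D'))   # prints 4
```

If you replaced those heuristic values with overestimates, A* could pop the goal off the frontier before exploring the genuinely cheaper route -- the algorithmic analogue of the job that never gets started because the estimate looked too big.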
If you are an agile developer, Knuth's admission should help you feel free not to be perfect with your estimates; even the best programmers are often wrong. But do stay agile and work on small, frequent releases... The agile approach requires short-term estimates, which can be only so far off and which allow you to learn about your current project more frequently. I do not recommend underestimates as drastic as the ones Knuth made on his typesetting project (which ballooned to ten years) or his Art of Computer Programming series (at nearly forty years and counting!). A great one like Knuth may be creating value all the long while, but I don't trust myself to be correspondingly productive for my clients.
[ UPDATE: I have corrected the quote of Alistair Cockburn that leads below. I'm sure Kent gave the right quote in his talk, and my notes were in error. The correct quote makes more sense in context. Thanks, Alistair. ]
Back in March 2006, I posted notes on an OOPSLA 2003 invited talk by David Ungar. While plowing through some old files last week, I found notes from another old OOPSLA invited talk, this from 2002: Kent Beck's "The Metaphor Metaphor". I've always appreciated Kent's person-centered view of software development, and I remember enjoying this talk. These notes are really a collection of snippets that deal with how language matters in how we think about our projects.
November 6, 2002
"Embellishment is the pitfall of the methodologist." (Alistair Cockburn)
You gain experience. You are asked for advice. You give advice. They ask for more. Eventually, you reach the end of your experience. You run out of advice to give. But you don't run out of people asking you for advice. So, you reach...
Stupid ideas are important. How else will you know that the clever ideas are clever? Don't be afraid of stupid ideas.
A trope is an expression whose meaning is not intended to be derived from the literal interpretation of its words. There are many kinds of trope:
Think about how much of our communication is tropic. Is this a sign that our words and tools are insufficient for communication, or a sign that communication is really hard? (Kent thinks both.)
A key to the value of metaphor is the play between is and is not. How a metaphor holds and how it doesn't both tell us something valuable.
Metaphors run deep in computing. An example: "This is a memory cell containing a 1 or a 0." All four underlined phrases are metaphorical!
Kent's college roommate used to say, "Everything is an interpreter."
Some metaphors mislead. "war on terrorism" is a bad metaphor. "war on disease (e.g., cancer)" is a bad metaphor. Perhaps "terrorism is a disease" is a better metaphor!?
Lakoff's Grounding Hypothesis states: All metaphors ground in physical reality and experience. [Kent gave an example using arithmetic and number lines, relating to an experiment with children, but my notes are incomplete.]
We made Hot Draw "before there were computers". This meant that doing graphics "took forever". Boy was that fun! One cool thing about graphics programming: your mistakes look so interesting!
Hot Draw's metaphors: DRAWING +
A lot of good design is waiting productively.
Regarding this quote, Kent told a story about duplicating code -- copy-and-paste with changes to two lines -- and later removing the duplication. That's completely different from copying and pasting code with changes to two lines and not removing it. [This is, I think, a nod to the old AI koan (listed first here) about toggling the on/off switch of a hung computer to make it work...]
Kent's final recommendations:
[end of excerpt]
That last recommendation reflects a truth that people often forget: Well-rounded people bring all sorts of positives, obvious and less so, to programming. And I love the quote about design as "productive waiting".
As with any of my conference reports, the ideas presented belong to Kent unless stated otherwise, but any mistakes are mine. With a five-year-old memory of the talk, mistakes in the details are probably unavoidable...
My entry formatting text for readability elicited some interesting responses. A couple of folks pointed to Sun's language in development, Fortress, which is something of an example going in the other direction: it is a programming language that will be presentable in multiple forms, including a more human mathematics display. Indeed, Fortress code uses a notation that mathematicians will find familiar.
I especially enjoyed a message from Zach Beane, who recently read William Manchester's biography of Winston Churchill. Churchill wrote the notes for his speeches in a non-standard, structured form. While syntactic structure may not have been his primary mechanism, he did use it to make his text easier to scan during delivery. Zach offered a few examples from the Library of Congress's on-line exhibit Churchill and the Great Republic, including Churchill's Speech to the Virginia General Assembly, March 8, 1946. My favorite example is this page of speaking notes for Churchill's radio broadcast to the United States, on October 16, 1938:
Thanks to Zach and all who responded with pointers!