April 28, 2007 12:55 PM

Open Mind, Closed Mind

I observed an interesting phenomenon working in a group this morning.

Another professor, an undergrad student, and I are at Duke University this weekend for a workshop on peer-led team learning (PLTL) in CS courses. This is an idea borrowed from chemistry educators that aims to improve recruitment and retention, especially among underrepresented populations. One of our first activities was to break off into groups of 5-8 faculty and do a sample PLTL class session, led by an experienced undergrad peer leader from one of the participating institutions. My group's leader was an impressive young woman from Duke who is headed to Stanford this fall for graduate work in biomedical informatics.

One of the exercises our group did involved Sudoku. First, we worked on a puzzle individually, and then we came back together to work as a group. I finished within a few minutes, before the leader called time, while no one else had filled in much of the grid yet.

Our leader asked each of us to describe how we had solved the puzzle, with an eye toward writing an algorithm as a group. Most folks described elements of the relatively naive combinatorial approach of immediately examining constraints on individual squares. When my turn came, I described my usual approach, which starts with a preprocessing of sorts that "cherry picks" obvious slots by scanning groups of rows and columns. Only later do I move on to constraints on individual squares and groups of squares.
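
To make the contrast concrete, here is a minimal sketch in Python -- my own illustration after the fact, not anything the group wrote. candidates() is the naive square-by-square view; cherry_pick() is one pass of the scanning preprocessing, filling any digit that has exactly one legal home within a 3x3 box:

    # Grid: a 9x9 list of lists, with 0 marking an empty square.

    def candidates(grid, r, c):
        # The naive view: which digits could legally fill square (r, c)?
        used = set(grid[r])                                    # row
        used |= {grid[i][c] for i in range(9)}                 # column
        br, bc = 3 * (r // 3), 3 * (c // 3)
        used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
        return set(range(1, 10)) - used

    def cherry_pick(grid):
        # One scanning pass: for each box and each missing digit, fill
        # the square if the digit has exactly one legal home in the box.
        filled = False
        for br in range(0, 9, 3):
            for bc in range(0, 9, 3):
                box = [(br + i, bc + j) for i in range(3) for j in range(3)]
                present = {grid[r][c] for r, c in box}
                for d in set(range(1, 10)) - present:
                    spots = [(r, c) for r, c in box
                             if grid[r][c] == 0 and d in candidates(grid, r, c)]
                    if len(spots) == 1:
                        r, c = spots[0]
                        grid[r][c] = d
                        filled = True
        return filled

Calling cherry_pick() repeatedly until it makes no progress knocks out the easy squares quickly; only then does the square-by-square search have any real work left to do.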

I was surprised, because no one seemed to care. They seemed happy enough with the naive approach, despite the fact that it hadn't served them all that well while solving the puzzle earlier. Maybe they dismissed my finishing quickly as an outlier, perhaps the product of a Sudoku savant. But I'm no Sudoku savant; I simply have had a lot of practice and have developed one reasonably efficient approach.

The group didn't seem interested in a more efficient approach, because they already knew how to solve the problem. My approach didn't match their own experiences, or their theoretical understanding of the problem. They were comfortable with their own understanding.

(To be honest, I think that most of them figured they just needed to "go faster" in order to get done faster. If you know your algorithms, you know that going faster doesn't help at all with many, many algorithms! Brute-force search over a grid with fifty open squares faces something like 9^50 candidate fillings; doubling our speed barely makes a dent. We still wouldn't get done.)

Dr. Phil -- How's that workin' for ya?

After making this observation, I also had a realization. In other situations, I behave just like this. Sometimes, I have an idea in mind, one I like and am comfortable with, and when confronted with something that might be better, I am likely to dismiss it. Hey, I just need to tweak what I already know. Right. I imagine Dr. Phil asking in his Texas drawl, "How's that workin' for ya?" Not so well, but with a little more time...

When I want to learn, entering a situation with a closed mind is counterproductive. This is, of course, true when I walk into the room saying, "I don't want to learn anything new." But it is just as important, and far more dangerous, when I think I want to learn but am holding tightly to my preconceptions and idiosyncratic experiences. In that case, I expect that I will learn, but really all I can do is rearrange what I already know. And I may end up disappointed when I don't make a big leap in knowledge or performance.

One of the PLTL undergrad leaders working with us gets it. He says that one of the greatest benefits of being a peer leader is interacting with the students in his groups. He has learned different ways to approach many specific problems, and different high-level approaches to solving problems more generally. And he is the group leader.

Later we had fun with a problem on text compression, using Huffman coding as our ultimate solution. I came up with an encoding targeted to a particular string, which used 53 bits instead of the 128 bits of a standard ASCII encoding. No way a Huffman code can beat that. Part way through my work on my Huffman tree, I was even surer. The end result? 52 bits. It seems my problem-solving ego can be bigger than warranted, too. Open mind, open mind.
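
For the curious, here is how one might check such a claim in code -- a sketch of mine, not anything from the workshop, and the sample string below is hypothetical. It builds the Huffman tree with a heap, tracking only codeword lengths, and compares the total against 8 bits per character:

    import heapq
    from collections import Counter

    def huffman_lengths(text):
        # Heap entries: (weight, tiebreaker, {char: depth so far}).
        freqs = Counter(text).items()
        heap = [(w, i, {ch: 0}) for i, (ch, w) in enumerate(freqs)]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            w1, _, d1 = heapq.heappop(heap)
            w2, _, d2 = heapq.heappop(heap)
            # Merging two subtrees pushes every leaf one level deeper.
            merged = {ch: depth + 1 for ch, depth in {**d1, **d2}.items()}
            heapq.heappush(heap, (w1 + w2, tie, merged))
            tie += 1
        return heap[0][2]

    text = "example message!"    # any 16-character string: 128 bits in ASCII
    lengths = huffman_lengths(text)
    print(sum(lengths[ch] for ch in text), "bits under the Huffman code")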


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 27, 2007 6:04 PM

Welcome to a New Century

While at Iowa State to hear Donald Norman speak at the HCI forum a couple of days ago, I spent a few minutes wandering past faculty offices. I love to read what faculty post on and around their doors -- cartoons, quotes, articles, flyers, posters, you name it. Arts and humanities offices are often more interesting than science faculty offices, at least in a lateral-thinking way, but I enjoy them all.

At ISU, one relatively new assistant prof had posted the student evaluations from his developmental robotics course. Most were quite positive and so made for good PR in attracting students, but he posted even the suggestions for improvement.

My favorite student quote?

It's nice to take a CS course that wasn't designed in the '70s.

Spot on. I wonder just how archaic most computer science courses must seem to students who were born in the late 1980s. Gotta teach those fundamentals!


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 26, 2007 7:05 PM

Don Norman on Cantankerous Cars

Yesterday afternoon I played professional hooky and rode with another professor and a few students to Ames, Iowa, to attend the fourth HCI Forum, Designing Interaction 2007, sponsored by Iowa State's Virtual Reality Applications Center. This year, the forum kicks off a three-day Emerging Technologies Conference that features several big-name speakers and a lot of the HCI research at ISU.


I took an afternoon "off" to hear the keynote by Donald Norman, titled "Cautious Cars and Cantankerous Kitchens". It continues the story Norman began telling years ago, from his must-read The Design of Everyday Things to his in-progress The Design of Future Things.

"Let me start with a story." The story is about how a time when he is driving, feeling fine, and his wife feels unsafe. He tries to explain to her why everything is okay.

"New story." Same set-up, but now it's not his wife reacting to an unsafe feeling, but his car itself. He pays attention.

Why does he trust his car more than he trusts his wife? He thinks it's because, with his wife, conversation is possible. So he wants to talk. Besides, he feels in control. When conversation is not possible, and the power lies elsewhere, he acquiesces. But does he "trust"? In Norman's mind, I think the answer is 'no'.

Control is important, and not always in the way we think. Who has the most power in a negotiation? Often (Norman said always), it is the person with the least power. Don't send your CEO; send a line worker. Why? No matter how convincing the other side's arguments are, the weakest participant may well have to say, "Sorry, I have my orders." Or at least "I'll have to check with my boss".

It's common these days to speak of smart artifacts -- smart cars, houses, and so on. But the intelligence does not reside in the artifact. It resides in the head of the designer.

And when you use the artifact, the designer is not there with you. The designer would be able to handle unexpected events, even by tweaking the artifact, but the artifact itself can't.

"There are two things about unexpected events... They are unexpected. And they always happen."

Throughout his talk, Norman compared driving a car to riding a horse, driving a horse and carriage, and then to riding a bike. The key to how well these analogies work lies in the three different levels of engagement that a human has: visceral, behavioral, and reflective. Visceral is biological, hard-coded in our brains, and so largely common to all people. It recognizes safe and dangerous situations. Behavioral refers to skills and "compiled" knowledge, knowledge that feels like instinct because it is so ingrained. Reflective is just that, our ability to step outside of a situation and consider it rationally. There are times for reflective engagement, but hurtling around a mountain curve at breakneck speed is not one of them.

Norman suggested that a good way to think of designing intelligent systems is to think of a new kind of entity: (human + machine). The (car + driver) system provides all three levels of engagement, with the car providing the visceral intelligence and the human providing the behavioral and reflective intelligences. Cars can usually measure most of what makes our situations safe or dangerous better than we can, because our visceral intelligence evolved under very different circumstances than the ones we now live in. But the car cannot provide the other levels of intelligence, which we have evolved as much more general mechanisms.

Norman described several advances in automobile technology that are in the labs or even available on the road: cars with adaptive cruise control; a Lexus that brakes when its on-board camera senses that the driver isn't paying attention; a car that follows lanes automatically; a car that parks automatically, both parallel and head-in. Some of these sound like good ideas, but...

In Norman's old model of users and tasks, he spoke of the gulfs of evaluation and execution. In his thinking these days, he speaks of the knowledge gap between human and machine, especially as we more and more think of machines as intelligent.

The problem, in Norman's view, is that machines automate the easy parts of a task, and they fail us when things get hard and we most need them. He illustrated his idea with a slide titled "Good Morning, Silicon Valley" that read, in part, "... at the very moment you enter a high-speed crisis, when a little help might come in handy, the system says, 'Here, you take it.'"

Those of us who used to work on expert systems and later knowledge-based systems recognize this as the brittleness problem. Expert systems were expert in their narrow niche only. When a system reached the boundary of its knowledge, its performance went from expert to horrible immediately. This differed from human experts and even humans who were not experts, whose performances tended to degrade more gracefully.

My mind wandered during the next bit of the talk... Discussion included ad hoc networks of cars on the road, flocking behavior, cooperative behavior, and swarms of cars cooperatively drafting. Then he discussed a few examples of automation failures. The first few were real, but the last two were fiction -- but things he thinks may be coming, in one form or another:

  • I swipe my credit card to make a purchase at the store. The machine responds, "Transaction Refused. You Have Enough Shoes."
  • A news headline: "Motorist Trapped in Roundabout for 14 Hours". If you drive a car that follows lanes and overrules your attempts to change... (April Fool's!)

Norman then came to another topic familiar to anyone who has done AI research or thought about AI for very long. The real problem here is shared assumptions, what we sometimes now call "common ground". Common ground in human-to-human communication is remarkably good, at least when the people come from cultures that share something in common. Common ground in machine-to-machine communication is also good, sometimes great, because it is designed. Much of what we design follows a well-defined protocol that makes explicit the channel of communication. Some protocols even admit a certain amount of fuzziness and negotiation, again within some prescribed bounds.

But there is little common ground in communication between human and machine. Human knowledge is so much richer, deeper, and more interconnected than what we are yet able to provide our computer programs. So humans who wish to communicate with machines must follow rigid conventions, made explicit in language grammars, menu structures, and the like. And we aren't very good at following those kinds of rules.

Norman believes that the problem lies in the "middle ground". We design systems in which machines do most or a significant part of a task and in which humans handle the tough cases. This creates expectation and capability gaps. His solution: let the machine do all of a task -- or nothing. Anti-lock brakes were one of his examples. But what counts as a complete task? It seems to me that this solution is hard to implement in practice, because it's hard to draw a boundary around what is a "whole task".

Norman told a short story about visiting Delft, a city of many bicycles. As he and his guide were coming to the city square, which is filled with bicycles, many moving fast, his guide advised him, "Don't try to help them." By this, he meant not to slow down or speed up to avoid a bike, not to guess the cyclist's intention or ability. Just cross the street.

Isn't this dangerous? Not as dangerous as the alternative! The cyclist has already seen you and planned how to get through without injuring you or himself. If you do something unexpected, you are likely to cause an accident! Act in the standard way so that the cyclist can solve the problem. He will.

This story led into Norman's finale, in which he argued that automation should be:

  • predictable
  • self-explaining
  • optional
  • assistive

The Delft story illustrated that the less flexible, less powerful party should be the more predictable party in an interaction. Machines are still less flexible than humans and so should be as predictable as possible. The computer should act in the standard way so that the human user can solve the problem. She will.

Norman illustrated self-explaining with a personal performance of the beeping back-up alarm that most trucks have these days. Ever have anyone explain what the frequency of the beeps means? Ever read the manual? I don't think so.

The last item on the list -- assistive -- comes back to what Norman has been preaching forever and what many folks who see AI as impossible (or at least not far enough along) have also been saying for decades: Machines should be designed to assist humans in doing their jobs, not to do the job for them. If you believe that AI is possible, then someone has to do the research to bring it along. Norman probably disagrees that this will ever work, but he would at least say not to turn immature technology into commercial products and standards now. Wait until they are ready.

All's I know is... I could really have used a car that was smarter than its driver on Tuesday morning, when I forgot to raise my still-down garage door before putting the car into reverse! (Even 20+ years of habit sometimes fails, even under predictable conditions.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

April 25, 2007 8:14 AM

No More Complaints

... about being too busy to do my job well. Monday night, I attended my university's senior recognition banquet for intercollegiate athletes. One of the academic award winners is on the track and field team.

He is a math major. In his first semester as a freshman, he came in and took three junior/senior level math courses. In later semesters, he took as many as five and six math courses. Some were master's level courses, because there were not enough undergraduate courses to keep him busy.

Track and field is unusual among intercollegiate sports in having competitive seasons in both the fall and the spring. Yet in the spring of his junior year, this young man took 24 credit hours -- 8 courses, including 5 math courses. This spring, he is taking 24 credit hours. I forget how many math courses are in the mix, but the number has gone down; he has exhausted the department's undergraduate curriculum and taken most of its graduate courses.

His GPA is nearly 4.0.

Let's not forget that he is an athlete, a pole vaulter, and so has practice and training nearly every day. And he's not just a member of the practice squad, having fun but saving his energy for his schoolwork. He is a 5-time conference champion and a 4-time All-American.

Oh, and he is a pretty good programmer, too, who took several CS courses his freshman year. That year he was a member of our department's programming team, which placed in the regional competition. He toyed with double majoring in CS, but there are only so many hours in a day, you know.

This young man has been busy, but he has excelled both on the field and in the classroom -- and I do mean "excelled", not the watered-down sense of the word as we too often use it these days.

Actually, attending the athletes' recognition banquet would open the eyes of most university faculty, who have very little sense of just how impressive these young men and women are. In the news we mostly hear about athletic exploits or about misbehavior. You don't hear about the lady soccer player carrying a 3.9 GPA in biomedical science, or the wrestler who double majors in humanities and philosophy, or the women's tennis team made up of players from all across the globe, studying in a second language (some just learning English) and earning a team GPA of 3.59. These student-athletes are the typical case at my university, not the exception.

If you seek excellence, do your best to be among others who seek excellence. Don't limit yourself to one sort of person, especially to people who do what you do. Inspiration can come from people working in all arenas, and you may well learn something from someone who thinks about the world in a different way. And that includes young people, even pole vaulters.


Posted by Eugene Wallingford | Permalink | Categories: General

April 23, 2007 3:44 PM

Discipline and Experience

I don't have time to keep up with the XP mailing list these days, especially when it spins off into a deep conversation on a single topic. But while browsing quickly last week before doing an rm on my mailbox, I ran across a couple of posts that deserved some thought.

Discipline in Practice

In a thread that began life as "!PP == Hacking?", discussion turned to how much discipline various XP practices demand, especially pair programming and writing tests (first). Along the way, Ron Jeffries volunteered that he is not always able to maintain discipline without exception: "I am the least disciplined person you know ..."

Robert Biddle followed:

I was pleased to read this. I'm always puzzled when people talk about discipline as a good thing in itself. I would consider it a positive attribute of a process that it required less, rather than more, discipline.

One of the things I've learned over the years is that, while habit is a powerful mechanism, processes that require people to pay close attention to details and to do everything just so are almost always destined to fail for the majority of us. It doesn't matter if the process is a diet, a fitness regimen, or a software methodology. People eventually stop doing them. They may say that they are still following the process, but they aren't, and that may be worse than just stopping. Folks often start the new discipline with great energy, psyched to turn over a new leaf. But unless they can adapt their thinking and lifestyle, they eventually tire of having to follow the rules and backslide. Alistair Cockburn has written a good agile software book that starts with the premise that any process must build on the way that human minds really work.

Later in the thread, Robert notes that -- contrary to what many folks who haven't tried XP for very long think -- XP tolerates a lower level of discipline than other methodologies:

For example, as a developer, I like talking to the customer lots; and as a customer I like talking to developers. That takes way less discipline for me than working with complex specs.

My general point is that it makes sense for processes and tools to work with human behaviour, supporting and protecting us where we are weak, and empowering us where we are strong.

He also points out that the boundary between high discipline and low discipline probably varies from person to person. A methodology (or diet, or training regimen) capable of succeeding within a varied population must hit close to the population's typical threshold and be flexible enough that folks who lie away from that threshold have a chance to find a personal fit.

As a methodology, XP requires us to change how we think, but it places remarkably few picayune demands on our behavior. A supportive culture can go a long way toward helping well-intended newbies give it a fair shake. And I don't think it is as fragile in its demands as the Atkins Diet or many detailed running plans for beginners.

Accelerating Experience

In a different thread, folks were discussing how much experience one needs in order to evaluate a new methodology fairly, which turned to the bigger question of how much project experience one can realistically obtain. Projects that last 6 months or 2 years place something of an upper bound on our experience. I like Laurent Bossavit's response:

> It takes a lot of time to get experienced.
> How many software development projects can you
> experience in a life-time? How many can you
> experience with three years of work experience?


Quite a lot, provided they're small enough.

Short cycles let you make mistakes faster, more often. They let us succeed faster. They let us learn faster.

A post later in this thread by someone else pointed out that XP and other agile approaches change the definition of a software project. Rather than thinking in terms of a big many-month or multi-year project, we think in terms of 2- or 3-week releases. These releases embody a full cycle from requirements through delivery, which means that we might think of each release as a "whole" project. We can do short retrospectives, we can adapt our practices, we can shift direction in response to feedback.

Connections... This reminds me of an old post from Laurent's blog that I cited in an old post of my own. If you want to increase your success rate, double your failure rate; if you want to double your failure rate, all you have to do is halve your project length. Ideas keep swirling around.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 21, 2007 4:54 PM

Making Something Tangible

Recently I realized one thing that makes administrative work different from my previous kinds of work, something that accounts for an occasional dissatisfaction I never used to feel as a matter of course.

In my administrative role, I can often work long and hard without producing anything.

It's not that I don't do anything as department head. It's just that the work doesn't always result in a product, something tangible, something complete that one can look to and say, "I made that." Much of a head's work is about relationships and one-on-one interaction. These are valuable outcomes, and they may result in something tangible down the road. Meeting with students, parents of prospective students, industry partners, or prospective donors may result in something tangible -- eventually. And the payoff -- say, from a donor -- can be quite tangible, quite sizable! But in the meantime, I sometimes feel like, "What did I accomplish today?"

This realization struck me a week or so back when I finished producing the inaugural issue of my department's new newsletter. I wrote nearly all of the content, arranged the rest, and did all of the image preparation and document layout. When I got done, I felt that sense one gets from making something.

I get that feeling when I write software. I think that one of the big wins from small, frequent releases is the shot of adrenaline that it gives the developers. We rarely talk about this advantage, instead speaking of the value of the releases in terms of customer feedback and quality. But the buzz that we developers feel in producing a whole something, even if it's a small whole, probably contributes more than we realize to motivation and enjoyment. That's good for the developers, and for the customer, too.

I get that feeling when I write a lesson for teaching a class, too. The face-to-face delivery creates its own buzz.

This makes me wonder how students feel about frequent release dates, or small, frequent homework assignments. I often use this approach in my courses, but again more for "customer-side" and quality reasons. Maybe students feel good producing something, making tangible progress, every week or so? Or does the competing stress of several other courses, work, and life create an overload? Even if students prefer this style, it does create a new force to be addressed: small frequent failures must be horribly disheartening. I need to be sure that students feel challenge and success.

Sheepishly, I must admit that I've never asked my students how they feel about this. I will next week. If you want to share your thoughts, please do.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

April 19, 2007 4:05 PM

Walking Out The Door

Today I am reminded to put a variant of this pattern into practice:

The old-fashioned idea (my door is always open; when you want to talk, c'mon in) was supposed to give people down the line access to you and your ears. The idea was that folks from layers below you would come and clue you in on what was really happening.

I don't think that ever worked for most of us. Most folks didn't have the courage to come in, so we only learned what was on the minds of the plucky few. We were in our environment, not theirs. We couldn't verify what we were hearing by looking, touching, and listening in the first person. And we got fat from all that sitting.

I ran into this quote in Jason Yip's post Instead of opening the door, walk through it. Jason is seconding an important idea: that an open door policy isn't enough, because it leaves the burden for engaging in communication on others -- and there are reasons that these other folks may not engage, or want to.

This idea applies in the working-world relationship between supervisors and their employees, but it also applies to the relationship between a service provider and its customers. This includes software developers and their customers. If we as software developers sit in a lab, even with our door open, our customer may never come in to tell us what they need. They may be afraid to bother us; they may not know what they need. Agile approaches seek to reduce the communication gap between developers and customers, sometimes to the point of putting them together in a big room. And these approaches encourage developers to engage the customer in frequent communication in a variety of ways, from working together on requirements and acceptance tests to releasing working software for customer use as often as possible.

As someone who is sitting in a classroom with another professor and a group of grad students just now, I can tell you that this idea applies to teachers and students. Two years ago tomorrow, I wrote about my open office hours -- they usually leave me lonely like the Maytag Repairman. Learning works best when the instructor engages the student -- in the classroom and in the hallway, in the student union, on the sidewalk, and at the ballgame. Often, students yearn to be engaged, and learning is waiting to happen. It may not happen today, in small talk about the game, but at some later time. But that later time may well depend on the relationship built up by lots of small talk before. And sometimes the learning happens right there on the sidewalk, when the students feel able to ask their data structures question out among the trees!

But above, I said that today reminded me of a variant of this pattern... Beginning Monday and culminating today, I was fortunate to have a member of my department engage me in conversation, to question a decision I had made. Hurray! The open door (and open e-mail box) worked. We have had a very good discussion by e-mail today, reaching a resolution. But I cannot leave our resolution sitting in my mail archive. I have to get up off my seat, walk through the door, and ensure that the discussion has left my colleague satisfied and our relationship in good standing. I'm almost certain it has, as I have a long history with this person as well as a lot of history doing e-mail.

But I have two reasons to walk through the door and engage now. First, my experience with e-mail tells me that sometimes I am wrong, and it is almost always worth confirming conclusions face-to-face. If I were "just" faculty, I might be willing to wait until my next encounter with this colleague to do the face-to-face. My second reason is that I am department head these days. This places a burden on communication, due to the real and perceived differences in power that permeate my relationships with colleagues. The power differential means that I have to take extra care to ensure that interactions, whether face to face or by e-mail, are building our relationship and not eroding it. Being relatively new to this management business, I still find it odd that I have to be careful in this way. These folks are my colleagues, my friends! But as I came to realize pretty quickly, moving into the Big Office Downstairs changes things, whatever we may hope. The best way to inoculate ourselves against the bad effects of the new distance? Opening the door and walking through it.

Oh, and this applies to the relationship between teachers and students, too. I understand that as an advisor to my grad students, having been a grad student whose advisor encouraged healthy and direct communication. But I see it in my relationship with undergraduates, too, even in the classroom. A little care tending one-on-one and one-on-many relationships goes a long way.

(And looking back at that old post about the Friends connection, I sometimes wonder if any of my colleagues has a good Boss Man Wallingford impression yet. More likely, one of my students does!)


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

April 17, 2007 7:46 AM

Less May Be More

Over at Lambda the Ultimate, I ran into a minimal Lisp system named PicoLisp. Actually, I ran into a paper that describes PicoLisp as a "radical approach to application development", and from this paper I found my way to the software system itself.

PicoLisp is radical in eschewing conventional wisdom about programming languages and application environments. The Common Lisp community has accepted much of this conventional wisdom, it would seem, in reaction to criticism of some of Lisp's original tenets: the need for a compiler to achieve acceptable speed, static typing within an abundant set of specific types, and the rejection of pervasive, shallow dynamic binding. PicoLisp rejects these ideas, and takes Lisp's most hallowed feature, the program as s-expression, to its extreme: a tree of executable nodes, each of which...

... is typically written in optimized C or assembly, so the task of the interpreter is simply to pass control from one node to the other. Because many of those built-in Lisp functions are very powerful and do a lot of processing, most of the time is spent in the nodes. The tree itself functions as a kind of glue.

In this way an "interpreter" that walks the tree can produce rather efficient behavior, at least relative to what many people think an interpreter can do.
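
A toy sketch may make the architecture clearer. This is my illustration in Python, not PicoLisp's actual machinery: an s-expression is a tree whose interior nodes name powerful built-ins, and the "interpreter" merely walks the tree, dispatching to the built-ins that do the real work:

    import math

    BUILTINS = {
        "+":   lambda *args: sum(args),
        "*":   lambda *args: math.prod(args),
        "max": max,
    }

    def eval_node(node):
        # A tuple is an application of a built-in; anything else is a value.
        if isinstance(node, tuple):
            op, *args = node
            return BUILTINS[op](*(eval_node(a) for a in args))
        return node

    print(eval_node(("+", 1, ("*", 2, 3))))    # prints 7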

As a developer, the thing I notice most in writing PicoLisp code is its paucity of built-in data types. It supports but three: numbers, symbols, and lists. No floats, no strings, no vectors. This simplifies the interpreter in several ways, as it now needs to make run-time checks on fewer different types. The price is paid by the programmer in two ways. First, at programming time, the developer must create the higher-order data types as ADTs -- but just once. This is a price that any user of a small language must pay and was one of the main trade-offs that Guy Steele discussed in his well-known OOPSLA talk Growing a Language. Second, at run time, the program will use more space and time than if those types were primitive in the compiler. But space is nearly free these days, and the run-time disadvantage turns out to be smaller than one might imagine. The authors of PicoLisp point out that the freedom their system gives them saves them a much more expensive sort of time -- developer time in an iterative process that they liken to XP.
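
To see what that one-time price looks like, here is a small sketch, again mine and with hypothetical names, under the same constraint: no string type, only numbers and lists. A "string" becomes a list of character codes, and the operations we need are defined once on top of that representation:

    def make_str(s):                  # build from a native literal, for testing
        return [ord(ch) for ch in s]

    def str_concat(a, b):
        return a + b

    def str_upcase(codes):
        # Shift lowercase ASCII codes (97-122) down to uppercase.
        return [c - 32 if 97 <= c <= 122 else c for c in codes]

    def show(codes):                  # render back to native text for printing
        return "".join(chr(c) for c in codes)

    print(show(str_upcase(str_concat(make_str("pico"), make_str("lisp")))))
    # prints PICOLISP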

Can this approach work at all in the modern world? PicoLisp's creators say yes. They have implemented in PicoLisp a full application development environment that provides a database engine, a GUI, and the generation of Java applets. Do they have the sort of competitive advantage that Paul Graham writes about having had at the dawn of ViaWeb? Maybe so.

As a fan of languages and language processors, I always enjoy reading about how someone can be productive working in an environment that stands against conventional wisdom. Less may be more, but not just because it is less (say, fewer types and no compiler). It is usually more because it is also different (s-expressions as powerful nodes glued together with simple control).


Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development

April 14, 2007 3:38 PM

If Only We Had More Time...

Unlike some phrases I find myself saying in class, I don't mind saying this one.

Used for the wrong reasons, it would signal a problem. "If we had more time, I would teach you this important concept, but..." ... I've left it out because I didn't plan the course properly. ... I've left it out because preparing to teach it well would take too much time. ... I'm running behind; I wasted too much time speaking off-topic. There are lots of ways that not covering something important is wrong.

But there is a very good reason why it's not possible to cover every topic that comes up. There is so much more! There are more interesting ideas in this world -- in programming languages, in object-oriented programming, in algorithms -- than we can cover in a 3-credit, 15-week course. The ideas of computing are bigger than any one course, and some of the cool things we do in class are only the beginning. This is a good thing. Our discipline is deep, and it rewards the curious with unexpected treasures.

More practically, "If only we had more time..." is a cue to students who do have more time -- graduate students looking for research projects, undergrad honors students looking for thesis topics, and undergrads who might be thinking of grad school down the line.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 12, 2007 6:54 PM

Agile Moments: Accountability and Continuous Feedback in Higher Ed

It's all talk until the tests run.
-- Ward Cunningham

A couple of years ago, I wrote about what I call my Agile Moments, and soon after wrote about another. If I were teaching an agile software development course, or some other course with an agile development bent, I'd probably have more such posts. (I teach a compiler development course this fall...) But I had an Agile Moment yesterday afternoon in an un-software-like place: a talk on program assessment at universities.

Student outcomes assessment is one of those trendy educational movements that come and go like the seasons or the weather. Most faculty in the trenches view it with unrelenting cynicism, because they've been there before. Some legislative body or accrediting agency or university administrator decides that assessment is essential, and they deem it Our Highest Priority. The result is an unfunded mandate on departments and faculty to create an assessment plan and implement the plan. The content and structure of the plans are defined from above, and these are almost always onerous -- they look good from above, but they look like unhelpful busy work to professors and students who just want to do computer science, or history, or accounting.

But as a software developer, and especially as someone with an agile bent, I see the idea of outcomes assessment as a no-brainer. It's all about continuous feedback and accountability.

Let's start with accountability. We don't set out to write software without a specification or a set of stories that tell us what our goal is. Why do we think we should start to teach a course -- or a four-year computer science degree program! -- without having a spec in hand? Without some public document that details what we are trying to achieve, we probably won't know if we are delivering value. And even if we know, we will probably have a hard time convincing anyone else.

The trouble is, most university educators think that they know what an education looks like, and they expect the rest of the world to trust them. For most of the history of universities, that's how things worked. Within the university, faculty shared a common vision of what to do when, and outside it, the students and the funders trusted them. The relationship worked out fine on both ends, and everyone was mostly happy.

Someone at the talk commented that the call for student and program outcomes assessment "breaks the social contract" between a university and its "users". I disagree; I think the call merely recognizes that the social contract is already broken. For whatever reason, students and parents and state governments now want the university to demonstrate that it is accountable.

While this may be unsettling, it really shouldn't surprise us. In the software world, most anyone would find it strange if the developers were not held accountable to deliver a particular product. (That is even more true in the rest of the economy, and this difference is the source of much consternation among folks outside the software world -- or the university.) One of the things I love about what Kent Beck has been teaching for the last few years is the notion of accountability, and the sort of honest communication that aims at working fairly with the people who hire us to build software. I don't expect less of my university.

In the agile software world, we often think about to whom we are accountable, and even focus on the best word to use, to send the right message: client, customer, user, stakeholder, .... Who is my client when I teach a CS course? My customer? My stakeholders? These are complex questions, with many answers depending on the type of school and the level at which we ask them. Certainly students, parents, the companies who hire our graduates, the local community, the state government, and the citizens of the state are all partial stakeholders and thus potential answers as client or customer.

Outcomes assessment forces an academic department to specify what it intends to deliver, in a way that communicates the end product more effectively to others. This offers better accountability. It also opens the door to feedback and improvement.

When most people talk about outcomes assessment, they are thinking of the feedback component. As an agile software developer, I know that continuous feedback is essential to keeping me on track and to helping me improve as a developer. Yet we teach courses at universities and offer degrees to graduates while collecting little or no data as we go along. This is the data that we might use to improve our course or our degree programs.

The speaker yesterday quoted someone as saying that universities "systematically deprive themselves" of input from their customers. We sometimes collect data, but usually at the end of the semester, when we ask students to evaluate the course and the instructor using a form that often doesn't tell us what we need to know. Besides, the end of the semester is too late to improve the course while teaching the students giving the feedback!

From whom should I as instructor collect data? How do I use that data to improve a course? How do I use that data to improve my teaching more generally? To whom must I provide an accounting of my performance?

We should do assessment because we want to know something -- because we want to learn how to do our jobs better. External mandates to do outcomes assessment demotivate, not motivate. Does this sound anything like the world of software development?

Ultimately, outcomes assessment comes down to assessing student learning. We need to know whether students are learning what we want them to learn. This is one of those issues that goes back to the old social contract and common understanding of the university's goal. Many faculty define what they want students to know simply as "what our program expects of them" and whether they have learned it as "have they passed our courses?" But such circular definitions offer no room for accountability and no systematic way for departments to get better at what they do.

The part of assessment everyone seems to understand is grading, the assessment of students. Grades are offered by many professors as the primary indicator that we are meeting our curricular goals: students who pass my course have learned the requisite content. Yet even in this area most of us do an insufficient job. What does an A in a course mean? Or an 87%? When a student moves on to the next course in the program with a 72% (a C in most of my courses) in the prerequisite course, does that mean the student knows 72% of the material 100% of the way, 100% of the material 72% of the way, some mixture of the two, or something altogether different? And do we want such a student writing the software on which we will depend tomorrow?

Grades are of little use to students except perhaps as carrots and sticks. What students really need is feedback that helps them improve. They need feedback that places the content and process they are learning into the context of doing something. More and more I am convinced that we need to think about how to use the idea of course competencies that West and Rostal implemented in their apprenticeship-based CS curriculum as a way to define for students and instructors alike what success in a course or curriculum means.

My mind made what it thinks is one last connection to agile software development. One author suggests that we think of "assessment as narrative", as a way of telling our story. Collecting the right data at the right times can help us to improve. But it can also help us tell our story better. I think a big part of agile development is telling our story: to other developers on the team, to new people we hire, to our clients and customers and stakeholders, and to our potential future clients and customers. The continuous feedback and integration that we do -- both on our software and on our teams -- is an essential cog in defining and telling that story. But maybe my mind was simply in overdrive when it made this connection.

It was at the end of this talk that I read the quote that led me to think of Kurt Vonnegut, coincident with his passing yesterday, and that led me to write this entry. So it goes.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 12, 2007 8:16 AM

Kurt Vonnegut Has Come Unstuck in Time

Be careful what you pretend to be
because you are what you pretend to be.

Kurt Vonnegut

Sometimes, the universe speaks to us and catches us unaware.

Yesterday, I attended a workshop, about which I will have more to say later today. Toward the end, I saw a quote that struck me as an expression of this blog's purpose, and almost an unknowing source for the name of this blog:

Learning is about ... connecting teaching and knowing to action.

Connecting knowing to doing. That's what this blog is all about.

But long time readers know that "Knowing and Doing" almost wasn't the name of my blog. I considered several alternatives. Back in November 2004, I wrote about some of the alternatives. Most of the serious candidates came from Kurt Vonnegut, my favorite author. Indeed, that post wasn't primarily about the name of my blog but about Vonnegut himself, who was celebrating his 82nd birthday.

Here we are, trapped in the amber of the moment.
There is no why.

And then I wake up this morning to find the world atwitter with news of Vonnegut's passing yesterday. I'm not sure that anyone noticed, but Vonnegut died on a notable unbirthday, five months from the day of his birth. I think that Vonnegut would have liked that, as a great cosmic coincidence and as a connection to Lewis Carroll, a writer whose sense of unreality often matched Vonnegut's own. More than most, Kurt was in tune with just how much of what happens in this world is coincidence and happenstance. He wrote in part to encourage us not to put too much stock in our control over a very complex universe.

Busy, busy, busy.

Many people, critics included, considered Vonnegut a pessimist, an unhappy man writing dark humor as a personal therapy. But Vonnegut was not a pessimist. He was at his core one of the world's great optimists, an idealist who believed deeply in the irrepressible goodness of man. He once wrote that "Robin Hood" and the New Testament were the most revolutionary books of all time because they showed us a world in which people loved one another and looked out for the less fortunate. He wrote to remind us that people are lonely and that we have it in our own power to solve our own loneliness and the loneliness of our neighbors -- by loving one another, and building communities in which we all have the support we need to live.

Live by the foma that make you
brave and kind and healthy and happy.

I had the good fortune to see Kurt Vonnegut speak at the Wharton Center in East Lansing when I was a graduate student at Michigan State. I had the greater good fortune to see him speak when he visited UNI in the late 1990s. There I saw his public talk, but I also sat in on a talk he gave to a foreign language class, on writing and translation. I even sat in on his intimate meeting with the school's English Club, where he sat patiently in a small crowded room and told stories, answered questions, and generally fascinated awestruck fans, whether college students or old fogies like me. I am forever in the debt of the former student who let me know about those side events and made sure that I could be there with the students.

Sometimes the pool-pah
exceeds the power of humans to comment.

On aging, Vonnegut once said, "When Hemingway killed himself he put a period at the end of his life; old age is more like a semicolon." But I often think of Vonnegut staring down death and God himself in the form of old Bokonon, the shadow protagonist of his classic Cat's Cradle:

If I were a younger man, I would write a history of human stupidity; and I would climb to the top of Mount McCabe and lie down on my back with my history for a pillow; and I would take from the ground some of the blue-white poison that makes statues of men; and I would make a statue of myself, lying on my back, grinning horribly, and thumbing my nose at You Know Who.

The blue-white poison was, of course, Ice Nine. These days, that is the name of my rotisserie baseball team. I've used Vonnegut's words as names many times. Back at Ball State, my College Bowl team was named Slaughterhouse Five. (With our alternate, we were five.)

Kurt Vonnegut was without question my favorite writer. I spent teenage years reading Slaughterhouse Five and Cat's Cradle, Welcome to the Monkey House and Slapstick, The Sirens of Titan and Breakfast of Champions and Player Piano, the wonderfully touching God Bless You, Mr. Rosewater and the haunting Mother Night. Later I came to love Jailbird and Galapagos, Deadeye Dick and Hocus Pocus and especially Bluebeard. I reveled in his autobiographical collages, too, Wampeters, Foma, and Granfalloons, Palm Sunday, Fates Worse Than Death, and Timequake. His works affected me as much or more than those of any of the classic writers feted by university professors and critics.

The world is a lesser place today. But I am happy for the words he left us.

Tiger gotta hunt.
Bird gotta fly.
Man gotta sit and wonder why, why, why.

Tiger gotta sleep.
Bird gotta land.
Man gotta tell himself he understand.

If you've never read any Vonnegut, try it sometime. Start with Slaughterhouse Five or Cat's Cradle, both novels, or Welcome to the Monkey House, a collection of his short stories. Some of his short stories are simply stellar. If you like Cat's Cradle, check out my tabulation of The Books of Bokonon, which is proof that a grown man can still be smitten with a really good book.

And, yes, I still lust after naming my blog The Euphio Question.

Rest in peace, Kurt.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

April 11, 2007 7:41 AM

Negative PRs

Negative splits are usually a good thing. A negative PR is usually not.

In the last four days, I have recorded a dubious achievement. I have run my slowest time ever on three different routes, ranging from 3 miles to 9 miles. And these times weren't close to what I expect. The only thing that saved me from the more dubious four-for-four was the 1.5 miles yesterday in the middle of a 5.5-mile route on which I picked up my pace to something speakable.

My rationalization is that this downturn in speed is the result of running a few more miles again plus my first interval workout in seven months -- a 5x800m, 6.5-mile workout last Friday.

Patience, patience.

UPDATE: I almost made it four for five this morning, but I came in a minute or two under my slowest time for the route in question. Of course, that slowest time had been run in 8" of slushy snow, a year ago December! This rationalization would have been even easier to make, as I ran an extra four miles with a student last night, and my legs should be a bit more tired than usual. Besides, both the run last night and the one this morning were done in 4-5" of slushy snow. I don't usually plan on snow runs for mid-April, but at least it was nice to break a fresh pack of snow one last time before spring arrives for good.


Posted by Eugene Wallingford | Permalink | Categories: Running

April 10, 2007 7:54 PM

Incendiary Humor Considered Harmful?

For a good laugh, take a look at Jeff Overbey's "considered harmful" considered harmful web page. He writes:

I'm not entirely sure why, but I searched ACM and IEEE for all papers with "Considered Harmful" in the title. The length of this list should substantiate my claim that that phrase should be banned from the literature.

And he lists them all. The diversity of areas in computing where people have played off Dijkstra's famous screed on go-to statements is pretty broad. The papers range from computer graphics (Bishop et al.) to software engineering (de Champeaux), from the use of comments (Beckman) to web services (Khare et al.) and human-centered design (Donald Norman!). Guy Steele has two entries on the list, one from his classic lambda series of papers and the other on arithmetic shifting, of all things.

A lot of the "considered harmful" papers deal with low-level programming constructs, like go-to, =, if-then-else, and the like. People doing deep and abstract work in computing and software development can still have deeply-held opinions about the lowest-level issues in programming -- and hold them so strongly that they feel obligated to make their case publicly.

There is even a paper on the list that uses the device in a circular reference: "'Cloning Considered Harmful' Considered Harmful", by Kapser and Godfrey. This idea is taken to its natural endpoint by Eric Meyer in his probably-should-be-a-classic essay "Considered Harmful" Essays Considered Harmful. While Meyer deserves credit for the accuracy of his title, I can't help but think he'd have scored more style points from the judges for the pithier "Considered Harmful" Considered Harmful.

Of course, that would invite the obvious rejoinder "'Considered Harmful' Considered Harmful" Considered Harmful, and where would that leave us?

Meyer's essay makes a reasonable point:

It is not uncommon, in the context of academic debates over computer science and Web standards topics, to see the publication of one or more "considered harmful" essays. These essays have existed in some form for more than three decades now, and it has become obvious that their time has passed. Because "considered harmful" essays are, by their nature, so incendiary, they are counter-productive both in terms of encouraging open and intelligent debate, and in gathering support for the view they promote. In other words, "considered harmful" essays cause more harm than they do good.

I think that many authors adopt the naming device as an attempt to use humor to take the sharp edge off what is intended as an incendiary argument, or at least a direct challenge to what is perceived as an orthodoxy that no one thinks to challenge any more.

Apparently, the CS education community is more prone than most to making this sort of challenge. CS educators are indeed almost religious in their zeal for particular approaches, and the conservatism of academic CS is deeply entrenched. Looking at Overbey's list, I identify at least nine "considered harmful" papers on CS education topics, especially on the teaching of intro CS courses:

  • Westfall, "'Hello, World' Considered Harmful"
  • Rosenberg and Koelling, "I/O Considered Harmful..."
  • Martin, "Toy Projects Considered Harmful"
  • Johnson, "C in the First Course Considered Harmful"
  • Schneider, "Compiler Textbook Bibliographies Considered Harmful"
  • Hitchner et al., "Programming Early Considered Harmful"
  • Buck and Stucki, "Design Early Considered Harmful"
  • Kay, "Bandwagons Considered Harmful..." (in curriculum development)
  • Hu, "Dataless Objects Considered Harmful"

I've read far too many of these... And there may well be other intro CS papers on the list that I don't recognize just from their names.

Some of the papers on the CS ed list are even in direct opposition to one another! Consider "Programming Early Considered Harmful" and "Design Early Considered Harmful". If we can't do programming early, and we can't do design early, what can we do? Certainly not structured programming; that's on the bigger list twice.

This tells you something about the differences that arise in CS education, as well as the community's sense of humor. It may also say something about our level of creativity! (Just joking... I know some of these folks and know them to be quite creative.)


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

April 06, 2007 6:20 PM

Two, to Close the Week

... to the Sixth

My dad turned 64 yesterday. That's a nice round number in the computing world, though he might not appreciate me pointing that out. It's hard for me to imagine him, or me, any different than we were when I was a little boy growing up at home. It's also hard for me to imagine that someday soon my daughter might be thinking the same about the two of us. Perhaps I need a bigger imagination.

... Months

This is the time of the academic year when folks seeking jobs at other institutions, in particular administrative promotions, begin to learn of their good fortune and to plan to depart. Several of my colleagues at the university will be moving on to new challenges after this academic year.

In a meeting this week, one such colleague said something that needed to be said, but which most people wouldn't say. It was on one of those topics that seems off limits, for political or personal reasons, and so it usually just hangs in the air like Muzak.

Upon hearing the statement, another colleague joked, "Two months. You have two months to speak the truth. Two months to be a truth teller."

It occurred to me then that this must be quite a liberating feeling -- to be able to speak truths that otherwise will go unspoken. Almost immediately on the heels of this thought, it occurred to me just how sad it is that such truths go unspoken. And that I am also unwilling to speak them. Perhaps I need greater courage, or more skill.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Personal

April 05, 2007 8:57 PM

Feats of Association

An idea is a feat of association.
-- Robert Frost

Yesterday I went to a talk by Roy Behrens, an earlier talk of his I enjoyed very much and blogged about. That time he talked about teaching as a "subversive inactivity", and this time he spoke more on the topic of his scholarly interest: creativity and design, ideas and metaphors, similarities and differences, even camouflage! Given that these are his scholarly interests, I wasn't surprised that this talk touched on some of the same concepts as his teaching talk. There are natural connections between how ideas are formed at the nexus of similarity and difference and how one can best help people to learn. I found this talk energizing and challenging in a different sort of way.

In the preceding paragraph, I first wrote that Roy "spoke directly on the topic of his scholarly interest", but there was little that was direct about this talk. Instead, Roy gave us parallel streams of written passages and images from a variety of sources. This talk felt much like an issue of his commonplace book/journal Ballast Quarterly Review, which I have blogged about before. The effect was mesmerizing, and it illustrated his point perfectly: the human mind is a connection-making machine, an almost unwilling creator of ideas that grow out of the stimuli it encounters. We all left the talk with new ideas forming.

I don't have a coherent, focused essay on this talk yet, but I do have a collection of thoughts that are in various stages of forming. I'll share what I have now, as much for my own benefit as for what value that may have to you.

Similarity and difference, the keys to metaphor, matter in the creation of software. James Coplien has written an important book that explicates the roles of commonality analysis and variability analysis in the design of software that can separate domain concerns into appropriate modules and evolve gracefully as domain requirements change. Commonality and variability; similarity and difference. As one of Roy's texts pointed out, the ability to recognize similarity and difference is common to all practical arts -- and to scientists, creators, and inventors.
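To make the parallel concrete, here is a minimal sketch in Java -- my own illustration with invented names, not an example from Coplien's book. The commonality becomes a stable interface, and each variability becomes an interchangeable implementation behind it:

    // Commonality: every report formatter honors one stable contract.
    interface ReportFormatter {
        String format(String title, String body);
    }

    // One variability: plain-text output.
    class PlainTextFormatter implements ReportFormatter {
        public String format(String title, String body) {
            return title.toUpperCase() + "\n\n" + body;
        }
    }

    // Another variability: HTML output.
    class HtmlFormatter implements ReportFormatter {
        public String format(String title, String body) {
            return "<h1>" + title + "</h1><p>" + body + "</p>";
        }
    }

    public class ReportDemo {
        // Client code depends only on the commonality, so a new format
        // can be added later without touching this method.
        static void print(ReportFormatter f) {
            System.out.println(f.format("Quarterly Report", "All is well."));
        }

        public static void main(String[] args) {
            print(new PlainTextFormatter());
            print(new HtmlFormatter());
        }
    }

The client code touches only the common contract, so when domain requirements change, a new variability can slide in without disturbing the modules already in place -- the graceful evolution that commonality and variability analysis aims for.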

The idea of metaphor in software isn't entirely metaphorical. See this paper by Noble, Biddle, and Tempero that considers how metaphor and metonymy relate to object-oriented design patterns. These creative fellows have explored the application of several ideas from the creative arts to computing, including deconstruction and postmodernism.

To close, Roy showed us the 1952 short film Blacktop: A Story of the Washing of a School Play Yard. And that's the story it told, "with beautiful slow camera strides, the washing of a blacktop with water and soap as it moves across the asphalt's painted lines". This film is an example of how to make something fabulous out of... nothing. I think the more usual term he used was "making the familiar strange". Earlier in his talk he had read the last sentence of this passage from Maslow (emphasis added):

For instance, one woman, uneducated, poor, a full-time housewife and mother, did none of these conventionally creative things and yet was a marvelous cook, mother, wife, and home-maker. With little money, her home was somehow always beautiful. She was a perfect hostess. Her meals were banquets, her taste in linens, silver, glass, crockery and furniture was impeccable. She was in all these areas original, novel, ingenious, unexpected, inventive. I learned from her and others like her that a first-rate soup is more creative than a second-rate painting, and that, generally, cooking or parenthood or making a home could be creative while poetry need not be; it could be uncreative.

Humble acts and humble materials can give birth to unimagined creativity. This is something of a theme for me in the design patterns world, where I tell people that even novices engage in creative design when they write the simplest of programs and where so-called elementary patterns are just as likely to give rise to creative programs as Factory or Decorator.

Behrens's talk touched on two other themes that run through my daily thoughts about software, design, and teaching. One dealt with tool-making, and the other with craft and limitations.

At one point during the Q-n-A after the talk, he reminisced about Let's Pretend, a radio show from his youth which told stories. The value to him as a young listener lay in forcing -- no, allowing -- him to create the world of the story in his own mind. Most of us these days are conditioned to approach an entertainment venue looking for something that has already been assembled for us, for the express purpose of entertaining us. Creativity is lost when our minds never have the opportunity to create, and when our minds' ability to create atrophies from disuse. One of Roy's goals in teaching graphic design students is to help them see that they have the tools they need to create, to entertain.

This is true for artists, but in a very important sense it is true for computer science students, too. We can create. We can build our own tools -- our own compilers, our own IDEs, our own applications, our own languages... anything we need! That is one of the great powers of learning computer science. We are, in a new and powerful way, masters of our own universe. That's one of the reasons I so enjoy teaching Programming Languages and compilers: they confront CS students directly with the notion that their tools are programs just like any other. You never have to settle for less.
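As a toy illustration of that idea -- my own sketch, not anything from the talk -- here is a complete evaluator for tiny arithmetic expressions in Java. Even a program this small has the essential shape of a language tool: it reads a "program", applies the rules of its little language, and produces an answer.

    // A toy evaluator for expressions like "2+3*4": a reminder that
    // language tools are ordinary programs we can write ourselves.
    public class TinyEval {
        private final String src;
        private int pos = 0;

        TinyEval(String src) { this.src = src; }

        // expr ::= term ('+' term)*
        int expr() {
            int value = term();
            while (pos < src.length() && src.charAt(pos) == '+') {
                pos++;
                value += term();
            }
            return value;
        }

        // term ::= number ('*' number)*
        int term() {
            int value = number();
            while (pos < src.length() && src.charAt(pos) == '*') {
                pos++;
                value *= number();
            }
            return value;
        }

        // number ::= digit+
        int number() {
            int start = pos;
            while (pos < src.length() && Character.isDigit(src.charAt(pos))) pos++;
            return Integer.parseInt(src.substring(start, pos));
        }

        public static void main(String[] args) {
            System.out.println(new TinyEval("2+3*4").expr());  // prints 14
        }
    }

From here it is a short conceptual step to adding variables, then functions, then a compiler -- each one just another program.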

Finally, my favorite passage from Roy's talk plays right into my weakness for the relationship between writing and programming, and for the indispensable role of limitation in creativity and in learning how to create. From Anthony Burgess:

Art begins with craft, and there is no art until craft has been mastered. You can't create until you're willing to subordinate the creative impulses to the constriction of a form. But the learning of craft takes a long time, and we all think we're entitled to shortcuts.... Art is rare and sacred and hard work, and there ought to be a wall of fire around it.

One of my favorite blog posts is from March 2005, when I wrote a piece called Patterns as a Source of Freedom. Only in looking back now do I realize that I quoted Burgess there, too -- but only the sentence about willing subordination! I'm glad that Roy gave the context around that sentence yesterday, because it takes the quote beyond constriction of form to the notion of art growing out of craft. It then closes with that soaring allusion. Anyone who has felt even the slightest sense of creating something knows what Burgess means. We computer scientists may not like to admit that what we do is sometimes art, and that said art is rare and sacred, but that doesn't change reality.

Good talk -- worth much more in associations and ideas than the lunch hour it cost. My university is lucky to have Roy Behrens, and other thinkers like him, on our faculty.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

April 04, 2007 5:57 PM

Science Superstars for an Unscientific Audience

Somewhere, something incredible is waiting to be known.
-- Carl Sagan

Some time ago I remember reading John Scalzi's On Carl Sagan, a nostalgic piece of hero worship for perhaps the most famous popularizer of science in the second half of the 20th century. Having written a piece or two of my own of hero worship, I empathize with Scalzi's nostalgia. He reminisces about what it was like to be an 11-year-old astronomer wanna-be, watching Sagan on TV, "talk[ing] with celebrity fluidity about what was going on in the universe. He was the people's scientist."

Scalzi isn't a scientist, but he has insight into the importance of someone like Sagan to science:

... Getting science in front of people in a way they can understand -- without speaking down to them -- is the way to get people to support science, and to understand that science is neither beyond their comprehension nor hostile to their beliefs. There need to be scientists and popularizers of good science who are of good will, who have patience and humor, and who are willing to sit with those who are skeptical or unknowing of science and show how science is already speaking their language. Sagan knew how to do this; he was uncommonly good at it.

We should be excited to talk about our work, and to seek ways to help others understand the beauty in what we do, and the value of what we do to humanity. But patience and humor are too often in short supply.

I thought of Scalzi's piece when I ran across a link to a blog entry by the recently retired Lance Fortnow on Yisroel Brumer's Newsweek My Turn column, Let's Not Crowd Me, I'm Only a Scientist. Seconding Brumer's comments, Fortnow laments that the theoretical computer scientist seems to be at a disadvantage in trying to be Sagan-like:

Much as I get excited about the P versus NP problem and its great importance to all science and society, trying to express these ideas to uninterested laypeople always seems to end up with "Should I buy an Apple or a Windows machine?"

(Ooh, ooh! Mr. Kotter, Mr. Kotter! I know the answer to that one!)

I wonder if Carl Sagan ever felt like that. Somehow I doubt it. Maybe it's an unfair envy, but astronomy and physics seem more visceral, more romantic to the general public. We in computing certainly have our romantic sub-disciplines. When I did AI research, I could always find an interested audience! People were fascinated by the prospect of AI, or disturbed by it, and both groups wanted to talk about it. But as I began to do work in more inward-looking areas, such as object-oriented programming or agile software development, I felt more like Brumer felt as a scientist:

Just a few years ago, I was a graduate student in chemical physics, working on obscure problems involving terms like quantum mechanics, supercooled liquids and statistical thermodynamics. The work I was doing was fascinating, and I could have explained the basic concepts with ease. Sure, people would sometimes ask about my work in the same way they say "How are you?" when you pass them in the hall, but no one, other than the occasional fellow scientist, would actually want to know. No one wanted to hear about a boring old scientist doing boring old science.

So I know the feeling reported by Brumer and echoed by Fortnow. My casual conversation occurs not at cocktail parties (they aren't my style) but at 8th-grade girls' basketball games, and in the hall outside dance and choir practices. Many university colleagues don't ask about what I do at all, at least once they know I'm in CS. Most assume that computers are abstract and hard and beyond them. When conversation does turn to computers, it usually turns to e-mail clients or ISPs. If I can't diagnose some Windows machine's seemingly random travails, I get quizzical looks. I can't tell if they think I am a fraud or an idiot. Isn't that what computer scientists know, what they do?

I really can't blame them. We in computing don't tell our story all that well. (I have a distinct sense of deja vu right now, as I have blogged on this topic several times before.) The non-CS public doesn't know what we in CS do because the public story of computing is mostly non-existent. Their impressions are formed by bad experiences using computers and learning how to program.

I take on some personal responsibility as well. When my students don't get something, I have to examine what I am doing to see whether the problem is with how I am teaching. In this case, maybe I just need to be more interesting! At least I should be better prepared to talk about computing with a non-technical audience.

(By the way, I do know how to fix that Windows computer.)

But I think that Brumer and Fortnow are talking about something bigger. Most people aren't all that interested in science these days. They are interested in the end-user technology -- just ask them to show you the cool features on their new cell phones -- but not so much in the science that underlies the technology. Were folks in prior times more curious about science? Has our "audience" changed?

Again, we should think about where else responsibility for such change may lie. Certainly our science has changed over time. It is often more abstract than it was in the past, farther removed from the user's experience. When you drop too many layers of abstraction between the science and the human experience, the natural response of the non-scientist is to view the science as magic, impenetrable by the ordinary person. Or maybe it's just that the tools folks use are so commonplace that they pay them no mind. Do we old geezers think much about the technology that underlies pencils and the making of paper?

The other side of this issue is that Brumer found, after leaving his scientific post for a public policy position, that he is now something of a star among his friends and acquaintances. They want to know what he thinks about policy questions, about the future. Ironic, huh? Scientists and technologists create the future, but people want to talk to wonks about it. They must figure that a non-scientist has a better chance of communicating clearly with them. Either they don't fear that something will be lost in translation via the wonk, or they decide that the risk is worth taking, whatever the cost.

This is the bigger issue: understanding and appreciation of science by the non-scientist, the curiosity that the ordinary person brings to the conversation. When I taught my university's capstone course, aimed at all students as their culminating liberal-arts core "experience", I was dismayed by the students' lack of interest in the technological issues that face them and their nation. But it seems sometimes that even CS students don't want to go deeper than the surface of their tools. This is consistent with a general lack of interest in how the world works, and in the role that science and engineering play in defining today's world. Many, many people are talking and writing about this, because a scientifically "illiterate" person cannot make informed decisions in the public arena. And we all live with the results.

I guess we need our Carl Sagan. I don't think it's in me, at least not by default. People like Bernard Chazelle and Jeannette Wing are making an effort to step out and engage the broader community on its own terms. I wish them luck in reaching Sagan's level and will continue to do my part on a local scale.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

April 02, 2007 6:09 PM

The First Monday in April

... is the traditional start of my training for the Sturgis Falls Half Marathon, which is the last Sunday in June. The tradition is only four years old, and it also marks the longer-term start of my marathon training for fall.

Last year, I was in such good shape coming out of winter that training for the June half didn't seem all that big a deal. This year, I'm coming off a January and February of little or no mileage, and a March in which I've slowly been building my mileage back up. So preparing for the half matters.

I had worked my way up to a 30-mile week before last week, but with a Sunday long run of only 7 or 7.5 miles. Last week, I ran a regular week of mileage and then took a big step forward: a 20K (12.4-mile) long run. A student of mine is training for the St. Louis Marathon but had done most of his work on city streets, so I offered to take him out on our trail system for a more scenic long run and decided to go the full distance with him. The result was good for me. I needed a nap on Sunday afternoon, but my legs were only a little stiff this morning. I ran an easy three miles to loosen them up, and all is well. I think I am ready to train now, though I expect to be slower than last year for at least a few more weeks. With the coming of spring, I am ready for miles outdoors, however slow they may be.

Looking farther ahead, I think that this year I will run the Marine Corps Marathon. This will be my latest marathon (October 28, 2007) and will follow on the heels of OOPSLA in Montreal. That will be a challenge to my taper...


Posted by Eugene Wallingford | Permalink | Categories: Running