September 24, 2014 3:54 PM

Is It Really That Hard?

This morning, I tweeted:

Pretty sure I could build a git-based curriculum management system in two weeks that would be miles better than anything on the market now.

Yes, I know that it is easy to have ideas, and that carrying an idea through to a product is the real challenge. At least I don't just need a programmer...

My tweet was the result of temporary madness provoked by yet another round of listening to non-CS colleagues talk about one of the pieces of software we use on campus. It is a commercial product purchased for one task only, to help us manage the cycle of updating the university catalog. Alas, in its current state, it can handle only one catalog at a time. This is, of course, inconvenient. There are always at least two catalogs: the one in effect at this moment, and the one in progress of being updated. That doesn't even take into account all of the old catalogs still in effect for the students who entered the university when they were The Catalog.

Yes, we need version control. Either the current software does not provide it, or that feature is turned off.

The madness arises because of the deep internal conflict that occurs within me when I'm drawn into such conversations. Everyone assumes that programs "can't do this", or that the programmers who wrote our product were mean or incompetent. I could try to convince them otherwise by explaining the idea of version control. But their experience with commercial software is so uniformly bad that they have a hard time imagining I'm telling the truth. Either I misunderstand the problem, or I am telling them a white lie.

The alternative is to shake my head, agree with them implicitly, and keep thinking about how to teach my intro students how to design simple programs.

I'm convinced that a suitable web front-end to a git back end could do 98% of what we need, which is about 53% more than either of our last two commercial solutions has done for us.
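
To make that claim concrete, here is a minimal sketch of the idea in Python, assuming a hypothetical repository in which each catalog year lives on its own git branch. Every name below -- the repository path, the branches, the file -- is invented for illustration, not a design:

    import subprocess

    def git(*args, repo="catalog-repo"):
        # Run a git command inside the (hypothetical) catalog repository.
        result = subprocess.run(["git", "-C", repo, *args],
                                check=True, capture_output=True, text=True)
        return result.stdout

    # The catalog being updated branches off the one in effect now.
    git("checkout", "-b", "catalog-2015-2016", "catalog-2014-2015")

    # An editor's change to a program description is just a commit...
    git("add", "programs/computer-science.md")
    git("commit", "-m", "Update CS major requirements")

    # ...and every old catalog survives, intact, on its own branch.
    print(git("branch", "--list", "catalog-*"))

The web front end would be little more than forms and a renderer sitting on top of commands like these.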

Maybe it's time for me to take a leave of absence, put together a small team of programmers, and do this. Yes, I would need a team. I know my limitations, and besides, working with a few friends would be a lot more fun. The current tools in this space leave a lot of room for improvement. Built well and marketed well, this product would make enough money from satisfaction-starved universities to reward everyone on the team well enough for all to retire comfortably.

Maybe not. But the idea is free for the taking. All I ask is that if you build it, give me a shout-out on your website. Oh, and cut my university a good deal when we buy your software to replace whatever product we are grumbling about when you reach market.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

September 15, 2014 4:22 PM

It's All Just Keystrokes

Laurie Penny describes one effect of so many strands of modern life converging into the use of a single device:

That girl typing alone at the internet café might be finishing off her novel. Or she might be breaking up with her boyfriend. Or breaking into a bank. Unless you can see her screen, you can't know for sure. It's all just keystrokes.

Some of it is writing in ways we have always written; some of it is writing in ways only recently imagined. Some of it is writing for a computer. A lot of it is writing.

(Excerpt from Why I Write.)


Posted by Eugene Wallingford | Permalink | Categories: General

September 12, 2014 1:49 PM

The Suffocating Gerbils Problem

I had never heard of the "suffocating gerbils" problem until I ran across this comment in a Lambda the Ultimate thread on mixing declarative and imperative approaches to GUI design. Peter Van Roy explained the problem this way:

A space rocket, like the Saturn V, is a complex piece of engineering with many layered subsystems, each of which is often pushed to the limits. Each subsystem depends on some others. Suppose that subsystem A depends on subsystem B. If A uses B in a way that was not intended by B's designers, even though formally B's specification is being followed by A, then we have a suffocating gerbils problem. The mental image is that B is implemented by a bunch of gerbils running to exhaustion in their hoops. A is pushing them to do too much.

I first came to appreciate the interrelated and overlapping functionality of engineered subsystems in graduate school, when I helped a fellow student build a software model of the fuel and motive systems of an F-18 fighter plane. It was quite a challenge for our modeling language, because the functions and behaviors of the systems were intertwined and did not follow obviously from the specification of components and connections. This challenge motivated the project. McDonnell Douglas was trying to understand the systems in a new way, in order to better monitor performance and diagnose failures. (I'm not sure how the project turned out...)

We suffocate gerbils at the university sometimes, too. Some functions depend on tenure-track faculty teaching occasional overloads, or the hiring of temporary faculty as adjuncts. When money is good, all is well. As budgets tighten, we find ourselves putting demands on these subsystems to meet other essential functions, such as advising, recruiting, and external engagement. It's hard to anticipate looming problems before they arrive in full failure; everything is being done according to specification.

Now there's a mental image: faculty gerbils running to exhaustion.
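
Programmers suffocate gerbils, too. Here is a small example of my own, not Van Roy's: a Python list honors its specification perfectly when used as a queue, but a caller that leans on pop(0) at scale pushes it far beyond what its designers intended.

    import time
    from collections import deque

    N = 200_000

    # Subsystem B: a list. Its spec says pop(0) removes and returns the
    # first element, and it does so correctly every single time.
    queue = list(range(N))
    start = time.perf_counter()
    while queue:
        queue.pop(0)        # each call shifts every remaining element
    print(f"list as queue:  {time.perf_counter() - start:.2f}s")

    # Subsystem A stayed within B's spec all along. The loop is simply
    # quadratic, and the gerbils run to exhaustion. A deque is a B
    # designed for this use:
    queue = deque(range(N))
    start = time.perf_counter()
    while queue:
        queue.popleft()
    print(f"deque as queue: {time.perf_counter() - start:.2f}s")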

If you are looking for something new to read, check out some of Van Roy's work. His Concepts, Techniques, and Models of Computer Programming offers all kinds of cool ideas about programming language design and use. I happily second the sentiment of this tweet:

Note to self: read all Peter Van Roy's LtU comments in chronological order and build the things that don't exist yet: http://lambda-the-ultimate.org/user/288/track?from=120&sort=asc&order=last%20post

There are probably a few PhD dissertations lurking in those comments.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

August 19, 2014 1:49 PM

The Universal Justification

Because we need it to tell better stories.

Ethan Zuckerman says that this is the reason people are addicted to big data, quoting Maciej Ceglowski's wonderful The Internet with a Human Face. But if you look deep enough, this is the reason that most of us do so many of the things we do. We want to tell better stories.

As I teach our intro course this fall, I am going to ask myself occasionally, "How does what we are learning today help my students tell a better story?" I'm curious to see how that changes the way I think about the things we do in class.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 28, 2014 1:00 PM

Sometimes, You Have To Speak Up For Yourself

Wisdom from the TV:

"Whatever happened to humility, isn't that a virtue or something?"

"One of the highest. People in power are always saying so."

It is worth noting that one of the antonyms of "humble" is "privileged".

~~~~

This passage apparently occurs in an episode of Orange Is The New Black. I've never seen the show, but the exchange is quoted in this discussion of the show.

I just realized how odd it is to refer to Orange Is The New Black as a TV show. It is a Netflix original series, which shows up on your TV only if you route your Internet viewing through that old box. Alas, 30- and 60-minute serialized shows have always been "TV" to me. I'm caught in the slipstream as our dominant entertainment media change forms.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

July 10, 2014 3:08 PM

The Passing of the Postage Stamp

In this New York Times article on James Baldwin's ninetieth birthday, scholar Henry Louis Gates laments:

On one hand, he's on a U.S. postage stamp; on the other hand, he's not in the Common Core.

I'm not qualified to comment on Baldwin and his place in the Common Core. In the last few months, I read several articles about and including Baldwin, and from those I have come to appreciate better his role in twentieth-century literature. But I also empathize with anyone trying to create a list of things that every American should learn in school.

What struck me in Gates's comment was the reference to the postage stamp. I'm old enough to have grown up in a world where the postage stamp held a position of singular importance in our culture. It enabled communication at a distance, whether geographical or personal. Stamps were a staple of daily life.

In such a world, appearing on a stamp was an honor. It indicated a widespread acknowledgment of a person's (or organization's, or event's) cultural impact. In this sense, the Postal Service's decision to include James Baldwin on a stamp was a sign of his importance to our culture, and a way to honor his contributions to our literature.

Alas, this would have been a much more significant and visible honor in the 1980s or even the 1990s. In the span of the last decade or so, the postage stamp has gone from relevant and essential to archaic.

When I was a boy, I collected stamps. It was a fun hobby. I still have my collection, even if it's many years out of date now. Back then, stamp collecting was a popular activity with a vibrant community of hobbyists. For all I know, that's still true. There's certainly still a vibrant market for some stamps!

But these days, whenever I use a new stamp, I feel as if I'm holding an anachronism in my hands. Computing technology played a central role in the obsolescence of the stamp, at least for personal and social communication.

Sometimes people say that we in CS need to do a better job helping potential majors see the ways in which our discipline can be used to effect change in the world. We never have to look far to find examples. If a young person wants to be able to participate in how our culture changes in the future, they can hardly do better than to know a little computer science.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Personal

July 09, 2014 12:35 PM

Why I Blog, Ten Years On

A blog can be many things.

It can be an essay, a place to work out what I think, in the act of writing.

It can be a lecture, a place to teach something, however big or small, in my own way.

It can be a memoir, a place to tell stories about my life, maybe with a connection to someone else's story.

It can be a book review or a conference review, a place to tell others about something I've read or seen that they might like, too. Or not.

It can be an open letter, a place to share news, good or bad, in a broadcast that reaches many.

It can be a call for help, a request for help from anyone who receives the message and has the time and energy to respond.

It can be a riff on someone else's post. I'm not a jazz musician, but I like to quote the melodies in other people's writing. Some blog posts are my solos.

It can be a place to make connections, to think about how things are similar and different, and maybe learn something in the process.

A blog is all of these, and more.

A blog can also be a time machine. In this mode, I am the reader. My blog reminds me who I was at another time.

This effect often begins with a practical question. When I taught agile software development this summer, I looked back to when I taught it last. What had I learned then but forgotten since? How might I do a better job this time around?

When I visit blog posts from the past, though, something else can happen. I sometimes find myself reading on. The words mesmerize me and pull me forward on the page, but back in time. It is not that the words are so good that I can't stop reading. It's that they remind me who I was back then. A different person wrote those words. A different person, yet me. It's quite a feeling.

A blog can combine any number of writing forms. I am not equally good at writing in all of these forms, or even passably good in any of them. But they are me. Dave Winer has long said that a blog is the unedited voice of a person. This blog is the unedited voice of me.

When I wrote my first blog post ten years ago today, I wasn't sure if anyone wanted to hear my voice. Over the years, I've had the good fortune to interact with many readers, so I know someone is listening. That still amazes me. I'm glad that something you read here is worth the visit.

Back in those early days, I wondered if it even mattered whether anyone else would read. The blog as essay and as time machine are valuable enough on their own to make writing worth the effort to me. But I'll be honest: it helps a lot knowing that other people are reading. Even when you don't send comments by e-mail, I know you are there. Thank you for your time.

I don't write as often as I did in the beginning. But I still have things to say, so I'll keep writing.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

June 26, 2014 11:12 AM

Debunking Christensen?

A lot of people I know have been discussing the recent New Yorker article "debunking" Clayton Christensen's theory of disruptive innovation. I'm withholding judgment, because that usually is the right thing for me to do when discussing theories about systems we don't understand well and critiques of such theories. The best way to find out the answer is to wait for more data.

That said, we have seen this before in the space of economics and business management. A few years back, the book Good to Great by James Collins became quite popular on my campus, because our new president, an economist by training, was a proponent of its view of how companies had gone from being merely steady producers to being stars in their markets. He hoped that we could use some of its prescriptions to help transform our university from a decent public comprehensive into a better, stronger institution.

But in recent years we have seen critiques of Collins's theory. The problem: some of the companies that Collins touts in the book have fallen on hard times and been unable to sustain their greatness. (As I said, more data usually settles all scores.) Good to Great's prescriptions weren't enough for companies to sustain greatness; maybe they were not sufficient, or even necessary, for achieving (short-term) market dominance.

This has long been a weakness of the business management literature. When I was an undergrad double majoring in CS and accounting, I read a lot of case studies about successful companies, and my professors tried to help us draw out truths that would help any company succeed. Neither the authors of the case studies nor the professors seemed aware that we were suffering from a bad case of survivor bias. Sure, that set of strategies worked for Coca Cola. Did other companies use the same strategies and fail? If so, why? Maybe Coca Cola just got lucky. We didn't really know.

My takeaway from reading most business books of this sort is that they tell great stories. They give us post hoc explanations of complex systems that fit the data at hand, but they don't have much in the way of predictive power. Buying into such theories wholesale as a plan for the future is rarely a good idea.

These books can still be useful to people who read them as inspirational stories and a source of ideas to try. For example, I found Collins's idea of "getting the right people on the bus" to be helpful when I was first starting as department head. I took a broad view of the book and learned some things.

And that said, I have speculated many times here about the future of universities and even mentioned Christensen's idea of disruption a couple of times [ 1 | 2 ]. Have I been acting on a bad theory?

I think the positive reaction to the New Yorker article is really a reaction to the many people who have been using the idea of disruptive innovation as a bludgeon in the university space, especially with regard to MOOCs. Christensen himself has sometimes been guilty of speaking rather confidently about particular ways to disrupt universities. After a period of groupthink in which people know without evidence that MOOCs will topple the existing university model, many of my colleagues are simply happy to have someone speak up on their side of the debate.

The current way that universities do business faces a number of big challenges as the balance of revenue streams and costs shifts. Perhaps universities as we know them now will ultimately be disrupted. This does not mean that any technology we throw at the problem will be the disruptive force that topples them. As Mark Guzdial wrote recently,

Moving education onto MOOCs just to be disruptive isn't valuable.

That's the most important point to take away from the piece in the New Yorker: disruptors ultimately have to provide value in the market. We don't know yet if MOOCs or any other current technology experiment in education can do that. We likely won't know until after it starts to happen. That's one of the important points to take away from so much of the business management literature. Good descriptive theories often don't make good prescriptive theories.

The risk people inside universities run is falling into a groupthink of their own, in which something very like the status quo is the future of higher education. My colleagues tend to speak in more measured tones than some of the revolutionaries espousing on-line courses and MOOCs, but their words carry an unmistakable message: "What we do is essential. The way we do it has stood the test of time. No one can replace us." Some of my colleagues admit ruefully that perhaps something can replace the university as it is, but that we will all be worse off as a result.

That's dangerous thinking, too. Over the years, plenty of people who have said, "No one can do what we do as well as we do" have been proven wrong.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

May 05, 2014 4:35 PM

Motivated by Teaching Undergrads

Recently, a gentleman named Seth Roberts passed away. I didn't know Roberts and was not familiar with his work. However, several people I respect commented on his life and career, so I took a look at one colleague's reminiscence. Roberts was an interesting fellow who didn't do things the usual way for a research academic. This passage stood out:

Seth's academic career was unusual. He shot through college and graduate school to a tenure-track job at a top university, then continued to do publication-quality research for several years until receiving tenure. At that point he was not a superstar but I think he was still considered a respected member of the mainstream academic community. But during the years that followed, Seth lost interest in that thread of research (you can see this by looking at the dates of most of his highly-cited papers). He told me once that his shift was motivated by teaching introductory undergraduate psychology: the students, he said, were interested in things that would affect their lives, and, compared to that, the kind of research that leads to a productive academic career did not seem so appealing.

That last sentence explains, I think, why so many computer science faculty at schools that are not research-intensive end up falling away from traditional research and publishing. When you come into contact with a lot of undergrads, you may well find yourself caring more deeply about things that will affect their lives in a more direct way. Pushing deeper down a narrow theoretical path, or developing a novel framework for file system management that most people will never use, may not seem like the best way to use your time.

My interests have certainly shifted over the years. I found myself interested in software development, in particular tools and practices that students can use to make software more reliably and teaching practices that would help students learn more effectively. Fortunately, I've always loved programming qua programming, and this has allowed me to teach different programming styles with an eye on how learning them will help my students become better programmers. Heck, I was even able to stick with it long enough that functional programming became popular in industry! I've also been lucky that my interest in languages and compilers has been of interest to students and employers over the last few years.

In any event, I can certainly understand how Roberts diverged from the ordained path and turned his interest to other things. One challenge for leaving the ordained path is to retain the mindset of a scientist, seeking out opportunities to evaluate ideas and to disseminate the ones that appear to hold up. You don't need to publish in the best journals to disseminate good ideas widely. That may not even be the best route.

Another challenge is to find a community of like-minded people in which to work. An open, inquisitive community is a place to find new ideas, a place to try ideas out before investing too much in a doomed one, and a place to find the colleagues most of us need to stay sane while exploring what interests us. The software and CS worlds have helped create the technology that makes it possible to grow such communities in new ways, and our own technology now supports some amazing communities of software and CS people. It is a good time to be an academic or developer.

I've enjoyed reading about Roberts' career and learning about what seems to have been one of academia's unique individuals. And I certainly understand how teaching introductory undergrads might motivate a different worldview for an academic. It's good to be reminded that it's okay to care about the things that will affect the lives of our students now rather than later.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 27, 2014 7:20 PM

Knowing and Doing in the Wild, Antifragile Edition

a passage from Taleb's 'Antifragile' that mentions knowing and doing

Reader Aaron Friel was reading Taleb's Antifragile and came across a passage that brought to mind this blog. Because of "modernity's connectivity, and the newfound invisibility of causal chains", Taleb says, ...

The intellectual today is vastly more powerful and dangerous than before. The "knowledge world" causes separation of knowing and doing (within the same person) and leads to the fragility of society.

He wondered if this passage was the source of the title of my blog. Knowing and Doing predates Taleb's book by nearly a decade, so it wasn't the source. But the idea expressed in this passage was certainly central to how the blog got its name. I hoped to examine the relationship between knowing and doing, and in particular the danger of separating them in the classroom or in the software studio. So, I'm happy to have someone make a connection to this passage.

Even so, I still lust after naming my blog The Euphio Question. RIP, Mr. Vonnegut.


Posted by Eugene Wallingford | Permalink | Categories: General

April 22, 2014 2:56 PM

Not Writing At All Leads To Nothing

In a recent interview, novelist and journalist Anna Quindlen was asked if she ever has writer's block. Her answer:

Some days I fear writing dreadfully, but I do it anyway. I've discovered that sometimes writing badly can eventually lead to something better. Not writing at all leads to nothing.

I deal with CS students all the time who are paralyzed by starting on a programming assignment, for fear of doing it wrong. All that gets them is never done. My job in those cases is less likely to involve teaching them something new they need for the assignment than helping them get past the fear. A teacher sometimes has to be a psychologist.

I'd like to think that, at my advanced age and experience, I am beyond such fears myself. But occasionally they are there. Sometimes, I just have to force myself to write that first simple test, watch it fail, and ask myself, "What now?" As code happens, it may be good, or it may be bad, but it's not an empty file. Refactoring helps me make it better as I go along. I can always delete it all and start over, but by then I know more than I did at the outset, and I usually am ready to plow ahead.
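
That first simple test can be almost nothing at all. A sketch, with the function name invented for the occasion:

    import unittest

    class TestWordCount(unittest.TestCase):
        # The first simple test. It fails at first, because word_count
        # does not exist yet. That is the point: it leaves me a question
        # to answer.
        def test_empty_string_has_no_words(self):
            self.assertEqual(word_count(""), 0)

    # "What now?" The simplest code that makes the test pass:
    def word_count(text):
        return len(text.split())

    if __name__ == "__main__":
        unittest.main()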


Posted by Eugene Wallingford | Permalink | Categories: General

April 09, 2014 3:26 PM

Programming Everywhere, Vox Edition

In a report on the launch of Vox Media, we learn that the line between software developers and journalists at Vox is blurred, as writers and reporters work together "to build the tools they require".

"It is thrilling as a journalist being able to envision a tool and having it become a real thing," Mr. Topolsky said. "And it is rare."

It will be less rare in the future. Programming will become a natural part of more and more people's toolboxes.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

March 12, 2014 3:55 PM

Not Content With Content

Last week, the Chronicle of Higher Ed ran an article on a new joint major at Stanford combining computer science and the humanities.

[Students] might compose music or write a short story and translate those works, through code, into something they can share on the web.

"For students it seems perfectly natural to have an interest in coding," [the program's director] said. "In one sense these fields might feel like they're far apart, but they're getting closer and closer."

The program works in both directions, by also engaging CS students in the societal issues created by ubiquitous networks and computing power.

We are doing something similar at my university. A few years ago, several departments began to collaborate on a multidisciplinary program called Interactive Digital Studies, which went live in 2012. In the IDS program, students complete a common core of courses from the Communication Studies department and then take "bundles" of coursework involving digital technology from at least two different disciplines. These areas of emphasis enable students to explore the interaction of computing with various topics in media, the humanities, and culture.

Like Stanford's new major, most of the coursework is designed to work at the intersection of disciplines, rather than pursuing disciplines independently, "in parallel".

The initial version of the computation bundle consists of an odd mix of application tools and opportunities to write programs. Now that the program is in place, we are finding that students and faculty alike desire more depth of understanding about programming and development. We are in the process of re-designing the bundle to prepare students to work in a world where so many ideas become web sites or apps, and in which data analytics plays an important role in understanding what people do.

Both our IDS program and Stanford's new major focus on something that we are seeing increasingly at universities these days: the intersections of digital technology and other disciplines, in particular the humanities. Computational tools make it possible for everyone to create more kinds of things, but only if people learn how to use new tools and think about their work in new ways.

Consider this passage by Jim O'Loughlin, a UNI English professor, from a recent position statement on the "digital turn" of the humanities:

We are increasingly unlikely to find writers who only provide content when the tools for photography, videography and digital design can all be found on our laptops or even on our phones. It is not simply that writers will need to do more. Writers will want to do more, because with a modest amount of effort they can be their own designers, photographers, publishers or even programmers.

Writers don't have to settle for producing "content" and then relying heavily on others to help bring the content to an audience. New tools enable writers to take greater control of putting their ideas before an audience. But...

... only if we [writers] are willing to think seriously not only about our ideas but about what tools we can use to bring our ideas to an audience.

More tools are within the reach of more people now than ever before. Computing makes that possible, not only for writers, but also for musicians and teachers and social scientists.

Going further, computer programming makes it possible to modify existing tools and to create new tools when the old ones are not sufficient. Writers, musicians, teachers, and social scientists may not want to program at that level, but they can participate in the process.

The critical link is preparation. This digital turn empowers only those who are prepared to think in new ways and to wield a new set of tools. Programs like our IDS major and Stanford's new joint major are among the many efforts hoping to spread the opportunities available now to a larger and more diverse set of people.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

March 11, 2014 4:52 PM

Change The Battle From Arguments To Tests

In his recent article on the future of the news business, Marc Andreessen has a great passage in his section on ways for the journalism industry to move forward:

Experimentation: You may not have all the right answers up front, but running many experiments changes the battle for the right way forward from arguments to tests. You get data, which leads to correctness and ultimately finding the right answers.

I love that clause: "running many experiments changes the battle for the right way forward from arguments to tests".

While programming, it's easy to get caught up in what we know about the code we have just written and assume that this somehow empowers us to declare sweeping truths about what to do next.

When students are first learning to program, they often fall into this trap -- despite the fact that they don't know much at all. From other courses, though, they are used to thinking for a bit, drawing some conclusions, and then expressing strongly-held opinions. Why not do it with their code, too?

No matter who we are, whenever we do this, sometimes we are right, and sometimes, we are wrong. Why leave it to chance? Run a simple little experiment. Write a snippet of code that implements our idea, and run it. See what happens.

Programs let us test our ideas, even the ideas we have about the program we are writing. Why settle for abstract assertions when we can do better? In the end, even well-reasoned assertions are so much hot air. I learned this from Ward Cunningham: It's all talk until the tests run.
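
Here is a toy version of what that looks like, with both candidate implementations invented purely for illustration. Instead of arguing over the best way to build a long string, run the experiment:

    import timeit

    def concatenate(n):
        s = ""
        for i in range(n):
            s += str(i)
        return s

    def join(n):
        return "".join(str(i) for i in range(n))

    # The data, not the debate, decides.
    for f in (concatenate, join):
        t = timeit.timeit(lambda: f(50_000), number=10)
        print(f"{f.__name__}: {t:.3f}s")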


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

February 25, 2014 3:31 PM

Abraham Lincoln on Reading the Comment Section

From Abraham Lincoln's last public address:

As a general rule, I abstain from reading the reports of attacks upon myself, wishing not to be provoked by that to which I cannot properly offer an answer.

These remarks came two days after Robert E. Lee surrendered at Appomattox Court House. Lincoln was facing abuse from the North and the South, and from within his party and without.

The great ones speak truths that outlive their times.


Posted by Eugene Wallingford | Permalink | Categories: General

February 22, 2014 2:05 PM

MOOCs: Have No Fear! -- Or Should We?

The Grumpy Economist has taught a MOOC and says in his analysis of MOOCs:

The grumpy response to moocs: When Gutenberg invented moveable type, universities reacted in horror. "They'll just read the textbook. Nobody will come to lectures anymore!" It didn't happen. Why should we worry now?

The calming effect of his rather long entry is mitigated by other predictions, such as:

However, no question about it, the deadly boring hour and a half lecture in a hall with 100 people by a mediocre professor teaching utterly standard material is just dead, RIP. And universities and classes which offer nothing more to their campus students will indeed be pressed.

In downplaying the potential effects of MOOCs, Cochrane seems mostly to be speaking about research schools and more prestigious liberal arts schools. Education is but one of the "goods" being sold by such schools; prestige and connections are often the primary benefits sought by students there.

I usually feel a little odd when I read comments on teaching from people who teach mostly graduate students and mostly at big R-1 schools. I'm not sure their experience of teaching is quite the same as the experience of most university professors. Consequently, I'm suspicious of the prescriptions and predictions they make for higher education, because our personal experiences affect our view of the world.

That said, Cochrane's blog spends a lot of time talking about the nuts and bolts of creating MOOCs, and his comments on fixed and marginal costs are on the mark. (He may be grumpy, but he is an economist!) And a few of his remarks about teaching apply just as well to undergrads at a state teaching university as they do to U. of Chicago's doctoral program in economics. One that stood out:

Most of my skill as a classroom teacher comes from the fact that I know most of the wrong answers as well as the right ones.

All discussions of MOOCs ultimately include the question of revenue. Cochrane reminds us that universities...

... are, in the end, nonprofit institutions that give away what used to be called knowledge and is now called intellectual property.

The question now, though, is how schools can afford to give away knowledge as state support for public schools declines sharply and relative cost structure makes it hard for public and private schools alike to offer education at a price reasonable for their respective target audiences. The R-1s face a future just as challenging as the rest of us; how can they afford to support researchers who spend most of their time creating knowledge, not teaching it to students?

MOOCs are a weird wrench thrown into this mix. They seem to taketh away as much as they giveth. Interesting times.


Posted by Eugene Wallingford | Permalink | Categories: General

February 05, 2014 4:08 PM

Eccentric Internet Holdout Delbert T. Quimby

Back in a July 2006 entry, I mentioned a 1995 editorial cartoon by Ed Stein, then of the Rocky Mountain News. The cartoon featured "eccentric Internet holdout Delbert T. Quimby", contentedly passing another day non-digitally, reading a book in his den and drinking a glass of wine. It's always been a favorite of mine.

The cartoon had at least one other big fan. He looked for it on the web but had no luck finding it. When he googled the quote, though, there my blog entry was. Recently, his wife uncovered a newspaper clipping of the cartoon, and he remembered the link to my blog post. In an act of unprovoked kindness, he sent me a scan of the cartoon. So, 7+ years later, here it is:

Eccentric Internet holdout Delbert T. Quimby passes yet another day non-digitally.

The web really is an amazing place. Thanks, Duncan.

In 1995, being an Internet holdout was not quite as radical as it would be today. I'm guessing that most of the holdouts in 2014 are Of A Certain Age, remembering a simpler time when information was harder to come by. To avoid the Internet and the web entirely these days is to miss out on a lot of life.

Even so, I am eccentric enough still to appreciate time off-line, a good book in my hand and a beverage at my side. Like my digital devices, I need to recharge every now and then.

(Right now, I am re-reading David Lodge's Small World. It's fun to watch academics being made good sport of.)


Posted by Eugene Wallingford | Permalink | Categories: General

February 03, 2014 4:07 PM

Remembering Generosity

For a variety of reasons, the following passage came to mind today. It is from a letter that Jonathan Schoenberg wrote as part of the "Dear Me, On My First Day of Advertising" series on The Egotist forum:

You got into this business by accident, and by the generosity of people who could have easily been less generous with their time. Please don't forget it.

It's good for me to remind myself frequently of this. I hope I can be as generous with my time to students and colleagues as so many of my professors and colleagues were with theirs. Even when it means explaining nested for-loops again.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

January 27, 2014 3:29 PM

An Example of the Difference Between Scientists and Humanists

Earlier today, I tweeted a link to The origin of consciousness in the breakdown of the bicameral mind, in which Erik Weijers discusses an unusual theory about the origin of consciousness developed by Julian Jaynes:

[U]ntil a few thousand years ago human beings did not 'view themselves'. They did not have the ability: they had no introspection and no concept of 'self' that they could reflect upon. In other words: they had no subjective consciousness. Jaynes calls their mental world the bicameral mind.

It sounds odd, I know, but I found Jaynes's hypothesis to be a fascinating extrapolation of human history. Read more of Weijers's review if you might be interested.

A number of people who saw my tweet expressed interest in the article or a similar fascination with Jaynes's idea. Two people mentioned the book in which Jaynes presented his hypothesis. I responded that I would now have to dive into the book and learn more. How could I resist the opportunity?

Two of the comments that followed illustrate nicely the differing perspectives of the scientist and the humanist. First, Chris said:

My uncle always loved that book; I should read it, since I suspect serious fundamental evidentiary problems with his thesis.

And then Liz said:

It's good! I come from a humanities angle, so I read it as a thought experiment & human narrative.

The scientist thinks almost immediately of evidence and how well supported the hypothesis might be. The humanist thinks of the hypothesis first as a human narrative, and perhaps only then as a narrow scientific claim. Both perspectives are valuable; they simply highlight different forms of the claim.

From what I've seen on Twitter, I think that Chris and Liz are like me and most of the people I know: a little bit scientist, a little bit humanist -- interested in both the story and the argument. All that differs sometimes is the point from which we launch our investigations.


Posted by Eugene Wallingford | Permalink | Categories: General

January 27, 2014 11:39 AM

The Polymath as Intellectual Polygamist

Carl Djerassi, quoted in The Last Days of the Polymath:

Nowadays people [who] are called polymaths are dabblers -- are dabblers in many different areas. I aspire to be an intellectual polygamist. And I deliberately use that metaphor to provoke with its sexual allusion and to point out the real difference to me between polygamy and promiscuity.

On this view, a dilettante is merely promiscuous, making no real commitment to any love interest. A polymath has many great loves, and loves them all deeply, if not equally.

We tend to look down on dilettantes, but they can perform a useful service. Sometimes, making a connection between two ideas at the right time and in the right place can help spur someone else to "go deep" with the idea. Even when that doesn't happen, dabbling can bring great personal joy and provide more substantial entertainment than a lot of pop culture.

Academics are among the people these days with a well-defined social opportunity to explore at least two areas deeply and seriously: their chosen discipline and teaching. This is perhaps the most compelling reason to desire a life in academia. It even offers a freedom to branch out into new areas later in one's career that is not so easily available to people who work in industry.

These days, it's hard to be a polymath even inside one's own discipline. To know all sub-areas of computer science, say, as well as the experts in those sub-areas is a daunting challenge. I think back to the effort my fellow students and I put in over the years that enabled us to take the Ph.D. qualifying exams in CS. I did quite well across the board, but even then I didn't understand operating systems or programming languages as well as experts in those areas. Many years later, despite continued reading and programming, the gap has only grown.

I share the vague sense of loss, expressed by the author of the article linked to above, of a time when one human could master multiple areas of discourse and make fundamental advances to several. We are certainly better off for collectively understanding the world so much better, but the result is a blow to a certain sort of individual mind and spirit.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

January 26, 2014 3:05 PM

One Reason We Need Computer Programs

Code bridges the gap between theory and data. From A few thoughts on code review of scientific code:

... there is a gulf of unknown size between the theory and the data. Code is what bridges that gap, and specifies how edge cases, weird features of the data, and unknown unknowns are handled or ignored.

I learned this lesson the hard way as a novice programmer. Other activities, such as writing and doing math, exhibit the same characteristic, but it wasn't until I started learning to program that the gap between theory and data really challenged me.

Since learning to program myself, I have observed hundreds of CS students encounter this gap. To their credit, they usually buckle down, work hard, and close the gap. Of course, we have to close the gap for every new problem we try to solve. The challenge doesn't go away; it simply becomes more manageable as we become better programmers.
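
A tiny, made-up illustration of the gap: the theory is a one-line formula for the mean, but the code has to decide things the theory never mentions, such as what counts as missing data and what an empty sample means.

    # Theory: mean(x) = sum(x) / len(x). One line on paper.
    # Data: real measurements arrive with holes in them.
    def mean(values, missing=None):
        observed = [v for v in values if v is not missing]
        if not observed:
            # The theory is silent here; the code must choose.
            raise ValueError("no observed values")
        return sum(observed) / len(observed)

    print(mean([3.0, None, 4.0, None, 5.0]))   # 4.0, ignoring the holes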

In the passage above, Titus Brown is talking to his fellow scientists in biology and chemistry. I imagine that they encounter the gap between theory and data in a new and visceral way when they move into computational science. Programming has that power to change how we think.

There is an element of this, too, in how techies and non-techies alike sometimes lose track of how hard it is to create a successful start up. You need an idea, you need a programmer, and you need a lot of hard work to bridge the gap between idea and executed idea.

Whether doing science or starting a company, the code teaches us a lot about our theory. The code makes our theory better.

As Ward Cunningham is fond of saying, it's all talk until the tests run.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

December 18, 2013 3:31 PM

Favorite Passages from Today's Reading

From The End of the Facebook Era:

This is why social networks [like Google+] are struggling even more than Facebook to get a foothold in the future of social networking. They are betting on last year's fashion -- they're fighting Facebook for the last available room on the Titanic when they should be looking at all of the other ships leaving the marina.

A lot of people and organizations in this world are fighting over the last available room on their sector's version of the Titanic. Universities may well be among them. Who is leaving the marina?

From We Need to Talk About TED:

Astrophysics run on the model of American Idol is a recipe for civilizational disaster.

...

TED's version [of deep technocultural shift] has too much faith in technology, and not nearly enough commitment to technology. It is placebo technoradicalism, toying with risk so as to re-affirm the comfortable.

I like TED talks as much as the next person, but I often wonder how much change they cause in the world, as opposed to serving merely as chic entertainment for the comfortable First World set.


Posted by Eugene Wallingford | Permalink | Categories: General

December 17, 2013 3:32 PM

Always Have At Least Two Alternatives

Paraphrasing Kent Beck:

Whenever I write a new piece of code, I like to have at least two alternatives in mind. That way, I know I am not doing the worst thing possible.

I heard Kent say something like this at OOPSLA in the late 1990s. This is advice I give often to students and colleagues, but I've never had a URL that I could point them to.

It's tempting for programmers to start implementing the first good idea that comes to mind. It's especially tempting for novices, who sometimes seem surprised that they have even one good idea. Where would a second one come from?

More experienced students and programmers sometimes trust their skill and experience a little too easily. That first idea seems so good, and I'm a good programmer... Famous last words. Reality eventually catches up with us and helps us become more humble.

Some students are afraid: afraid they won't get done if they waste time considering alternatives, or afraid that they will choose wrong anyway. Such students need more confidence, the kind born out of small successes.

I think the most likely explanation for why beginners don't already seek alternatives is quite simple. They have not developed the design habit. Kent's advice can be a good start.

One pithy statement is often enough of a reminder for more experienced programmers. By itself, though, it probably isn't enough for beginners. But it can be an important first step for students -- and others -- who are in the habit of doing the first thing that pops into their heads.

Do note that this advice is consistent with XP's counsel to do the simplest thing that could possibly work. "Simplest" is a superlative. Grammatically, that suggests having at least three options from which to choose!
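
A contrived but concrete instance: before committing to the first idea that comes to mind for counting word frequencies, put at least one alternative beside it.

    from collections import Counter

    # First idea: build the table by hand.
    def word_frequencies_v1(text):
        counts = {}
        for word in text.split():
            counts[word] = counts.get(word, 0) + 1
        return counts

    # Second idea: let the standard library do the counting.
    def word_frequencies_v2(text):
        return Counter(text.split())

    # With two candidates on the table, choosing the simpler one is a
    # decision rather than an accident.
    text = "the quick brown fox jumps over the lazy dog the end"
    assert word_frequencies_v1(text) == word_frequencies_v2(text)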


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

December 03, 2013 3:17 PM

The Workaday Byproducts of Striving for Higher Goals

Why set audacious goals? In his piece about the Snowfall experiment, David Sleight says yes, and not simply for the immediate end:

The benefits go beyond the plainly obvious. You need good R&D for the same reason you need a good space program. It doesn't just get you to the Moon. It gives you things like memory foam, scratch-resistant lenses, and Dustbusters. It gets you the workaday byproducts of striving for higher goals.

I showed that last sentence a little Twitter love, because it's something people often forget to consider, both when they are working in the trenches and when they are selecting projects to work on. An ambitious project may have a higher risk of failure than something more mundane, but it also has a higher chance of producing unexpected value in the form of new tools and improved process.

This is also something that university curricula don't do well. We tend to design learning experiences that fit neatly into a fifteen-week semester, with predictable gains for our students. That sort of progress is important, of course, but it misses out on opportunities for students to produce their own workaday byproducts. And that's an important experience for students to have.

It also gives a bad example of what learning should feel like, and what it should do for us. Students generally learn what we teach them, or what we make easiest for them to learn. If we always set before them tasks of known, easily-understood dimensions, then they will have to learn after leaving us that the world doesn't usually work like that.

This is one of the reasons I am such a fan of project-based computer science education, as in the traditional compiler course. A compiler is an audacious enough goal for most students that they get to discover their own personal memory foam.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

November 26, 2013 1:38 PM

Saying Thanks, and Giving Back

When someone asked Benjamin Franklin why he had declined to seek a patent for his famous stove, he said:

I declined it from a principle which has ever weighed with me on such occasions, that as we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours.

This seems a fitting sentiment to recall as I look forward to a few days of break with my family for Thanksgiving. I know I have a lot to be thankful for, not the least of which are the inventions of so many others that confer great advantage on me. This week, I give thanks for these creations, and for the creators who shared them with me.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

November 21, 2013 3:06 PM

Agile Thoughts, Healthcare.gov Edition

Clay Shirky explains the cultural attitudes that underlie Healthcare.gov's problems in his recent essay on the gulf between planning and reality. The danger of this gulf exists in any organization, whether business or government, but especially in large organizations. As the number of levels grows between the most powerful decision makers and the workers in the trenches, there is an increasing risk of developing "a culture that prefers deluding the boss over delivering bad news".

But this is also a story of the danger inherent in so-called Big Design Up Front, especially for a new kind of product. Shirky oversimplifies this as the waterfall method, but the basic idea is the same:

By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work.

You may learn something, of course; you just aren't allowed to let it change what you build, or how.

Instead, waterfall insists that the participants will understand best how things should work before accumulating any real-world experience, and that planners will always know more than workers.

If the planners believe this, or they allow the workers to think they believe this, then workers will naturally avoid telling their managers what they have learned. In the best case, they don't want to waste anyone's time if sharing the information will have no effect. In the worst case, they might fear the results of sharing what they have learned. No one likes to admit that they can't get the assigned task done, however unrealistic it is.

As Shirky notes, many people believe that a difficult launch of Healthcare.gov was unavoidable, because political and practical factors prevented developers from testing parts of the project as they went along and adjusting their actions in response. Shirky hits this one out of the park:

That observation illustrates the gulf between planning and reality in political circles. It is hard for policy people to imagine that Healthcare.gov could have had a phased rollout, even while it is having one.

You can learn from feedback earlier, or you can learn from feedback later. Pretending that you can avoid problems you already know exist never works.

One of the things I like about agile approaches to software development is that they encourage us not to delude ourselves, or our clients. Or our bosses.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Software Development

November 14, 2013 2:55 PM

Toward A New Data Science Culture in Academia

Fernando Perez has a nice write-up, An Ambitious Experiment in Data Science, describing a well-funded new project in which teams at UC Berkeley, the University of Washington, and NYU will collaborate to "change the culture of universities to create a data science culture". A lot of people have been quoting Perez's entry for its colorful assessment of academic incentives and reward structures. I like this piece for the way Perez defines and outlines the problem, in terms of both data science across disciplines and academic culture in general.

For example:

Most scientists are taught to treat computation as an afterthought. Similarly, most methodologists are taught to treat applications as an afterthought.

"Methodologists" here includes computer scientists, who are often more interested in new data structures, algorithms, and protocols.

This "mirror" disconnect is a problem for a reason many people already understand well:

Computation and data skills are all of a sudden everybody's problem.

(Here are a few past entries of mine that talk about how programming and the nebulous "computational thinking" have spread far and wide: 1 | 2 | 3 | 4.)

Perez rightly points out that open-source software, while imperfect, often embodies the principles of science and scientific collaboration better than the academy. It will be interesting to see how well this data science project can inject OSS attitudes into big research universities.

He is concerned because, as I have noted before, universities are, as a whole, a conservative lot. Perez says this in a much more entertaining way:

There are few organizations more proud of their traditions and more resistant to change than universities (churches and armies might be worse, but that's about it).

I think he gives churches and armies more credit than they deserve.

The good news is that experiments of the sort being conducted in the Berkeley/UW/NYU project are springing up on a smaller scale around the world. There is some hope for big change in academic culture if a lot of different people at a lot of different institutions experiment, learn, and create small changes that can grow together as they bump into one another.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

November 09, 2013 12:25 PM

An Unusual Day

My university is hosting an on-campus day today to recruit high school students and transfer students. On a day like this, I usually visit with one or two potential majors and chat with one or two others who might be interested in a CS or programming class. All are usually men.

Today was unusual.

Eight people visited the department to learn about the major.

I spoke with three people who intend to major in other areas, such as accounting and physics, and want to take a minor in CS.

I spoke with a current English major here who is set to graduate in May but is now thinking about employability and considering picking up a second degree in CS.

I spoke with three female students who are interested in CS. These include the English major and a student who has taken several advanced math courses at a good private school nearby, really likes them, and is thinking of combining math and CS in a major here.

The third is a high school freshman who has taken all the tech courses available at her school, helps the tech teacher with the school's computers, and wants to learn more. She told me, "I just think it would be cool to write programs and make things happen."

Some recruiting days are better than others. This is one.


Posted by Eugene Wallingford | Permalink | Categories: General

October 30, 2013 11:41 AM

Discipline Can Be Structural As Well As Personal

There is a great insight in an old post by Brian Marick, Discipline and Skill, which I re-read this week. The topic sentence asserts:

Discipline can be a personal virtue, but it must also be structural.

Extreme Programming illustrates this claim. It draws its greatest power from the structural discipline it creates for developers. Marick goes on:

For example, one of the reasons to program in pairs is that two people are less likely to skip a test than one is. Removing code ownership makes it more likely someone within glaring distance will see that you didn't leave code as clean as you should have. The business's absolute insistence on getting working -- really working -- software at frequent intervals makes the pain of sloppiness strike home next month instead of next year, stiffening the resolve to do the right thing today.

XP consists of a lot of relatively simple actions, but simple actions can be hard to perform, especially consistently and especially in opposition to deeply ingrained habits. XP practices work together to create structural discipline that helps developers "do the right thing".

We see the use of social media playing a similar role these days. Consider diet. People who are trying to lose weight or exercise more have to do some pretty simple things. Unfortunately, those things are not easy to do consistently, and they are opposed by deep personal and cultural habits. In order to address this, digital tool providers like FitBit make it easy for users to sync their data to a social media account and share with others.

This is a form of social discipline, supported by tools and practices that give structure to the actions people want to take. Just like XP. Many behaviors in life work this way.

(Of course, I'm already on record as saying that XP is a self-help system. I have even fantasized about XP's relationship to self-help in the cinema.)


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

October 12, 2013 11:27 AM

StrangeLoop: This and That, Volume 3

[My notes on StrangeLoop 2013: Table of Contents]

Six good talks a day is about my limit; seven is certainly past it. Each creates so much mental activity that my brain soon loses the ability to absorb more. Then, I need a walk.

~~~~

After Jenny Finkel's talk on machine learning, someone asked if Prismatic's system had learned any features or weights that she found surprising. I thought her answer was interesting. I paraphrase: "No. As a scientist, you should understand why the system is the way that it is, or find the bug if it shouldn't be that way."

In a way, this missed the point. I'm guessing the questioner was hoping to hear about a case that required the team to dig in: an answer that was correct, but for reasons they didn't understand yet, or incorrect, with a bug that wasn't obvious. But Finkel's answer shows how matter-of-fact scientists can be about what they find. The world is as it is, and scientists try to figure out why. That's all.

~~~~

The most popular corporate swag this year was stickers to adorn one's laptop case. I don't put stickers on my gear, but I like looking at other people's stickers. My favorites were the ones that did more than simply display the company name. Among them were asynchrony:

asynchrony laptop sticker

-- which is a company name but also a fun word in its own right -- and data-driven:

O'Reilly laptop sticker

-- by O'Reilly. I also like the bound, graph-paper notebooks that O'Reilly hands out. Classy.

~~~~

In a previous miscellany I mentioned Double Multitasking Guy. Not me, not this time. I carried no phone, as usual, and this time I left my laptop back in the hotel room. Not having any networked technology in hand creates a different experience, if not a better one.

Foremost, having no laptop affects my blogging. I can't take notes as quickly, or as voluminously. One of the upsides of this is that it's harder for me to distract myself by writing complete sentences or fact-checking vocabulary and URLs. Quick, what is the key idea here? What do I need to look up? What do I need to learn next?

~~~~

With video recording now standard at tech conferences, and with StrangeLoop releasing its videos so quickly now, a full blow-by-blow report of each talk becomes somewhat less useful. Some people find summary reports helpful, though, because they don't want to watch the full talks or don't have the time to do so. Short reports let these folks keep a finger on the pulse of the field. Others are looking for some indication of whether they want to invest the time to watch.

For me, the reports serve another useful purpose. They let me do a little light analysis and share my personal impressions of what I hear and learn. Fortunately, that sort of blog entry still finds an audience.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

September 28, 2013 12:17 PM

StrangeLoop: This and That, Volume 2

[My notes on StrangeLoop 2013: Table of Contents]

I am at a really good talk and look around the room. So many people are staring at their phones, scrolling away. So many others are staring at their laptops, typing away. The guy next to me: doing both at the same time. Kudos, sir. But you may have missed the point.

~~~~

Conference talks are a great source of homework problems. Sometimes, a talk presents a good problem directly. Other times, watching the talk sets my subconscious mind in motion, and it creates something useful. My students thank you. I thank you.

~~~~

Jenny Finkel talked about the difference between two kinds of recommenders: explorers, who forage for new content, and exploiters, who want to see what's already popular. The former discovers cool new things occasionally but fails occasionally, too. The latter is satisfied most of the time but rarely surprised. As a conference-goer, I felt this distinction at play in my own head this year. When selecting the next talk to attend, I have to take a few risks if I ever hope to find something unexpected. But when I fail, a small regret tugs at me.

~~~~

We heard a lot of confident female voices on the StrangeLoop stages this year. Some of these speakers have advanced academic degrees, or at least experience in grad school.

~~~~

The best advice I received on Day 1 perhaps came not from a talk but from the building:

The 'Do not Climb on Bears' sign on a Peabody statue

"Please do not climb on bears." That sounds like a good idea most everywhere, most all the time.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

September 23, 2013 4:22 PM

StrangeLoop: This and That, Volume 1

[My notes on StrangeLoop 2013: Table of Contents]

the Peabody Opera House's Broadway series poster

I'm working on a post about the compiler talks I attended, but in the meantime here are a few stray thoughts, mostly from Day 1.

The Peabody Opera House really is a nice place to hold a conference of this size. If StrangeLoop were to get much larger, it might not fit.

I really don't like the word "architected".

The talks were scheduled pretty well. Only once in two days did I find myself really wanting to go to two talks at the same time. And only once did I hear myself thinking, "I don't want to hear any of these...".

My only real regret from Day 1 was missing Scott Vokes's talk on data compression. I enjoyed the talk I went to well enough, but I think I would have enjoyed this one more.

What a glorious time to be a programming language theory weenie. Industry practitioners are going to conferences and attending talks on dependent types, continuations, macros, immutable data structures, and functional reactive programming.

Moon Hooch? Interesting name, interesting sound.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

August 29, 2013 4:31 PM

Asimov Sees 2014, Through Clear Eyes and Foggy

Isaac Asimov, circa 1991

A couple of years ago, I wrote Psychohistory, Economics, and AI, in which I mentioned Isaac Asimov and one way that he had influenced me. I never read Asimov or any other science fiction expecting to find accurate predictions of the future. What drew me in was the romance of the stories, dreaming "what if?" for a particular set of conditions. Ultimately, I was more interested in the relationships among people under different technological conditions than I was in the technology itself. Asimov was especially good at creating conditions that generated compelling human questions.

Some of the scenarios I read in Asimov's SF turned out to be wildly wrong. The world today is already more different from the 1950s than the world of the Foundation, set thousands of years in the future. Others seem eerily on the mark. Fortunately, accuracy is not the standard by which most of us judge good science fiction.

But what of speculation about the near future? A colleague recently sent me a link to Visit to the World's Fair of 2014, an article Asimov wrote in 1964 speculating about the world fifty years hence. As I read it, I was struck by just how far off he was in some ways, and by how close he was in others. I'll let you read the story for yourself. Here are a few selected passages that jumped out at me.

General Electric at the 2014 World's Fair will be showing 3-D movies of its "Robot of the Future," neat and streamlined, its cleaning appliances built in and performing all tasks briskly. (There will be a three-hour wait in line to see the film, for some things never change.)

3-D movies are now common. Housecleaning robots are not. And while some crazed fans will stand in line for many hours to see the latest comic-book blockbuster, going to a theater to see a movie has become a much less important part of the culture. People stream movies into their homes and into their hands. My daughter teases me for caring about the time any TV show or movie starts. "It's on Hulu, Dad." If it's not on Hulu or Netflix or the open web, does it even exist?

Any number of simultaneous conversations between earth and moon can be handled by modulated laser beams, which are easy to manipulate in space. On earth, however, laser beams will have to be led through plastic pipes, to avoid material and atmospheric interference. Engineers will still be playing with that problem in 2014.

There is no one on the moon with whom to converse. Sigh. The rest of this passage sounds like fiber optics. Our world is rapidly becoming wireless. If your device can't connect to the world wireless web, does it even exist?

In many ways, the details of technology are actually harder to predict correctly than the social, political, and economic implications of technological change. Consider:

Not all the world's population will enjoy the gadgety world of the future to the full. A larger portion than today will be deprived and although they may be better off, materially, than today, they will be further behind when compared with the advanced portions of the world. They will have moved backward, relatively.

Spot on.

When my colleague sent me the link, he said, "The last couple of paragraphs are especially relevant." They mention computer programming and a couple of its effects on the world. In this regard, Asimov's predictions meet with only partial success.

The world of A.D. 2014 will have few routine jobs that cannot be done better by some machine than by any human being. Mankind will therefore have become largely a race of machine tenders. Schools will have to be oriented in this direction. ... All the high-school students will be taught the fundamentals of computer technology, will become proficient in binary arithmetic and will be trained to perfection in the use of the computer languages that will have developed out of those like the contemporary "Fortran" (from "formula translation").

The first part of this paragraph is becoming truer every day. Many of us now tend computers and other machines as they do tasks we used to do ourselves. The second part is, um, not true. Relatively few people learn to program at all, let alone master a programming language. And how many people understand this t-shirt without first receiving an impromptu lecture on the street?

Again, though, Asimov is perhaps closer on what technological change means for people than on which particular technological changes occur. In the next paragraph he says:

Even so, mankind will suffer badly from the disease of boredom, a disease spreading more widely each year and growing in intensity. This will have serious mental, emotional and sociological consequences, and I dare say that psychiatry will be far and away the most important medical specialty in 2014. The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.

This is still speculation, but it is already more true than most of us would prefer. How much truer will it be in a few years?

My daughters will live most of their lives post-2014. That worries the old fogey in me a bit. But it excites me more. I suspect that the next generation will figure the future out better than mine, or the ones before mine, can predict it.

~~~~

PHOTO. Isaac Asimov, circa 1991. Britannica Online for Kids. Web. 2013 August 29. http://kids.britannica.com/comptons/art-136777.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

August 28, 2013 3:07 PM

Risks and the Entrepreneurs Who Take Them

Someone on the SIGCSE mailing list posted a link to an article in The Atlantic that explores a correlation between entrepreneurship, teenaged delinquency, and white male privilege. The article starts with

It does not strike me as a coincidence that a career path best suited for mild high school delinquents ends up full of white men.

and concludes with

To be successful at running your own company, you need a personality type that society is a lot more forgiving of if you're white.

The sender of the link was curious what educational implications these findings have, if any, for how we treat academic integrity in the classroom. That's an interesting question, though my personal tendency to follow rules and not rock the boat has always made me more sensitive to the behavior of students who employ the aphorism "ask for forgiveness, not permission" a little too cavalierly for my taste.

My first reaction to the claims of this article was tied to how I think about the kinds of risks that entrepreneurs take.

When most people in the start-up world talk about taking risks, they are talking about the risk of failure and, to a lesser extent, the risk of being unconventional, not the risk of being caught doing something wrong. In my personal experience, the only delinquent behavior our entrepreneurial former students could be accused of is not doing their homework as regularly as they should. Time spent learning for their business is time not spent on my course. But that's not delinquent behavior; it's curiosity focused somewhere other than my classroom.

It's not surprising, though, that teens who were willing to take legal risks are more likely to be willing to take business risks, and (sadly) legal risks in their businesses. Maybe I've simply been lucky to have worked with students and other entrepreneurs of high character.

Of course, there is almost certainly a white male privilege associated with the risk of failure, too. White males are often better positioned financially and socially than women or minorities to start over when a company fails. It's also easier to be unconventional and stand out from the crowd when you don't already stand out from the crowd due to your race or gender. That probably accounts for the preponderance of highly-educated white men in start-ups better than a greater willingness to partake in "aggressive, illicit, risk-taking activities".


Posted by Eugene Wallingford | Permalink | Categories: General

July 24, 2013 11:44 AM

Headline: "Dinosaurs Object to Meteor's Presence"

Don't try to sell a meteor to a dinosaur...

Nate Silver recently announced that he is leaving the New York Times for ESPN. Margaret Sullivan offers some observations on the departure, with insight into how political writers at the Times viewed Silver and his work:

... Nate disrupted the traditional model of how to cover politics.

His entire probability-based way of looking at politics ran against the kind of political journalism that The Times specializes in: polling, the horse race, campaign coverage, analysis based on campaign-trail observation, and opinion writing. ...

His approach was to work against the narrative of politics. ...

A number of traditional and well-respected Times journalists disliked his work. The first time I wrote about him I suggested that print readers should have the same access to his writing that online readers were getting. I was surprised to quickly hear by e-mail from three high-profile Times political journalists, criticizing him and his work. ...

Maybe Silver decided to acquiesce to Hugh MacLeod's advice. Maybe he just got a better deal.

The world changes, whether we like it or not. The New York Times and its journalists probably have the reputation, the expertise, and the strong base they need to survive the ongoing changes in journalism, with or without Silver. Other journalists don't have the luxury of being so cavalier.

I don't know any more about attitudes inside the New York Times than what I see reported in the press, but Sullivan's article made me think of one of Anil Dash's ten rules of the internet:

When a company or industry is facing changes to its business due to technology, it will argue against the need for change based on the moral importance of its work, rather than trying to understand the social underpinnings.

I imagine that a lot of people at the Times are indeed trying to understand the social underpinnings of the changes occurring in the media and trying to respond in useful ways. But that doesn't mean that everyone on the inside is, or even that the most influential and high-profile people in the trenches are. And that adds an internal social challenge to the external technological challenge.

Alas, we see much the same dynamic playing out in universities across the country, including my own. Some dinosaurs have been around for a long time. Others are near the beginning of their careers. The internal social challenges are every bit as formidable as the external economic and technological ones.


Posted by Eugene Wallingford | Permalink | Categories: General

July 23, 2013 9:46 AM

Some Meta-Tweeting Silliness

my previous tweet
missed an opportunity,
should have been haiku

(for @fogus)


Posted by Eugene Wallingford | Permalink | Categories: General

July 15, 2013 2:41 PM

Version Control for Writers and Publishers

Mandy Brown again, this time on writing tools without memory:

I've written of the web's short-term memory before; what Manguel trips on here is that such forgetting is by design. We designed tools to forget, sometimes intentionally so, but often simply out of carelessness. And we are just as capable of designing systems that remember: the word processor of today may admit no archive, but what of the one we build next?

This is one of those places where the software world has a tool waiting to reach a wider audience: the version control system. Programmers using version control can retrieve previous states of their code all the way back to its creation. The granularity of the versions is limited only by the frequency with which they "commit" the code to the repository.

The widespread adoption of version control and the existence of public histories at places such as GitHub have even given rise to a whole new kind of empirical software engineering, in which we mine a large number of repositories in order to understand better the behavior of developers in actual practice. Before, we had to contrive experiments, with no assurance that devs behaved the same way under artificial conditions.

Word processors these days usually have an auto-backup feature to save work as the writer types text. Version control could be built into such a feature, giving the writer access to many previous versions without the need to commit changes explicitly. But the better solution would be to help writers learn the value of version control and develop the habits of committing changes at meaningful intervals.
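
To make the idea concrete, here is a minimal sketch of what such a save-time hook might look like, written in Python against git's command-line interface. The function name and interface are mine, invented for illustration; the sketch assumes the document already lives in a git repository and that git is on the user's path:

    import subprocess
    from pathlib import Path

    def commit_on_save(path: Path, message: str = "autosave") -> None:
        # Stage the file the editor just saved.
        subprocess.run(["git", "add", str(path)], check=True)
        # 'git diff --cached --quiet' exits non-zero when staged
        # changes exist, so we create a commit only if the text changed.
        changed = subprocess.run(["git", "diff", "--cached", "--quiet"])
        if changed.returncode != 0:
            subprocess.run(["git", "commit", "-m", message], check=True)

A word processor could call such a hook from its auto-backup timer, giving the writer a history for free. Committing explicitly at meaningful intervals would still produce a more useful history, which is why teaching the habit matters.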

Digital version control offers several advantages over the writer's (and programmer's) old-style history of print-outs of previous versions, marked-up copy, and notebooks. An obvious one is space. A more important one is the ability to search and compare old versions more easily. We programmers benefit greatly from a tool as simple as diff, which can tell us the textual differences between two files. I use diff on non-code text all the time and imagine that professional writers could use it to better effect than I.
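
For example, Python's standard difflib module can produce the same kind of unified diff for two drafts of prose that programmers see for two versions of source code. A small sketch, with made-up drafts:

    import difflib

    draft_1 = ["It was a dark and stormy night.",
               "The rain fell in torrents."]
    draft_2 = ["It was a dark and stormy night.",
               "Rain fell in sheets, except at occasional intervals."]

    # unified_diff yields the familiar +/- lines marking what changed
    for line in difflib.unified_diff(draft_1, draft_2,
                                     fromfile="draft-1", tofile="draft-2",
                                     lineterm=""):
        print(line)

The output shows the unchanged sentence as context and the revised sentence twice, once prefixed with - and once with +, which is exactly the view a writer wants when comparing revisions.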

The use of version control by programmers leads to profound changes in the practice of programming. I suspect that the same would be true for writers and publishers, too.

Most version control systems these days work much better with plain text than with the binary data stored by most word processing programs. As discussed in my previous post, there are already good reasons for writers to move to plain text and explicit mark-up schemes. Version control and text analysis tools such as diff add another layer of benefit. Simple mark-up systems like Markdown don't even impose much burden on the writer, resembling as they do how so many of us used to prepare text in the days of the typewriter.

Some non-programmers are already using version control for their digital research. Check out William Turkel's How To for doing research with digital sources. Others, such as The Programming Historian and A Companion to Digital Humanities, don't seem to mention it. But these documents refer mostly to programs for working with text. The next step is to encourage adoption of version control for writers doing their own thing: writing.

Then again, it has taken a long time for version control to gain such widespread acceptance even among programmers, and it's not yet universal. So maybe adoption among writers will take a long time, too.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

July 11, 2013 2:57 PM

Talking to the New University President about Computer Science

Our university recently hired a new president. Yesterday, he and the provost came to a meeting of the department heads in humanities, arts, and sciences, so that he could learn a little about the college. The dean asked each head to introduce his or her department in one minute or less.

I came in under a minute, as instructed. Rather than read a litany of numbers that he can read in university reports, I focused on two high-level points:

  • Major enrollment has recovered nicely since the deep trough after the dot-com bust and is now steady. We have near-100% placement, but local and state industry could hire far more graduates.
  • For the last few years we have also been working to reach more non-majors, which is a group we under-serve relative to most other schools. This should be an important part of the university's focus on STEM and STEM teacher education.

I closed with a connection to current events:

We think that all university graduates should understand what 'metadata' is and what computer programs can do with it -- enough so that they can understand the current stories about the NSA and be able to make informed decisions as a citizen.

I hoped that this would be provocative and memorable. The statement elicited laughs and head nods all around. The president commented on the Snowden case, asked me where I thought he would land, and made an analogy to The Man Without a Country. I pointed out that everyone wants to talk about Snowden, including the media, but that's not even the most important part of the story. Stories about people are usually of more interest than stories about computer programs and fundamental questions about constitutional rights.

I am not sure how many people believe that computer science, or at least the foundations of computing in the modern world, is a necessary part of a university education these days. Some schools have computing or technology requirements, and there is plenty of press for the "learn to code" meme, even beyond the CS world. But I wonder how many US university graduates in 2013 understand enough computing (or math) to understand this clever article and apply that understanding to the world they live in right now.

Our new president seemed to understand. That could bode well for our department and university in the coming years.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

July 08, 2013 1:05 PM

A Random Thought about Metadata and Government Surveillance

In a recent mischievous mood, I decided it might be fun to see the following.

The next whistleblower with access to all the metadata that the US government is storing on its citizens assembles a broad list of names: Republican and Democrat; legislative, executive, and judicial branches; public officials and private citizens. The only qualification for getting on the list is that the person has uttered any variation of the remarkably clueless statement, "If you aren't doing anything wrong, then you have nothing to hide."

The whistleblower then mines the metadata and, for each person on the list, publishes a brief that demonstrates just how much someone with that data can conclude -- or insinuate -- about a person.

If they haven't done anything wrong, then they don't have anything to worry about. Right?


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

June 10, 2013 2:41 PM

Unique in Exactly the Same Way

Ah, the idyllic setting of my youth:

When people refer to "higher education" in this country, they are talking about two systems. One is élite. It's made up of selective schools that people can apply to -- schools like Harvard, and also like U.C. Santa Cruz, Northeastern, Penn State, and Kenyon. All these institutions turn most applicants away, and all pursue a common, if vague, notion of what universities are meant to strive for. When colleges appear in movies, they are verdant, tree-draped quadrangles set amid Georgian or Gothic (or Georgian-Gothic) buildings. When brochures from these schools arrive in the mail, they often look the same. Chances are, you'll find a Byronic young man reading "Cartesian Meditations" on a bench beneath an elm tree, or perhaps his romantic cousin, the New England boy of fall, a tousle-haired chap with a knapsack slung back on one shoulder. He is walking with a lovely, earnest young woman who apparently likes scarves, and probably Shelley. They are smiling. Everyone is smiling. The professors, who are wearing friendly, Rick Moranis-style glasses, smile, though they're hard at work at a large table with an eager student, sharing a splayed book and gesturing as if weighing two big, wholesome orbs of fruit. Universities are special places, we believe: gardens where chosen people escape their normal lives to cultivate the Life of the Mind.

I went to a less selective school than the ones mentioned here, but the vague ideal of higher education was the same. I recognized myself, vaguely, in the passage about the tousle-haired chap with a knapsack, though on a Midwestern campus. I certainly pined after a few lovely, earnest young women with a fondness for scarves and the Romantic poets in my day. These days, I have become the friendly, glasses-wearing, always-smiling prof in the recruiting photo.

The descriptions of movie scenes and brochures, scarves and Shelley and approachable professors, reminded me most of something my daughter told me as she waded through recruiting literature from so many schools a few years ago, "Every school is unique, dad, in exactly the same way." When the high school juniors see through the marketing facade of your pitch, you are in trouble.

That unique-in-the-same-way character of college and university pitches is a symptom of what lies at the heart of the coming "disruption" of what we all think of as higher education. The traditional ways for a school to distinguish itself from its peers, and even from schools it thinks of as lesser rivals, are becoming less effective. I originally wrote "disappearing", but they are now ubiquitous, as every school paints the same picture, stresses the same positive attributes, and tries not to talk too much about the negatives they and their peers face. Too many schools chasing too few tuition-paying customers accelerates the process.

Trying to protect the ideal of higher education is a noble effort now being conducted in the face of a rapidly changing landscape. However, the next sentence of the recent New Yorker article Laptop U, from which the passage quoted above comes, reminds us:

But that is not the kind of higher education most Americans know. ...

It is the other sort of higher education that will likely be the more important battleground on which higher ed is disrupted by technology.

We are certainly beginning to have such conversations at my school, and we are starting to hear rumblings from outside. My college's dean and our new university president recently visited the Fortune 100 titan that dominates local industry. One of the executives there gave them several documents they've been reading there, including "Laptop U" and the IPPR report mentioned in it, "An Avalanche is Coming: Higher Education and the Revolution Ahead".

It's comforting to know your industry partners value you enough to want to help you survive a coming revolution. It's also hard to ignore the revolution when your partners begin to take for granted that it will happen.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 07, 2013 1:53 PM

Sentences to Ponder

Henry Rollins:

When one beast dumps you, summon the guts to find another. If it tries to kill you, the party has definitely started. Otherwise, life is a slow retirement.

Rollins is talking about why he's not making music anymore, but his observation applies to other professions. We all know programmers who are riding out the long tail of an intellectual challenge that died long ago. College professors, too.

I have to imagine that this is a sad life. It certainly leaves a lot of promise unfulfilled.

If you think you have a handle on the beast, then the beast has probably moved on. Find a new beast with which to do battle.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

June 04, 2013 2:43 PM

A Simple Confession

My Unix-toting brethren may revoke my CS card for saying this, but I really do like to install programs this way:

    Installing ApplicationX

    1. Open the disk image
    2. Drag ApplicationX to your Applications folder
    3. Eject the disk image

The app loses points if I really have to drag it to the Applications folder. The Desktop should do.

I understand the value in ./configure and ./make and setting paths and... but it sure is nice when I don't have to use them.


Posted by Eugene Wallingford | Permalink | Categories: General

May 31, 2013 1:44 PM

Quotes of the Week, in Four Dimensions

Engineering.

Michael Bernstein, in A Generation Ago, A Thoroughly Modern Sampling:

The AI Memos are an extremely fertile ground for modern research. While it's true that what this group of pioneers thought was impossible then may be possible now, it's even clearer that some things we think are impossible now have been possible all along.

When I was in grad school, we read a lot of new and recent research papers. But the most amazing, most educational, and most inspiring stuff I read was old. That's often true today as well.

Science.

Financial Agile tweets:

"If it disagrees with experiment, it's wrong". Classic.

... with a link to The Scientific Method with Feynman, which has a wonderful ten-minute video of the physicist explaining how science works. Among its important points is that guessing is a huge part of science. It's just that scientists have a way of telling which guesses are right and which are wrong.

Teaching.

James Boyk, in Six Words:

Like others of superlative gifts, he seemed to think the less gifted could do as well as he, if only they knew a few powerful specifics that could readily be conveyed. Sometimes he was right!

"He" is Leonid Hambro, who played with Victor Borge and P. D. Q. Bach but was also well-known as a teacher and composer. Among my best teachers have been some extraordinarily gifted people. I'm thankful for the time they tried to convey their insights to the likes of me.

Art.

Amanda Palmer, in a conference talk:

We can only connect the dots that we collect.

Palmer uses this sentence to explain in part why all art is about the artist, but it means something more general, too. You can build, guess, and teach only with the raw materials that you assemble in your mind and your world. So collect lots of dots. In this more prosaic sense, Palmer's sentence applies not only to art but also to engineering, science, and teaching.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

May 26, 2013 9:45 AM

Programming Magic and Business Skeuomorphism

Designer Craig Mod offers Marco Arment's The Magazine as an exemplar of Subcompact Publishing in the digital age: "No cruft, all substance. A shadow on the wall."; a minimal disruptor that capitalizes on the digital medium without tying itself down with the strictures of twentieth-century hardcopy technology.

After detailing the advantages of Arment's approach, Mod points out the primary disadvantage: you have to be able to write an iOS application. Which leads to this gem:

The fact that Marco -- a programmer -- launched one of the most 'digitally indigenous' contemporary tablet publications is indicative of two things:
  1. Programmers are today's magicians. In many industries this is obvious, but it's now becoming more obvious in publishing. Marco was able to make The Magazine happen quickly because he saw that Newsstand was underutilized and understood its capabilities. He knew this because he's a programmer. Newsstand wasn't announced at a publishing conference. It was announced at the WWDC.
  2. The publishing ecosystem is now primed for complete disruption.

If you are a non-programmer with ideas, don't think "I just need a programmer"; instead think, "I need a technical co-founder". A lot of people think of programming as Other, as a separate world from what they do. Entrepreneurs such as Arment, and armies of young kids writing video games and apps for their friends, know instead that it is a tool they can use to explore their interests.

Mod offers a nice analogy from the design world to explain why entrenched industry leaders and even prospective entrepreneurs tend to fall into the trap of mimicking old technology in their new technologies: business skeuomorphism.

For example, designers "bring the mechanical camera shutter sound to digital cameras because it feels good" to users. In a similar way, a business can transfer a decision made under the constraints of one medium or market into a new medium or market in which the constraints no longer apply. Under new constraints, and with new opportunities, the decision is no longer a good one, let alone necessary or optimal.

As usual, I am thinking about how these ideas relate to the disruption of university education. In universities, as in the publishing industry, business skeuomorphism is rampant. What is the equivalent of the Honda N360 in education? Is it Udacity or Coursera? Enstitute? Or something simpler?


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 17, 2013 3:26 PM

Pirates and Tenure

I recently read The Sketchbook of Susan Kare, "the Artist Who Gave Computing a Human Face", which referred to the Apple legend of the Pirate Flag:

[Kare's] skull-and-crossbones design would come in handy when Jobs issued one of his infamous motivational koans to the Mac team: "It's better to be a pirate than join the Navy."

For some reason, that line brought to mind a favorite saying of one of my friends, Sid Kitchel:

Real men don't accept tenure.

If by some chance they do accept tenure, they should at least never move into administration, even temporarily. It's a bad perch from which to be a pirate.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

April 23, 2013 4:16 PM

"Something Bigger Than Me"

In this interview with The Setup, Patric King talks about his current work:

Right now, my focus is bringing well-designed marketing to industries I believe in, to help them develop more financing. ... It is not glamorous, but it is the right thing to do. Designing pretty things is nice, but it's time for me to do something bigger than me.

Curly says, 'one thing... just one thing'

That's a pretty good position to be in: bringing value to a company or industry you believe in. Sometimes, we find such positions by virtue of the career path we choose. Those of us who teach as a part of our jobs are lucky in this regard.

Other times, we have to make a conscious decision to seek positions of this sort, or create the company we want to be in. That's what King has done. His skill set gives him more latitude than many people have. Those of us who can create software have more freedom than most other people, too. What an opportunity.

King's ellipsis is filled with the work that matters to him. As much as possible, when the time is right, we all should find the work that replaces our own ellipses with something that really matters to us, and to the world.


Posted by Eugene Wallingford | Permalink | Categories: General

April 14, 2013 6:25 PM

Scientists Being Scientists

Watson and Crick announced their discovery of the double helix structure of DNA in Molecular Structure of Nucleic Acids, a marvel of concise science writing. It has been widely extolled for how much information it packs into a single page, including the wonderfully understated line, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material."

As I read this paper again recently, though, this passage stood out:

The previously published X-ray data on deoxyribose nucleic acid are insufficient for a rigorous test of our structure. So far as we can tell, it is roughly compatible with the experimental data, but it must be regarded as unproved until it has been checked against more exact results.

They are unpretentious sentences. They do nothing special, stating simply that more and better data are needed to test their hypothesis. This is not a time for hyperbole. It is a time to get back to the lab.

Just scientists being scientists.


Posted by Eugene Wallingford | Permalink | Categories: General

April 10, 2013 4:03 PM

Minor Events in the Revolution at Universities

This morning I ran across several articles that had me thinking yet again about the revolution I see happening in the universities (*).

First, there was this recent piece in the New York Times about software that grades essays. Such software is probably essential for MOOCs in many disciplines, but it would also be useful in large lecture sections of traditional courses at many universities. The software isn't perfect, and skeptics abound. But the creator of the EdX software discussed in the article says:

This is machine learning and there is a long way to go, but it's good enough and the upside is huge.

It's good enough, and the upside is huge. Entrenched players scoff. Classic disruption at work.

Then there was this piece from the Nieman Journalism Lab about an online Dutch news company that wants readers to subscribe to individual journalists. Is this really news in 2013? I read a lot of technical and non-technical material these days via RSS feeds from individual journalists and bloggers. Of course, that's not the model yet for traditional newspapers and magazines.

... but that's the news business. What about the revolution in universities? The Nieman Lab piece reminded me of an old article in Vanity Fair about Politico, a news site founded by a small group of well-known political journalists who left their traditional employers to start the company. They all had strong "personal brands" and journalistic credentials. Their readers followed them to their new medium. Which got me to thinking...

What would happen if the top 10% of the teachers at Stanford or Harvard or Williams College just walked out to start their own university?

Of course, in the time since that article was published, we have seen something akin to this, with the spin-off of companies like Coursera and Udacity. However, these new education companies are partnering with traditional universities and building off the brands of their partners. At this point in time, the brand of a great school still trumps the individual brands of most all its faculty. But one can imagine a bolder break from tradition.

What happens when technology gives a platform to a new kind of teacher who bypasses the academic mainstream to create and grow a personal brand? What happens when this new kind of teacher bands together with a few like-minded renegades to use the same technology to scale up to the size of a traditional university, or more?

That will never happen, or so many of us in the academy are saying. This sort of thinking is what makes the Dutch news company mentioned above seem like such a novelty in the world of journalism. Many journalists and media companies, though, now recognize the change that has happened around them.

Which leads to a final piece I read this morning, a short blog entry by Dave Winer about Ezra Klein's epiphany on how blogging and journalism are now part of a single fabric. Winer says:

It's tragic that it took a smart guy like Klein so long to understand such a basic structural truth about how news, his own profession, has been working for the last 15 years.

I hope we aren't saying the same thing about the majority of university professors fifteen or twenty years from now. As we see in computers that grade essays, sometimes a new idea is good enough, and the upside is huge. More and more people will experiment with good-enough ideas, and even ideas that aren't good enough yet, and as they do the chance of someone riding the upside of the wave to something really different increases. I don't think MOOCs are a long-term answer to any particular educational problem now or in the future, but they are one of the laboratories in which these experiments can be played out.

I also hope that fifteen or twenty years from now someone isn't saying about skeptical university professors what Winer says so colorfully about journalists skeptical of the revolution that has redefined their discipline while they worked in it:

The arrogance is impressive, but they're still wrong.

~~~~

(*).   Nearly four years later, Revolution Out There -- and Maybe In Here remains one of my most visited blog entries, and one that elicits more reader comments than most. I think it struck a chord.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 09, 2013 3:16 PM

Writing a Book Is Like Flying A Spaceship

I've always liked this quote from the preface of Pragmatic Ajax, by Gehtland, Galbraith, and Almaer:

Writing a book is a lot like (we imagine) flying a spaceship too close to a black hole. One second you're thinking "Hey, there's something interesting over there," and a picosecond later, everything you know and love has been sucked inside and crushed.

Programming can be like that, too, in a good way. Just be sure to exit the black hole on the other side.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

March 30, 2013 8:43 AM

"It's a Good Course, But..."

Earlier this week I joined several other department heads to eat lunch with a bunch of high school teachers who were on campus for the Physics Olympics. The teachers were talking shop about the physics courses at their schools, and eventually the conversation turned to AP Physics. One of the teachers said, "It's a good course, but..."

A lot of these teachers would rather not offer AP Physics at all. One teacher described how in earlier days they were able to teach an advanced physics course of their own design. They had freedom to adapt to the interest of their students and to try out new ideas they encountered at conferences. Even though the advanced physics course had first-year physics as a prerequisite, they had plenty of students interested and able to take the second course.

The introduction of AP Physics created some problems. It's a good course, they all agreed, but it is yet another AP course for their students to take, and yet another AP exam for the students to prepare for. Most students can't or don't want to take all the AP courses, due to the heavier workload and often grueling pace. So in the end, they lose potential students who choose not to take the physics class.

Several of these teachers tried to make this case to heads of their divisions or to their principals, but to no avail.

This makes me sad. I'd like to see as many students taking science and math courses in high school as possible, and creating unnecessary bottlenecks hurts that effort.

There is a lot of cultural pressure these days to accelerate the work that HS students do. K-12 school districts and their administrators see the PR boon of offering more, and more advanced, courses. State legislators are creating incentives for students to earn college credit while in high school, and funding for schools can reflect that. Parents love the idea of their children getting a head start on college, both because it might save money down the line and because they earn some vicarious pleasure in the achievement of their children.

On top of all this, the students themselves often face a lot of peer pressure from their friends and other fellow students to be doing and achieving more. I've seen that dynamic at work as my daughters have gone through high school.

Universities don't seem as keen about AP as they used to, but they send a mixed message to parents and students. On the one hand, many schools give weight in their admission decisions to the number of AP courses completed. This is especially true with more elite schools, which use this measure as a way to demonstrate their selectivity. Yet many of those same schools are reluctant to give full credit to students who pass the AP exam, at least as major credit, and require students to take their intro course anyway.

This reluctance is well-founded. We don't see any students who have taken AP Computer Science, so I can't comment on that exam, but I've talked with several Math faculty here about their experiences with calculus. They say that AP Calculus teaches a lot of good material, but the rush to cover the required calculus content often leaves students with weak algebra skills. The students manage to succeed in the course despite these weaknesses, but when they reach more advanced university courses -- even Calc II -- the weaknesses come back to haunt them.

As a parent of current and recent high school students, I have observed the student experience. AP courses try to prepare students for the "college experience" and as a result cover a lot of material. The students see them as grueling experiences, even when they enjoy the course content.

That concerns me a bit. For students who know they want to be math or science majors, these courses are welcome challenges. For the rest of the students, who take the courses primarily to earn college credit or to explore the topic, the courses are so grueling that they dampen the fun of learning.

Call me old-fashioned, but I think of high school as a time to learn about a lot of different things, to sample broadly from all areas of study. Sure, students should build up the skills necessary to function in the workplace and go to college, but the emphasis should be on creating a broadly educated citizen, not training a miniature college student. I'd rather students get excited about learning physics, or math, or computer science, so that they will want to dive deeper when they get to college.

A more relaxed, more flexible calculus class or physics course might attract more students than a grueling AP course. This is particularly important at a time when everyone is trying to increase interest in STEM majors.

My daughters have had a lot of great teachers, both in and out of their AP courses. I wish some of those teachers had had more freedom to spark student interest in the topic, rather than student and teacher alike facing the added pressure of taking the AP exam, earning college credits, and affecting college admission decisions.

It's a good course, but let students feel the thrill of learning first.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

February 28, 2013 2:52 PM

The Power of a Good Abstract

Someone tweeted a link to Philip Greenspun's M.S. thesis yesterday. This is how you grab your reader's attention:

A revolution in earthmoving, a $100 billion industry, can be achieved with three components: the GPS location system, sensors and computers in earthmoving vehicles, and SITE CONTROLLER, a central computer system that maintains design data and directs operations. The first two components are widely available; I built SITE CONTROLLER to complete the triangle and describe it here.

Now I have to read the rest of the thesis.

You could do worse than use Greenspun's first two sentences as a template for your next abstract:

A revolution in <major industry or research area> can be achieved with <n> components: <component-1>, <component-2>, ... and <component-n>. The first <n-1> components are widely available. I built <program name> to meet the final need and describe it here.

I am adding this template to my toolbox of writing patterns, alongside Kent Beck's four-sentence abstract (scroll down to Kent's name), which generalizes the idea of one startling sentence that arrests the reader. I also like good advice on how to write concise, incisive thesis statements, such as that in Matt Might's Advice for PhD Thesis Proposals and Olin Shivers's classic Dissertation Advice.

As with any template or pattern, overuse can turn a good idea into a cliché. If readers repeatedly see the same cookie-cutter format, it begins to look stale and will cause the reader to lose interest. So play with variations on the essential theme: I have solved an important problem. This is my solution.

If you don't have a great abstract, try again. Think hard about your own work. Why is this problem important? What is the big win from my solution? That's a key piece of Might's advice for graduate students: state clearly and unambiguously what you intend to achieve.

Indeed, approaching your research in a "test-driven" way makes a lot of sense. Before embarking on a project, try to write the startling abstract that will open the paper or dissertation you write when you have succeeded. If you can't identify the problem as truly important, then why start at all? Maybe you should pick something more valuable to work on, something that matters enough that you can write a startling abstract for the result. That's a key piece of advice shared by Richard Hamming in his You and Your Research.

And whatever you do, don't oversell a minor problem or a weak solution with an abstract that promises too much. Readers will be disappointed at best and angry at worst. If you oversell even a little bit too many times, you will become like the boy who cried wolf. No one will believe your startling claim even when it's on the mark.

Greenspun's startling abstract ends as strongly as it begins. Of course, it helps if you can close with a legitimate appeal to ameliorating poverty around the world:

This area is exciting because so much of the infrastructure is in place. A small effort by computer scientists could cut the cost of earthmoving in half, enabling poor countries to build roads and rich countries to clean up hazardous waste.

I'm not sure adding another automated refactoring to Eclipse or creating another database library can quite rise to the level of empowering the world's poor. But then, you may have a different audience.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

February 17, 2013 12:16 PM

The Disruption of Education: B.F. Skinner, MOOCs, and SkillShare

Here are three articles, all different, but with a connection to the future of education.

•   Matthew Howell, Teaching Programming

Howell is a software developer who decided to start teaching programming on the side. He offers an on-line course through SkillShare that introduces non-programmers to the basic concepts of computer programming, illustrated using JavaScript running in a browser. This article describes some of his reasons for teaching the course and shares a few things he has learned. One was:

What is the ideal class size? Over the year, I've taught classes that ranged in size from a single person to as many as ten. Through that experience, I've settled on five as my ideal.

Anyone who has taught intro programming in a high school or university is probably thinking, um, yeah, that would be great! I once taught an intermediate programming section with fifty or so people, though most of my programming courses have ranged from fifteen to thirty-five students. All other things being equal, smaller is better. Helping people learn to write and make things almost always benefits from one-on-one time and time for small groups to critique design together.

Class size is, of course, one of the key problems we face in education these days, both K-12 and university. For a lot of teaching, n = 5 is just about perfect. For upper-division project courses, I prefer four groups of four students, for a total of sixteen. But even at that size, the costs incurred by a university offering such sections are rising a lot faster than its revenues.

With MOOCs all the rage, Howell is teaching at the other end of the spectrum. I expect the future of teaching to see a lot of activity at both scales. Those of us teaching in the middle face bleaker prospects.

•   Mike Caulfield, B. F. Skinner on Teaching Machines (1954)

Caulfield links to this video of B.F. Skinner describing a study on the optimal conditions for self-instruction using "teaching machines" in 1954. Caulfield points out that, while these days people like to look down on Skinner's behaviorist view of learning, he understood education better than many of his critics, and that others are unwittingly re-inventing many of his ideas.

For example:

[Skinner] understands that it is not the *machine* that teaches, but the person that writes the teaching program. And he is better informed than almost the entire current educational press pool in that he states clearly that a "teaching machine" is really just a new kind of textbook. It's what a textbook looks like in an age where we write programs instead of paragraphs.

That's a great crystallizing line by Caulfield: A "teaching machine" is what a textbook looks like in an age where we write programs instead of paragraphs.

Caulfield reminds us that Skinner said these things in 1954 and cautions us to stop asking "Why will this work?" about on-line education. That question presupposes that it will. Instead, he suggests we ask ourselves, "Why will this work this time around?" What has changed since 1954, or even 1994, that makes it possible this time?

This is a rightly skeptical stance. But it is wise to be asking the question, rather than presupposing -- as so many educators these days do -- that this is just another recursion of the "technology revolution" that never quite seems to revolutionize education after all.

•   Clayton Christensen in Why Apple, Tesla, VCs, academia may die

Christensen didn't write this piece, but reporter Cromwell Schubarth quotes him heavily throughout on how disruption may be coming to several companies and industries of interest to his Silicon Valley readership.

First, Christensen reminds young entrepreneurs that disruption usually comes from below, not from above:

If a newcomer thinks it can win by competing at the high end, "the incumbents will always kill you".

If they come in at the bottom of the market and offer something that at first is not as good, the legacy companies won't feel threatened until too late, after the newcomers have gained a foothold in the market.

We see this happening in higher education now. Yet most of my colleagues here on the faculty and in administration are taking the position that leaves legacy institutions most vulnerable to overthrow from below. "Coursera [or whoever] can't possibly do what we do", they say. "Let's keep doing what we do best, only better." That will work, until it doesn't.

Says Christensen:

But now online learning brings to higher education this technological core, and people who are very complacent are in deep trouble. The fact that everybody was trying to move upmarket and make their university better and better and better drove prices of education up to where they are today.

We all want to get better. It's a natural desire. My university understands that its so-called core competency lies in the niche between the research university and the liberal arts college, so we want to optimize in that space. As we seek to improve, we aspire to be, in our own way, like the best schools in their niches. As Christensen pointed out in The Innovator's Dilemma, this is precisely the trend that kills an institution when it meets a disruptive technology.

Later in the article, Christensen talks about how many schools are getting involved in online learning, sometimes investing significant resources, but almost always in service of the existing business model. Yet other business models are being born, models that newcomers are willing -- and sometimes forced -- to adopt.

One or more of these new models may be capable of toppling even the most successful institutions. Christensen describes one such candidate, a just-in-time education model in which students learn something, go off to use it, and then come back only when they need to learn what they need to know in order to take their next steps.

This sort of "learn and use", on-the-job learning, whether online or in person, is a very different way of doing things from school as we know it. It is not especially compatible with the way most universities are organized to educate people. It is, however, plenty compatible with on-line delivery and thus offers newcomers to the market the pebble they may use to bring down the university.

~~~~

The massively open on-line course is one form the newcomers are taking. The smaller, more intimate offering enabled by the likes of SkillShare is another. It may well be impossible for legacy institutions caught in the middle to fend off challenges from both directions.

As Caulfield suggests, though, we should be skeptical. We have seen claims about technology upending schools before. But we should adopt the healthy skepticism of the scientist, not the reactionary skepticism of the complacent or the scared. The technological playing field has changed. What didn't work in 1954 or 1974 or 1994 may well work this time.

Will it? Christensen thinks so:

Fifteen years from now more than half of the universities will be in bankruptcy, including the state schools. In the end, I am excited to see that happen.

I fear that universities like mine are at the greatest risk of disruption, should the wave that Christensen predicts come. I don't know many university faculty who are excited to see it happen. I just hope they aren't too surprised if it does.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

February 07, 2013 5:01 PM

Quotes of the Day

Computational Thinking Division. From Jon Udell, another lesson that programming and computing teach us, one that is useful out in the world:

Focus on understanding why the program is doing what it's doing, rather than why it's not doing what you wanted it to.

This isn't the default approach of everyone. Most of my students have to learn this lesson as a part of learning how to program. But it can be helpful outside of programming, in particular by influencing how we interact with people. As Udell says, it can be helpful to focus on understanding why one's spouse or child or friend is doing what she is doing, rather than on why she isn't doing what you want.

Motivational Division. From the Portland Ballet, of all places, several truths about being a professional dancer that generalize beyond the studio, including:

There's a lot you don't know.
There may not be a tomorrow.
There's a lot you can't control.
You will never feel 100% ready.

So get to work, even if it means reading the book and writing the code for the fourth time. That is where the fun and happiness are. All you can affect, you affect by the work you do.

Mac Chauvinism Division. From Matt Gemmell, this advice on a particular piece of software:

There's even a Windows version, so you can also use it before you've had sufficient success to afford a decent computer.

But with enough work and a little luck, you can afford better next time.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Managing and Leading

February 06, 2013 10:06 AM

Shared Governance and the 21st Century University

Mitch Daniels, the new president of Purdue University, says this about shared governance in An Open Letter to the People of Purdue, his initial address to the university community:

I subscribe entirely to the concept that major decisions about the university and its future should be made under conditions of maximum practical inclusiveness and consultation. The faculty must have the strongest single voice in these deliberations, but students and staff should also be heard whenever their interests are implicated. I will work hard to see that all viewpoints are fairly heard and considered on big calls, including the prioritization of university budgetary investments, and endeavor to avoid surprises even on minor matters to the extent possible.

Shared governance implies shared accountability. It is neither equitable nor workable to demand shared governing power but declare that cost control or substandard performance in any part of Purdue is someone else's problem. We cannot improve low on-time completion rates and maximize student success if no one is willing to modify his schedule, workload, or method of teaching.

Participation in governance also requires the willingness to make choices. "More for everyone" or "Everyone gets the same" are stances of default, inconsistent with the obligations of leadership.

I love the phrase, inconsistent with the obligations of leadership.

Daniels recently left the governor's house in Indiana for the president's house at Purdue. His initial address is balanced, open, and forward-looking. It is respectful of what universities do and forthright about the need to recognize changes in the world around us, and to change in response.

My university is hiring a new president, too. Our Board of Regents will announce its selection tomorrow. It is probably too much to ask that we hire a new president with the kind of vision and leadership that Daniels brings to West Lafayette. I do hope that we find someone up to the task of leading a university in a new century.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

February 03, 2013 11:10 AM

Faulkner Teaches How to Study

novelist William Faulkner, dressed for work

From this Paris Review interview with novelist William Faulkner:

INTERVIEWER

Some people say they can't understand your writing, even after they read it two or three times. What approach would you suggest for them?

FAULKNER

Read it four times.

The first three times through the book are sunk costs. At this moment, you don't understand. What should you do? Read it again.

I'm not suggesting you keep doing the same failing things over and over. (You know what Einstein said about insanity.) If you read the full interview, you'll see that Faulkner isn't suggesting that, either. He's suggesting you get back to work.

Studying computer science is different from reading literature. We can approach our study perhaps more analytically than the novel reader. And we can write code. As an instructor, I try to have a stable of ideas that students can try when they are having trouble grasping a new concept or understanding a reading, such as:

  • Assemble a list of specific questions to ask your prof.
  • Talk to a buddy who seems to understand what you don't.
  • Type in the code from the paper character by character, thinking about it as you do.
  • Draw a picture.
  • Try to explain the parts you do understand to another student.
  • Focus on one paragraph, and work backward from there to the ideas it presumes you already know.
  • Write your own program.

One thing that doesn't work very well is being passive. Often, students come to my office and say, "I don't get it." They don't bring much to the session. But the best learning is not passive; it's active. Do something. Something new, or just more.

Faulkner is quite matter-of-fact about creating and reading literature. If it isn't right, work to make it better. Technique? Method? Sure, whatever you need. Just do the work.

This may seem like silly advice. Aren't we all working hard enough already? Not all of us, and not all the time. I sometimes find that when I'm struggling most, I've stopped working hard. I get used to understanding things quickly, and then suddenly I don't. Time to read it again.

I empathize with many of my students. College is a shock to them. Things came easily in high school, and suddenly they don't. These students mean well but seem genuinely confused about what they should do next. "Why don't I understand this already?"

Sometimes our impatience is born from such experience. But as Bill Evans reminds us, some problems are too big to conquer immediately. He suggests that we accept this up front and enjoy the whole trip. That's good advice.

Faulkner shrugs his shoulders and tells us to get back to work.

~~~~

PHOTO. William Faulkner, dressed for work. Source: The Centered Librarian.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 26, 2013 5:52 PM

Computing Everywhere: Indirection

Alice: The hardest word you'll ever be asked to spell is "ichdericious".

Bob: Yikes. Which word?

A few of us have had fun with quotation in English and Scheme over the last few days, but this idea is bigger than symbols as data values in programs or even words and strings in natural language. Both are examples of a key element of computational thinking, indirection, which occurs in real life all the time.

A few years ago, my city built a new water park. To account for the influx of young children in the area, the city dropped the speed limit in the vicinity of the pool from 35 MPH to 25 MPH. The speed limit in that area had been 35 MPH for a long time, and many drivers had a hard time adjusting to the change. So the city put up a new traffic sign a hundred yards up the road, to warn drivers of the coming change. It looks like this one:

traffic sign: 40 MPH speed limit ahead

The white image in the middle of this sign is a quoted version of what drivers see down the road, the usual:

traffic sign: 40 MPH speed limit

Now, many people slow down to the new speed limit well in advance, often before reaching even the warning sign. Maybe they are being safe. Then again, maybe they are confusing a sign about a speed limit sign with the speed limit sign itself.

If so, they have missed a level of indirection.

I won't claim that computer scientists are great drivers, but I will say that we get used to dealing with indirection as a matter of course. A variable holds a value. A pointer holds the address of a location, which holds a value. A URL refers to a web page. The list goes on.
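
The chain of references is easy to make concrete in code. Here is a minimal sketch in Python (the names are hypothetical, chosen only for illustration), adding one level of indirection at a time:

    value = 42                # a name bound directly to a value
    cell = [value]            # a box holding the value: one hop away
    key = "cell"              # a string naming the box: two hops away
    env = {"cell": cell}      # a table mapping names to boxes

    print(env[key][0])        # => 42, reached through two levels of indirection

Each added level buys flexibility: put a new value in the box, and every lookup through key sees it, even though key itself never changes.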

Indirection is a fundamental element in the fabric of computation. As computation becomes an integral part of nearly everyone's daily life, there is a lot to be gained by more people understanding the idea of indirection and recognizing opportunities to put it to work to mutual benefit.

Over the last few years, Jon Udell has been making a valiant attempt to bring this issue to the attention of computer scientists and non-computer scientists alike. He often starts with the idea of a hyperlink in a web page, or the URL to which it is tied, as a form of computing indirection that everyone already groks. But his goal is to capitalize on this understanding to sneak the communication strategy of pass by reference into people's mental models.

As Udell says, most people use hyperlinks every day but don't use them as well as they might, because the distinction between "pass by value" and "pass by reference" is not a part of their usual mental machinery:

The real problem, I think, is that if you're a newspaper editor, or a city official, or a citizen, pass-by-reference just isn't part of your mental toolkit. We teach the principle of indirection to programmers. But until recently there was no obvious need to teach it to everybody else, so we don't.

He has made the community calendar his working example of pass by reference, and his crusade:

In the case of calendar events, you're passing by value when you send copies of your data to event sites in email, or when you log into an events site and recopy data that you've already written down for yourself and published on your own site.

You're passing by reference when you publish the URL of your calendar feed and invite people and services to subscribe to your feed at that URL.
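
The difference shows up in a few lines of code. Here is a minimal sketch in Python (the event data and names are invented for illustration): a copied event goes stale the moment the original changes, while a subscriber holding a reference always sees the current data.

    import copy

    event = {"title": "Town Concert", "date": "2013-02-09"}

    copied = copy.deepcopy(event)    # pass by value: the site keeps its own copy
    feed = [event]                   # pass by reference: the feed shares the event

    event["date"] = "2013-02-16"     # the organizer reschedules

    print(copied["date"])            # => 2013-02-09, a stale copy
    print(feed[0]["date"])           # => 2013-02-16, subscribers stay current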

"Pass by reference rather than by value" is one of Udell's seven ways to think like the web, his take on how to describe computational thinking in a world of distributed, network media. That essay is a good start on an essential module in any course that wants to prepare people to live in a digital world. Without these skills, how can we hope to make the best use of technology when it involves two levels of indirection, as shared citations and marginalia do?

Quotation in Scheme and pass-by-reference are different issues, but both are related in a fundamental way to the concept of indirection. We need to arm more people with this concept than just CS students learning how programming languages work.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

January 25, 2013 4:47 PM

More on Real-World Examples of Quotation

My rumination on real-world examples of quotation to use with my students learning Scheme sparked the imaginations of several readers. Not too surprisingly, they came up with better examples than my own... For example, musician and software developer Chuck Hoffman suggested:

A song, he sang.
"A song", he sang.

The meaning of these is clearly different depending on whether we treat a song as a variable or as a literal.

My favorite example came from long-time friend Joe Bergin:

"Lincoln" has seven letters.
Lincoln has seven letters.

Very nice. Joe beat me with my own example!
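
Joe's example maps directly onto code. In Scheme, the two sentences differ by a single quote mark; here is the same distinction sketched in Python, where a string literal stands in for the quoted word (the binding for Lincoln is hypothetical):

    Lincoln = "Abraham Lincoln"   # the name Lincoln, bound to a value

    print(len("Lincoln"))         # => 7: the quoted word names itself
    print(len(Lincoln))           # => 15: the unquoted name denotes its value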

As Chuck wrote, song titles create an interesting challenge, whether someone is singing a certain song or singing in a way defined by the words that happen to also be the song's title. I have certainly found it hard to find words that are both part of a title or a reference and flow seamlessly in a sentence.

This turns out to be a fun form of word play, independent of its use as a teaching example. Feel free to send me your favorites.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

January 20, 2013 10:28 AM

Scored Discussions

My wife has been on a long-term substitute teaching assignment for the last few weeks. Yesterday, I ran across the following rubric used by one of the middle school teachers there to grade "scored discussions". The class reads a book, which they discuss as a group. Students are evaluated by their contribution to the discussion, including their observable behavior.

Productive behavior
  • Uses positive body language and eye contact (5)
  • Makes a relevant comment (1)
  • Offers supporting evidence (2)
  • Uses an analogy (3)
  • Asks a clarifying question (2)
  • Listens actively -- rephrases comment before responding (3)
  • Uses good speaking skills -- clear speech, loud enough, not too fast (2)

Nonproductive behavior
  • Not paying attention (-2)
  • Interrupting (-3)
  • Irrelevant comment (-2)
  • Monopolizing (-3)

Most adults, including faculty, should be glad that their behavior is not graded according to this standard. I daresay that many of us would leave meetings with a negative score more often than we would like to admit.

I think I'll use this rubric to monitor my own behavior at the next meeting on my calendar.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

December 31, 2012 8:22 AM

Building Things and Breaking Things Down

As I look toward 2013, I've been thinking about Alan Kay's view of CS as science [ link ]:

I believe that the only kind of science computing can be is like the science of bridge building. Somebody has to build the bridges and other people have to tear them down and make better theories, and you have to keep on building bridges.

In 2013, what will I build? What will I break down, understand, and help others to understand better?

One building project I have in mind is an interactive text. One analysis project I have in mind involves functional design patterns.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns

December 29, 2012 8:47 AM

Beautiful Sentences

Matthew Ward, in the Translator's Note to "The Stranger" (Vintage Books, 1988):

I have also attempted to venture further into the letter of Camus's novel, to capture what he said and how he said it, not what he meant. In theory, the latter should take care of itself.

This approach works pretty well for most authors and most books, I imagine.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

December 12, 2012 4:18 PM

Be a Driver, Not a Passenger

Some people say that programming isn't for everyone, just as knowing how to tinker under the hood of one's car isn't for everyone. Some people design and build cars; other people fix them; and the rest of us use them as high-level tools.

Douglas Rushkoff explains why this analogy is wrong:

Programming a computer is not like being the mechanic of an automobile. We're not looking at the difference between a mechanic and a driver, but between a driver and a passenger. If you don't know how to drive the car, you are forever dependent on your driver to take you where you want to go. You're even dependent on that driver to tell you when a place exists.

This is CS Education week, "a highly distributed celebration of the impact of computing and the need for computer science education". As a part of the festivities, Rushkoff was scheduled to address members of Congress and their staffers today about "the value of digital literacy". The passage quoted above is one of ten points he planned to make in his address.

As good as the other nine points are -- and several are very good -- I think the distinction between driver and passenger is the key, the essential idea for folks to understand about computing. If you can't program, you are not a driver; you are a passenger on someone else's trip. They get to decide where you go. You may want to invent a new place entirely, but you don't have the tools of invention. Worse yet, you may not even have the tools you need to imagine the new place. The world is as it is presented to you.

Don't just go along for the ride. Drive.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

December 09, 2012 5:12 PM

Just Build Things

The advantage of knowing how to program is that you can. The danger of knowing how to program is that you will want to.

From Paul Graham's How to Get Startup Ideas:

Knowing how to hack also means that when you have ideas, you'll be able to implement them. That's not absolutely necessary..., but it's an advantage. It's a big advantage, when you're considering an idea ..., if instead of merely thinking, "That's an interesting idea," you can think instead, "That's an interesting idea. I'll try building an initial version tonight."

Writing programs, like any sort of fleshing out of big ideas, is hard work. But what's the alternative? Not being able to program, and then you'll just need a programmer.

If you can program, what should you do?

[D]on't take any extra classes, and just build things. ... But don't feel like you have to build things that will become startups. That's premature optimization. Just build things.

Even the professor in me has to admit this is true. You will learn a lot of valuable theory, tools, and practices in class. But when a big idea comes to mind, you need to build it.

As Graham says, perhaps the best way that universities can help students start startups is to find ways to "leave them alone in the right way".

Of course, programming skills are not all you need. You'll probably need to be able to understand and learn from users:

When you find an unmet need that isn't your own, it may be somewhat blurry at first. The person who needs something may not know exactly what they need. In that case I often recommend that founders act like consultants -- that they do what they'd do if they'd been retained to solve the problems of this one user.

That's when those social science courses can come in handy.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

November 23, 2012 9:34 AM

In the Spirit of the Weekend

I am thankful for human beings' capacity to waste time.

We waste it in the most creative ways. My life is immeasurably better because other people have wasted time and created art and literature. Even much of the science and technology I enjoy came from people noodling around in their free time. The universe has blessed me, and us.

~~~~

At my house, Thanksgiving lasts the whole weekend. I don't mind writing a Thanksgiving blog the day after, even though the rest of the world has already moved on to Black Friday and the next season on the calendar. My family is, I suppose, wasting time.

This note of gratitude was prompted by reading a recent joint interview with Brian Eno and Ha-Joon Chang, oddities in their respective disciplines of music and economics. I am thankful for oddities such as Eno and Chang, who add to the world in ways that I cannot. I am also thankful that I live in a world that provides me access to so much wonderful information with such ease. I feel a deep sense of obligation to use my time in a way that repays these gifts I have been given.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

November 20, 2012 12:20 PM

The Paper Was Rejected, But Do Readers Care?

The research paper I discussed in a recent blog entry on student use of a new kind of textbook has not been published yet. It was rejected by ICER 2012, a CS education conference, for what are surely good reasons from the reviewers' perspective. The paper neither describes the results of an experiment nor puts the evaluation in the context of previous work. As the first study of this sort, though, that would be difficult to do.

That said, I did not hesitate to read the paper and try to put its findings to use. The authors have a solid reputation for doing good work, and I trust them to have done reasonable work and to have written about it honestly. Were there substantial flaws with the study or the paper, I trusted myself to take them into account as I interpreted and used the results.

I realize that this sort of thing happens every day, and has for a long time: academics reading technical reports and informal papers to learn from the work of their colleagues. But given the state of publishing these days, both academic and non-academic, I couldn't help but think about how the dissemination of information is changing.

Guzdial's blog is a perfect example. He has developed a solid reputation as a researcher and as an interpreter of other people's work. Now, nearly every day, we can all read his thoughts about his work, the work of others, and the state of the world. Whether the work is published in a journal or conference or not, it will reach an eager audience. He probably still needs to publish in traditional venues occasionally in order to please his employer and to maintain a certain stature, but I suspect that he no longer depends upon that sort of publication in the way researchers did ten or thirty years ago.

True, Guzdial developed his reputation in part by publishing in journals and conferences, and they can still play that role for new researchers who are just developing their reputations. But there are other ways for the community to discover new work and recognize the quality of researchers and writers. Likewise, journals and conferences still can play a role in archiving work for posterity. But as the internet and web reach more and more people, and as we learn to do a better job of archiving what we publish there, that role will begin to fade.

The gates really are coming down.


Posted by Eugene Wallingford | Permalink | Categories: General

October 11, 2012 3:21 PM

Writing Advice for Me

I'm not a big fan of Top Ten lists on the web, unless they come from fellow Hoosier David Letterman. But I do like Number 9 on this list of writing tips:

Exclude all words that just don't add anything. This was the very best piece of advice I read when I first started blogging. Carefully re-read posts that you have written and try to remove all the extraneous words that add little or nothing.

This advice strikes a chord in me because I struggle to follow it, even when I am writing about it.


Posted by Eugene Wallingford | Permalink | Categories: General

October 01, 2012 7:40 AM

StrangeLoop 9: This and That

the Peabody Opera House

Every conference leaves me with unattached ideas floating around after I write up all my entries. StrangeLoop was no different. Were I a master of Twitter, one who live-posted throughout the conference, many of these might have been masterful tweets. Instead, they are bullets in a truly miscellaneous blog entry.

~~~~

The conference was at the Peabody Opera House (right), an 80-year-old landmark in downtown St. Louis. It shares a large city block with the Scottrade Center, home of the NHL's Blues, and a large parking garage ideally positioned for a conference-goer staying elsewhere. The main hall was perfect for plenary sessions, and four side rooms fit the parallel talks nicely.

~~~~

When I arrived at 8:30 AM on Monday, the morning refreshment table contained, in addition to the perfunctory coffee, Diet Mountain Dew in handy 12-ounce bottles. Soda was available all day. This made me happy.

Sadly, the kitchen ran out of Diet Dew before Tuesday morning. Such is life. I still applaud the conference for meeting the preferences of its non-coffee drinkers.

~~~~

During the Akka talk, I saw some code on a slide that made me mutter Ack! under my breath. That made me chuckle.

~~~~

"Man, there are a lot of Macs and iPads in this room."
-- me, at every conference session

~~~~

the St. Louis Arch, down the street from the Opera House

On Monday, I saw @fogus across the room in his Manfred von Thun jersey. I bow to you, sir. Joy is one of my favorites.

After seeing @fogus's jersey tweet, I actually ordered one for myself, a Robert Floyd jersey. Unfortunately, it didn't arrive in time for the conference. A nice coincidence: Floyd spent most of his career at Stanford, whose mascot is... the Cardinal. (The color, not the bird.)

~~~~

During Matthew Flatt's talk, I couldn't help but think Alan Kay would be proud. This is programming taken to the extreme. Kay always said that Smalltalk didn't need an operating system; just hook those primitives directly to the underlying metal. Racket might be able to serve as its own OS, too.

~~~~

I skipped a few talks. During lunch each day, I went outside to walk. That's good for my knee as well as my head. Then I skipped one talk that I wanted to see at the end of each day, so that I could hit the exercise bike and pool. The web will surely provide me reports of both ( The Database as a Value and The State of JavaScript ). Sometimes, fresh air and exercise are worth the sacrifice.

~~~~

my StrangeLoop 2012 conference badge

I turned my laptop off for the last two talks of the conference that I attended. I don't think that the result was being able to think more or better, but I definitely thought differently. Global connections seemed to surface more quickly, whereas typing notes seemed to keep me focused on local connections.

~~~~

Wednesday morning, as I hit the road for home, I ran into rush hour traffic driving toward downtown St. Louis. It took us 41 minutes to travel 12 miles. Much as I love St. Louis and this conference, I was glad to be heading home to a less crowded place.

~~~~

Even though I took walks at lunch, I was able to sneak into the lunch talks late. Tuesday's talk on Plato (OOP) and Aristotle (FP) brought a wistful smile. I spent a couple of years in grad school drawing inspiration for our lab's approach to knowledge-based systems from the pragmatists, in contrast to the traditional logical views of much of the AI world.

That talk contained two of my favorite sentences from the conference:

Computer scientists are applied metaphysicists.

And:

We have the most exciting job in the history of philosophy.

Indeed. We can encode, implement, and experiment with every model of the world we create. It is good to be the king.

This seems like a nice way to close my StrangeLoop posts for now. Now, back to work.


Posted by Eugene Wallingford | Permalink | Categories: General

September 19, 2012 4:57 PM

Don't Stop The Car

I'm not a Pomodoro guy, but this advice from The Timer Knows Best applies more generally:

Last month I was teaching my wife to drive [a manual transmission car], and it's amazing how easy stick shifting is if the car is already moving.... However, when the car is stopped and you need to get into 1st gear, it's extremely difficult. [So many things can go wrong:] too little gas, too much clutch, etc. ...

The same is true with the work day. Once you get going, you want to avoid coming to a standstill and having to get yourself moving again.

As I make the move from runner to cyclist, I have learned how much easier it is to keep moving on a bike than it is to start moving on one.

This is true of programming, too. Test-driven development helps us get started by encouraging us to focus on one new piece of functionality to implement. Keep it small, make it work, and move on to another small step. Pretty soon you are moving, and you are on your way.

Another technique many programmers use to get started is to write a failing test just before stopping work the day before. This failing test focuses you quickly and recruits your own memory to help recreate the feeling of motion. It's like a way to leave the car running in second gear.
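
Here is a minimal sketch of the technique using Python's unittest (the MemoPad class is hypothetical, invented for the example): one test passes, and one is left failing on purpose as tomorrow's starting gear.

    import unittest

    class MemoPad:
        """Just enough implemented to pass yesterday's test."""
        def __init__(self):
            self.memos = []
        def add(self, text):
            self.memos.append(text)
        def count(self):
            return len(self.memos)

    class TestMemoPad(unittest.TestCase):
        def test_add_memo(self):          # green: yesterday's finished work
            pad = MemoPad()
            pad.add("buy milk")
            self.assertEqual(pad.count(), 1)

        def test_delete_memo(self):       # red: left failing overnight, on purpose
            pad = MemoPad()
            pad.add("buy milk")
            pad.delete("buy milk")        # delete() does not exist yet
            self.assertEqual(pad.count(), 0)

    if __name__ == "__main__":
        unittest.main()

Run the suite the next morning, and the red test points you straight at delete(). You are moving again before the coffee is done.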

I'm trying to help my students, who are mostly still learning how to write code, learn how to get started when they program. Many of them seem repeatedly to find themselves sitting still, grinding their gears and trying to figure out how to write the next bit of code and get it running. Ultimately, the answer may come down to the same thing we learn when we learn to drive a stick: practice, practice, practice, and eventually you get the feel of how the gearshift works.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

August 31, 2012 3:22 PM

Two Weeks Along the Road to OOP

The month has flown by, preparing for and now teaching our "intermediate computing" course. Add to that a strange and unusual set of administrative issues, and I've found no time to blog. I did, however, manage to post what has become my most-retweeted tweet ever:

I wish I had enough money to run Oracle instead of Postgres. I'd still run Postgres, but I'd have a lot of cash.

That's an adaptation of a tweet originated by @petdance and retweeted my way by @logosity. I polished it up, sent it off, and -- it took off for the sky. It's been fun watching its ebb and flow as it reaches new sub-networks of people. From this experience I must learn at least one lesson: a lot of people are tired of sending money to Oracle.

The first two weeks of my course have led the students a few small steps toward object-oriented programming. I am letting the course evolve, with a few guiding ideas but no hard-and-fast plan. I'll write about the course's structure after I have a better view of it. For now, I can summarize the first four class sessions:

  1. Run a simple "memo pad" app, trying to identify behavior (functions) and state (persistent data). Discuss how different groupings of the functions and data might help us to localize change.
  2. Look at the code for the app. Discuss the organization of the functions and data. See a couple of basic design patterns, in particular the separation of model and view.
  3. Study the code in greater detail, with a focus on the high-level structure of an OO program in Java.
  4. Study the code in greater detail, with a focus on the lower-level structure of classes and methods in Java.

The reason we can spend so much time talking about a simple program is that students come to the course without (necessarily) knowing any Java. Most come with knowledge of Python or Ada, and their experiences with such different languages create an interesting space in which to encounter Java. Our goal this semester is for students to learn their second language as much as possible, rather than having me "teach" it to them. I'm trying to expose them to a little more of the language each day, as we learn about design in parallel. This approach works reasonably well with Scheme and functional programming in a programming languages course. I'll have to see how well it works for Java and OOP, and adjust accordingly.

Next week we will begin to create things: classes, then small systems of classes. Homework 1 has them implementing a simple array-based class to an interface. It will be our first experience with polymorphic objects, though I plan to save that jargon for later in the course.
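
To give a feel for the exercise, here is a sketch of the idea (in Python rather than the course's Java, and with invented names): an array-backed class written to an interface, used polymorphically by client code.

    from abc import ABC, abstractmethod

    class Bag(ABC):
        """The interface: the behavior any Bag promises to provide."""
        @abstractmethod
        def add(self, item): ...
        @abstractmethod
        def size(self): ...

    class ArrayBag(Bag):
        """One implementation, backed by a plain array."""
        def __init__(self):
            self.items = []
        def add(self, item):
            self.items.append(item)
        def size(self):
            return len(self.items)

    def fill(bag, *items):
        """Client code sees only the Bag interface. That is the polymorphism:
        any implementation of Bag works here, array-backed or not."""
        for item in items:
            bag.add(item)
        return bag.size()

    print(fill(ArrayBag(), "a", "b", "c"))   # => 3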

Finally, this is the new world of education: my students are sending me links to on-line sites and videos that have helped them learn programming. They want me to check them out and share them with the other students. Today I received a link to The New Boston, which has among its 2500+ videos eighty-seven beginning Java and fifty-nine intermediate Java titles. Perhaps we'll come to a time when I can out-source all instruction on specific languages and focus class time on higher-level issues of design and programming...


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

August 09, 2012 1:36 PM

Sentences to Ponder

In Why Read?, Mark Edmundson writes:

A language, Wittgenstein thought, is a way of life. A new language, whether we learn it from a historian, a poet, a painter, or a composer of music, is potentially a new way to live.

Or from a programmer.

In computing, we sometimes speak of Perlis languages, after one of Alan Perlis's best-known epigrams: A language that doesn't affect the way you think about programming is not worth knowing. A programming language can change how we think about our craft. I hope to change how my students think about programming this fall, when I teach them an object-oriented language.

But for those of us who spend our days and nights turning ideas into programs, a way of thinking is akin to a way of life. That is why the wider scope of Wittgenstein's assertion strikes me as so appropriate for programmers.

Of course, I also think that programmers should follow Edmundson's advice and learn new languages from historians, writers, and artists. Learning new ways to think and live isn't just for humanities majors.

(By the way, I'm enjoying reading Why Read? so far. I read Edmundson's Teacher many years ago and recommend it highly.)


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 23, 2012 3:14 PM

Letting Go of Old Strengths

Ward Cunningham commented on what it's like to be "an old guy who's still a programmer" in his recent Dr. Dobb's interview:

A lot of people think that you can't be old and be good, and that's not true. You just have to be willing to let go of the strengths that you had a year ago and get some new strengths this year. Because it does change fast, and if you're not willing to do that, then you're not really able to be a programmer.

That made me think of the last comment I made in my posts on JRubyConf:

There is a lot of stuff I don't know. I won't run out of things to read and learn and do for a long, long, time.

This is an ongoing theme in the life of a programmer, in the life of a teacher, and the life of an academic: the choice we make each day between keeping up and settling down. Keeping up is a lot more fun, but it's work. If you aren't comfortable giving up what you were awesome at yesterday, it's even more painful. I've been lucky mostly to enjoy learning new stuff more than I've enjoyed knowing the old stuff. May you be so lucky.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 20, 2012 3:39 PM

A Philosopher of Imitation

Ian Bogost, in The Great Pretender: Turing as a Philosopher of Imitation, writes:

Intelligence -- whatever it is, the thing that goes on inside a human or a machine -- is less interesting and productive a topic of conversation than the effects of such a process, the experience it creates in observers and interlocutors.

This is a very nice one-sentence summary of Turing's thesis in Computing Machinery and Intelligence. I wrote a bit about Turing's ideas on machine intelligence a few months back, but the key idea in Bogost's essay relates more closely to my discussion in Turing's ideas on representation and universal machines.

In this centennial year of his birth, we can hardly go wrong in considering again and again the depth of Turing's contributions. Bogost uses a lovely turn of phrase in his title: a philosopher of imitation. What may sound like a slight or a trifle is, in fact, the highest of compliments. Turing made that thinkable.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

July 18, 2012 2:31 PM

Names, Values, and The Battle of Bull Run

the cover of 'Encyclopedia Brown Finds the Clues'

Author Donald Sobol died Monday. I know him best from his long-running series, Encyclopedia Brown. Like many kids of my day, I loved these stories. I couldn't get enough. Each book consisted of ten or so short mysteries solved by Encyclopedia or Sally Kimball, his de facto partner in the Brown Detective Agency. I wanted to be Encyclopedia.

The stories were brain teasers. Solving them required knowledge and, more important, careful observation and logical deduction. I learned to pay close attention while reading Encyclopedia Brown, otherwise I had no hope of solving the crime before Encyclopedia revealed the solution. In many ways, these stories prepared me for a career in math and science. They certainly were a lot of fun.

One of the stories I remember best after all these years is "The Case of the Civil War Sword", from the very first Encyclopedia Brown book. I'm not the only person who found it memorable; Rob Bricken ranks it #9 among the ten most difficult Encyclopedia Brown mysteries. The solution to this case turned on the fact that one battle had two different names. Northerners often named battles for nearby bodies of water or prominent natural features, while Southerners named them for the nearest town or prominent man-made features. So, the First Battle of Bull Run and the First Battle of Manassas were the same event.

This case taught me a bit of historical trivia and opened my mind to the idea that naming things from the Civil War was not trivial at all.

This story taught me more than history, though. As a young boy, it stood out as an example of something I surely already knew: names aren't unique. The same value can have different names. In a way, Encyclopedia Brown taught me one of my first lessons about computer science.
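
The lesson translates into a few lines of code. A minimal sketch in Python (the record itself is invented for illustration): two names, one value.

    battle = {"year": 1861, "place": "near Manassas, Virginia"}
    bull_run = battle    # the Northern name
    manassas = battle    # the Southern name

    manassas["outcome"] = "Confederate victory"
    print(bull_run["outcome"])     # => Confederate victory: same event
    print(bull_run is manassas)    # => True: two names, one value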

~~~~

IMAGE: the cover of Encyclopedia Brown Finds the Clues, 1966. Source: Topless Robot.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

July 16, 2012 3:02 PM

Refactoring Everywhere: In Code and In Text

Charlie Stross is a sci-fi writer. Some of my friends have recommended his fiction, but I've not read any. In Writing a novel in Scrivener: lessons learned, he, well, describes what he has learned writing novels using Scrivener, an app for writers well known in the Mac OS X world.

I've used it before on several novels, notably ones where the plot got so gnarly and tangled up that I badly needed a tool for refactoring plot strands, but the novel I've finished, "Neptune's Brood", is the first one that was written from start to finish in Scrivener...

... It doesn't completely replace the word processor in my workflow, but it relegates it to a markup and proofing tool rather than being a central element of the process of creating a book. And that's about as major a change as the author's job has undergone since WYSIWYG word processing came along in the late 80s....

My suspicion is that if this sort of tool spreads, the long-term result may be better structured novels with fewer dangling plot threads and internal inconsistencies. But time will tell.

Stross's lessons don't all revolve around refactoring, but being able to manage and manipulate the structure of the evolving novel seems central to his satisfaction.

I've read a lot of novels that seemed like they could have used a little refactoring. I always figured it was just me.

The experience of writing anything in long form can probably be improved by a good refactoring tool. I know I find myself doing some pretty large refactorings when I'm working on the set of lecture notes for a course.

Programmers and computer scientists have the advantage of being more comfortable writing text in code, using tools such as LaTeX and Scribble, or homegrown systems. My sense, though, is that fewer programmers use tools like this, at least at full power, than might benefit from doing so.

Like Stross, I have a predisposition against using tools with proprietary data formats. I've never lost data stored in plaintext to version creep or application obsolescence. I do use apps such as VoodooPad for specific tasks, though I am keenly aware of the exit strategy (export to text or RTFD ) and the pain trade-off at exit (the more VoodooPad docs I create, the more docs I have to remember to export before losing access to the app). One of the things I like most about MacJournal is that it's nothing but a veneer over a set of Unix directories and RTF documents. The flip side is that it can't do for me nearly what Scrivener can do.

Thinking about a prose writing tool that supports refactoring raises an obvious question: what sort of refactoring operations might it provide automatically? Some of the standard code refactorings might have natural analogues in writing, such as Extract Chapter or Inline Digression.
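
On the code side, the canonical operation is Extract Method. A small before-and-after sketch in Python (the report example is invented) suggests what its prose analogue, Extract Chapter, would do: pull a coherent chunk out, give it a name, and leave a reference where it used to sit.

    # Before: one function, like a chapter with a digression written inline.
    def report(orders):
        print("Order Report")
        print("------------")
        total = sum(order["price"] for order in orders)
        print("total:", total)

    # After: the header extracted and named, the way a digression might be
    # moved to its own chapter and merely cited here.
    def print_header():
        print("Order Report")
        print("------------")

    def report_extracted(orders):
        print_header()
        total = sum(order["price"] for order in orders)
        print("total:", total)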

Thinking about automated support for refactoring raises another obvious question, the importance of which is surely as clear to novelists as to software developers: Where are the unit tests? How will we know we haven't broken the story?

I'm not being facetious. The biggest fear I have when I refactor a module of a course I teach is that I will break something somewhere down the line in the course. Your advice is welcome!


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 14, 2012 11:01 AM

"Most Happiness Comes From Friction"

Last time, I mentioned again the value in having students learn broadly across the sciences and humanities, including computer science. This is a challenge going in both directions. Most students like to concentrate on one area, for a lot of different reasons. Computer science looks intimidating to students in other majors, perhaps especially to the humanities-inclined.

There is hope. Earlier this year, the Harvard Magazine ran The Frisson of Friction, an essay by Sarah Zhang, a non-CS student who decided to take CS 50, Harvard's intro to computer science. Zhang tells the story of hunting down a thorny, semicolon-induced bug in a program (an extension for Google's Chrome browser) on the eve of her 21st birthday. Eventually, she succeeded. In retrospect, she writes:

Plenty of people could have coded the same extension more elegantly and in less time. I will never be as good a programmer as -- to set the standard absurdly high -- Mark Zuckerberg. But accomplishments can be measured in terms relative to ourselves, rather than to others. Rather than sticking to what we're already good at as the surest path to résumé-worthy achievements, we should see the value in novel challenges. How else will we discover possibilities that lie just beyond the visible horizon?

... Even the best birthday cake is no substitute for the deep satisfaction of accomplishing what we had previously deemed impossible -- whether it's writing a program or writing a play.

The essay addresses some of the issues that keep students from seeking out novel challenges, such as fear of low grades and fear of looking foolish. At places like Harvard, students who are used to succeeding find themselves boxed in by their friends' expectations, and their own, but those feelings are familiar to students at any school. Then you have advisors who subtly discourage venturing too far from the comfortable, out of their own unfamiliarity and fear. This is a social issue as big as any pedagogical challenge we face in trying to make introductory computer science more accessible to more people.

With work, we can help students feel the deep satisfaction that Zhang experienced. Overcoming challenges often leads to that feeling. She quotes a passage about programmers in Silicon Valley, who thrive on such challenges: "Most happiness probably comes from friction." Much satisfaction and happiness come out of the friction inherent in making things. Writing prose and writing programs share this characteristic.

Sharing the deep satisfaction of computer science is a problem with many facets. Those of us who know the satisfaction know it's a problem worth solving.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

July 13, 2012 12:02 PM

How Science -- and Computing -- Are Changing History

While reading a recent Harvard Magazine article about Eric Mazur's peer instruction technique in physics teaching, I ran across a link to an older paper that fascinated me even more! Who Killed the Men of England? tells several stories of research at the intersection of history, archaeology, genomics, evolution, demography, and simulation, such as the conquest of Roman England by the Anglo Saxons.

Not only in this instance, but across entire fields of inquiry, the traditional boundaries between history and prehistory have been melting away as the study of the human past based on the written record increasingly incorporates the material record of the natural and physical sciences. Recognizing this shift, and seeking to establish fruitful collaborations, a group of Harvard and MIT scholars have begun working together as part of a new initiative for the study of the human past. Organized by [professor of medieval history Michael] McCormick, who studies the fall of the Roman empire, the aim is to bring together researchers from the physical, life, and computer sciences and the humanities to explore the kinds of new data that will advance our understanding of human history.

... The study of the human past, in other words, has entered a new phase in which science has begun to tell stories that were once the sole domain of humanists.

I love history as much as computing and was mesmerized by these stories of how scientists reading the "material record" of the world are adding to our knowledge of the human past.

However, this is more than simply a one-way path of information flowing from scientists to humanists. The scientific data and models themselves are underconstrained. The historians, cultural anthropologists, and demographers are able to provide context to the data and models and so extract even more meaning from them. This is a true collaboration. Very cool.

The rise of science is erasing boundaries between the disciplines that we all studied in school. Scholars are able to define new disciplines, such as "the study of the human past", mentioned in the passage above. These disciplines are organized with a greater focus on what is being studied than on how we are studying it.

We are also blurring the line between history and pre-history. It used to be that history required a written record, but that is no longer a hard limit. Science can read nature's record. Computer scientists can build models using genomic data and migration data that suggest possible paths of change when the written and scientific record are incomplete. These ideas become part of the raw material that humanists use to construct a coherent story of the past.

This change in how we are able to study the world highlights the importance of a broad education, something I've written about a few times recently [ 1 | 2 | 3 ] and not so recently. This sort of scholarship is best done by people who are good at several things, or at least curious and interested enough in several things to get to know them intimately. As I wrote in Failure and the Liberal Arts, it's important both not to be too narrowly trained and not to be too narrowly "liberally educated".

Even at a place like Harvard, this can leave scholars in a quandary:

McCormick is fired with enthusiasm for the future of his discipline. "It is exciting. I jump up every morning. But it is also challenging. Division and department boundaries are real. Even with a generally supportive attitude, it is difficult [to raise funds, to admit students who are excellent in more than one discipline, and so on]. ..."

So I will continue to tell computer science students to take courses from all over the university, not just from CS and math. This is one point of influence I have as a professor, advisor, and department head. And I will continue to look for ways to encourage non-CS students to take CS courses and students outside the sciences to study science, including CS. As that paragraph ends:

"... This is a whole new way of studying the past. It is a unique intellectual opportunity and practically all the pieces are in place. This should happen here--it will happen, whether we are part of it or not."

"Here" doesn't have to be Harvard. There is a lot of work to be done.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

June 30, 2012 10:52 AM

"What Were Alleged to be Ideas"

James Webb Young begins his book A Technique for Producing Ideas with a prefatory note:

The subject is properly one which belongs to the professional psychologist, which I am not. This treatment of it, therefore, can have value only as an expression of the personal experience of one who has had to earn his living by producing what were alleged to be ideas.

With a little tweaking, such as occasionally substituting a different profession for psychologist, this would make a nice disclaimer for many of my blog entries.

Come to think of it, with a little tweaking, this could serve as the basis of a disclaimer for about 98% of the web.

Thanks to David Schmüdde for a pointer to Young's delightful little book.


Posted by Eugene Wallingford | Permalink | Categories: General

June 26, 2012 4:23 PM

Adventures in Advising

Student brings me a proposed schedule for next semester.

Me: "Are you happy with this schedule?"

Student: "If I weren't, why would I have made it?"

All I can think is, "Boy, are you gonna have fun as a programmer."


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 06, 2012 3:33 PM

Advice, Platitudes, and Reasonable Decisions

I recently listened to a short clip from Seth Godin's book "The Dip". In it, he quotes Vince Lombardi as saying, "winners never quit, and quitters never win", and then says something to the effect of:

Winners quit all the time. They just quit the right stuff at the right time.

This reminded me of my recent Good Ideas Aren't Always Enough, in which I talk briefly about Ward Cunningham's experience trying to create a universal mark-up language for wiki.

How did Ward know it was the right time to stop pushing for a universal mark-up? Perhaps success was right around the corner. Maybe he just needed a better argument, or a better example, or a better mark-up language.

Inherent in this sort of lesson is a generic variation of the Halting Problem. You can't be sure that an effort will fail until it fails. But the process may never fail explicitly, simply churning on forever. What then?

That's one of the problems with giving advice of the sort my entry gave, or of the sort that Godin gives in his book. The advice itself is empty, because the opposite advice is also true. You only know which advice is right in any given context after the fact -- if ever.

How did Ward know? I'm guessing a combination of:

  • knowledge about the problem,
  • experience with this problem and others like it,
  • relationship with the community of people involved,
  • and... a little luck.

And someone may come along some day with a better argument, or a better example, or a better mark-up language, and succeed. We won't know until it happens.

Maybe such advice is nothing more than platitude. Without any context, it isn't all that helpful, except as motivation to persevere in the face of a challenge (if you want to push on) or consolation in the face of a setback (if you want to focus your energy elsewhere). Still, I think it's useful to know that other people -- accomplished people -- have faced the same choice. Both outcomes are possible. Knowing that, we can use our knowledge, experience, and relationships to make choices that make sense in our current circumstances and live with the outcomes.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

June 01, 2012 4:39 PM

Good Ideas Aren't Always Enough

Ward Cunningham

In his recent Dr. Dobb's interview, Ward Cunningham talked about the wiki community's efforts to create a universal mark-up language. Despite the many advantages of a common language, the idea never took hold. Ward's post-mortem:

So the only thing I can conclude is that as nice as having a universal or portable mark-up would be, it's not nice enough to cause people to give up what they're working on when they work on their wiki.

This is an important lesson to learn, whatever your discipline or your community. It's especially important if you hope to be an agent of change. Good ideas aren't always enough to induce change, even in a community of people working together in an explicit effort to create better ideas. There needs to be enough energy to overcome the natural inertia associated with any set of practices.

Ward's next sentence embodies even more wisdom:

I accept that as the state of nature and don't worry about it too much anymore.

Denial locks you up. Either you continue in vain to push the rejected idea, or you waste precious time and energy lamenting the perceived injustice of the failure.

Acceptance frees you to move on to your project in peace.


Posted by Eugene Wallingford | Permalink | Categories: General

May 31, 2012 3:17 PM

A Department Head's Fantasy

(I recently finished re-reading Straight Man, the 1997 novel by Richard Russo. This fantasy comes straight out of the book.)

Hank Devereaux, beleaguered chair of the English department, has been called in to meet with Dickie Pope, campus CEO. He arrives at Pope's office just as the CEO is wrapping up a meeting with chief of security Lou Steinmetz and another man. Pope says, "Hank, why don't you go on in and make yourself comfortable. I want to walk these fellas to the door." We join Devereaux's narration:

When I go over to Dickie's high windows to take in the view, I'm in time to see the three men emerge below, where they continue their conversation on the steps.... Lou's campus security cruiser is parked at the curb, and the three men stroll toward it. They're seeing Lou off, I presume, .... But when they get to the cruiser, to my surprise, all three men climb into the front seat and drive off. If this is a joke on me, I can't help but admire it. In fact, I make a mental note to employ a version of it myself, soon. Maybe, if I'm to be fired today, I'll convene some sort of emergency meeting, inviting Gracie, and Paul Rourke, and Finny, and Orshee, and one or two other pebbles from my shoe. I'll call the meeting to order, then step outside on some pretext or other, and simply go home. Get Rachel [my secretary] to time them and report back to me on how long it takes them to figure it out. Maybe even get some sort of pool going.

My relationship with my colleagues is nothing like Devereaux's. Unlike him, I like my colleagues. Unlike his colleagues, mine have always treated me with collegiality and respect. I have no reason to wish them ill will or discomfort.

Still. It is a great joke. And I imagine that there are a lot of deans and department chairs and VPs out there who harbor dark fantasies of this sort all the time, especially during those inevitable stretches of politics that plague universities. Even the most optimistic among us can be worn down by the steady drip-drip-drip of dysfunction. There have certainly been days this year when I've gone home at the end of a long week with a sense of doom and a desire for recompense.

Fortunately, an occasional fantasy is usually all I need to deflate the doom and get back to business. That is the voyeuristic allure of novels like Straight Man for me.

But there may come a day when I can't resist temptation. If you see me walking on campus wearing a Groucho Marx nose and glasses, all bets are off.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

May 22, 2012 7:53 PM

A Few Days at JRubyConf

It's been fourteen months since I last attended a conference. I decided to celebrate the end of the year, the end of my compiler course, and the prospect of writing a little code this summer by attending JRubyConf 2012. I've programmed a fair amount in Ruby but have only recently begun to play with JRuby, an implementation of Ruby in Java which runs atop the JVM. There are some nice advantages to this, including the ability to use Java graphics with Ruby models and the ability to do real concurrency. It also offers me a nice combination for the summer: I will be teaching our sophomore-level intermediate computing course this fall, which focuses in large part on OO design and Java implementation, and JRuby will let me program in Ruby while doing a little class prep at the same time.

the Stone Arch Bridge in Minneapolis

Conference organizer Nick Sieger opened the event with the obligatory welcome remarks. He said that he thinks the overriding theme of JRubyConf is being a bridge. This is perhaps a natural theme for Minneapolis, a city of many bridges and the hometown of JRuby, its lead devs, and the conference. The image above is of the Stone Arch Bridge, as seen from the ninth level of the famed Guthrie Center, the conference venue. (The yellow tint is from the window itself.)

The goal for the conference is to be a bridge connecting people to technologies. But it also aims to be a bridge among people, promoting what Sieger called "a more sensitive way of doing business". Emblematic of this goal were its Sunday workshop, a Kids CodeCamp, and its Monday workshop, Railsbridge. This is my first open-source conference, and when I look around I see the issue that so many people talk about. Of 150 or so attendees, there must be fewer than one dozen women and fewer than five African-Americans. The computing world certainly has room to make more and better connections into the world.

My next few entries will cover some of the things I learn at the conference. I start with a smile on my face, because the conference organizers gave me a cookie when I checked in this morning:

the sugar cookie JRubyConf gave me at check-in

That seems like a nice way to say 'hello' to a newcomer.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

May 11, 2012 2:31 PM

Get Busy; Time is Short

After an award-winning author had criticized popular literature, Stephen King responded with advice that is a useful reminder to us all:

Get busy. You have a short life span. You need to stop this crap about sitting there and talking about what we do, and actually do it. Because God gave you some talent, but he also gave you a certain number of years.

You don't have to be an award-winning author to waste precious time commenting on other people's work. Anyone with a web browser can fill his or her day talking about stuff, and not actually making stuff. For academics, it is a professional hazard. We need to balance the analytic and the creative. We learn by studying others' work and writing about it, but we also need to make time to make.

(The passage above comes from Stephen King, The Art of Fiction No. 189, in the wonderful on-line archive of interviews from the Paris Review.)


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

May 08, 2012 3:22 PM

Quality and Quantity, Thoroughbred Edition

I'll Have Another was not highly sought after as a yearling, when he was purchased for the relatively small sum of $11,000.

On Saturday, I'll Have Another rallied down the stretch to win the 2012 Kentucky Derby, passing Bodemeister, one of the race favorites that had led impressively from the gate. Afterward, a television commentator asked the horse's trainer, "What did you and the owner see in the horse way back that made you want to buy it?" The trainer's answer was unusually honest. He said something to this effect:

We buy a lot of horses. Some work out, and some don't. There is a lot of luck involved. You do the right things and see what happens.

This is as good an example as I've heard in a while of the relationship between quantity and quality, which my memory often connects with stories from the book Art and Fear. People are way too fond of mythologizing successes and then romanticizing the processes that lead to them. In most vocations and most avocations, the best way to succeed is to do the right things, to work hard, be unlucky a lot, and occasionally get lucky.

This mindset does not diminish the value of hard work and good practices. No, it exalts their value. What it diminishes is our sense of control over outcomes in a complex world. Do your best and you will get better. Just keep in mind that we often have a lot less control over success and failure than our mythology tends to tell us.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 21, 2012 3:57 PM

A Conflict Between Fashion and the Unfashionable

Passage of the day, courtesy of Dave Winer:

They have started incubators in every major city on the planet. Unfortunately it hasn't been stylish to learn how to program for a number of years, so there aren't that many programmers available to hire. And it takes years to get really good at this stuff.

Hey, they just need a programmer. Or fifty.

While we teach CS students to program, we need to cultivate an entrepreneurial spirit, too. What an opportunity awaits someone with ideas and the ability to carry them out.


Posted by Eugene Wallingford | Permalink | Categories: General

April 20, 2012 3:14 PM

Better Than Everyone, University Edition

Seth Godin recently mentioned something that Clay Shirky has said about the television industry: Forty years ago, you only had to be better than two other shows. Now you have to be better than everybody.

At the same time technology makes it easier for people to put their creations in front of potential viewers, it makes it harder for established players to retain control over market share. As Godin summarized, "... with a million choices, each show earns the attention it gets in every single moment".

I've mused here periodically about how these same technological changes will ultimately affect universities. It seems that many people agree that education, even higher ed, is "ripe for disruption". Startups such as Boundless are beginning to take their shot at what seems an obvious market, the intersection of education and the beleaguered publishing industry: textbooks.

Though on-line education has been growing now for years, I haven't written anything about it. For one thing, I don't know what I really think of it yet. As much as I think out loud when I blog, I usually at least have a well-formed thought or two. When it comes to on-line education, my brain is still mostly full of mush.

Not long ago, the threat of on-line education to the way traditional universities operate did not seem imminent. That is, I think, starting to change. When the primary on-line players were non-traditional alternatives such as the University of Phoenix, it seemed easy enough to sell the benefits of the brick-and-ivy campus-based education to people. But as these schools slowly build a track record -- and an alumni base -- they will become a common enough part of the popular landscape that they become an acceptable alternative to many people. And as the cost of brick-and-ivy education rises, it becomes harder and harder to sell people on its value.

Of course, we now see a burgeoning in the number of on-line offerings from established universities. Big-name schools like MIT and Harvard have made full courses, and even suites of courses, available on-line. One of my more experienced colleagues began to get antsy when this process picked up speed a few years ago. Who wouldn't prefer MIT's artificial intelligence course over ours? These courses weren't yet available for credit, which left us with hope. We offer our course as part of a coherent program of study that leads to a credential that students and employers value. But in time...

... that would change. And it has. Udacity has spun itself off from Stanford and is setting its sights on a full on-line curriculum. A recent Computer World article talks about MITx, a similar program growing out of MIT. These programs are still being created and will likely offer a different sort of credential than the universities that gave birth to them, at least at the start. Is there still hope?

Less and less. As the article reports, other established universities are now offering full CS programs on-line. The University of Illinois at Springfield started in 2006 and now has more computer science students enrolled in its on-line undergrad and M.S. programs (171 and 146, respectively) than their on-campus counterparts (121 and 129). In June, Oregon State will begin offering a CS degree program on-line.

The natural reaction of many schools is to join in the rush. Schools like mine are putting more financial and faculty resources into the creation of on-line courses and programs, because "that's where the future lies".

I think, though, that Shirky's anecdote about the TV industry serves as an important cautionary tale. The caution has two prongs.

First, you have to adapt. When a disruptive technology comes along, you have to respond. You may think that you are good enough or dominant enough to survive the wave, but you probably aren't. Giants that retain their position atop a local maximum when a new technology redefines an industry quickly change from giants to dinosaurs.

Adapting isn't easy. Clayton Christensen and his colleagues have documented how difficult it is for a company that is very good at something and delivering value in its market to change course. Even with foresight and a vision, it is difficult to overcome inertia and external forces that push a company to stay on the same track.

Second, technology lowers barriers for producers and consumers alike. It's no longer enough to be the best teaching university in your state or neighborhood. Now you have to be better than everybody. If you are a computer science department, that seems an insurmountable task. Maybe you can be better than Illinois-Springfield (and maybe not!), but how can you be better than Stanford, MIT, and Harvard?

Before joining the rush to offer programs on-line, you might want to have an idea of what it is that you will be the best at, and for whom. With degrees from Illinois-Springfield, Oregon State, Udacity, Stanford, MIT, and Harvard only a few clicks away, you will have to earn the attention -- and tuition -- you receive from every single student.

But don't dally. It's lonely as the dominant player in a market that no longer exists.


Posted by Eugene Wallingford | Permalink | Categories: General

April 09, 2012 2:53 PM

Should I Change My Major?

Recruiting brochures for academic departments often list the kinds of jobs that students get when they graduate. Brochures for CS departments tend to list jobs such as "computer programmer", "system administrator", "software engineer", and "systems analyst". More ambitious lists include "CS professor" and "entrepreneur". I've been promoting entrepreneurship as a path for our CS grads for a few years now.

This morning, I was browsing the tables at one of my college's preview day sessions and came across my new all-time favorite job title for graduates. If you major in philosophy at my university, it turns out that one of the possible future job opportunities awaiting you is...

Bishop or Pope

Learning to program gives you superhuman strength, but I'm not sure a CS major can give you a direct line to God. I say, "Go for it."


Posted by Eugene Wallingford | Permalink | Categories: General

April 06, 2012 4:29 PM

A Reflection on Alan Turing, Representation, and Universal Machines

Douglas Hofstadter speaking at UNI

The day after Douglas Hofstadter spoke here on assertions, proofs, and Gödel's theorem, he gave a second public lecture hosted by the philosophy department. Ahead of time, we knew only that Hofstadter would reflect on Turing during his centennial. I went in expecting more on the Turing test, or perhaps a popular talk on Turing's proof of The Halting Problem. Instead, he riffed on Chapter 17 from I Am a Strange Loop.

In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.

Turing, he said, is another peak in the landscape occupied by Tarski and Gödel, whose work he had discussed the night before. (As a computer scientist, I wanted to add to this set contemporaries such as Alonzo Church and Claude Shannon.) Hofstadter mentioned Turing's seminal paper about the Entscheidungsproblem but wanted to focus instead on the model of computation for which he is known, usually referred to by the name "Turing machine". In particular, he asked us to consider a key distinction that Turing made when talking about his model: that between dedicated and universal machines.

A dedicated machine performs one task. Human history is replete with dedicated machines, whether simple, like the wheel, or complex, such as a typewriter. We can use these tools with different ends in mind, but the basic work is fixed in their substance and structure.

The 21st-century cell phone is, in contrast, a universal machine. It can take pictures, record audio, and -- yes -- even be used as a phone. But it can also do other things for us, if we but go to the app store and download another program.

Hofstadter shared a few of his early personal experiences with programs enabling line printers to perform tasks for which they had not been specifically designed. He recalled seeing a two-dimensional graph plotted by "printing" mostly blank lines that contained a single *. Text had been turned into graphics. Taking the idea further, someone used the computer to print a large number of cards which, when given to members of the crowd at a football game, could be used to create a massive two-dimensional message visible from afar. Even further, someone used a very specific layout of the characters available on the line printer to produce a print-out that appeared from the other side of the room to be a black-and-white photograph of Raquel Welch. Text had been turned into image.

People saw each of these displays as images by virtue of our eyes and mind interpreting a specific configuration of characters in a certain way. We can take that idea down a level into the computer itself. Consider this transformation of bits:

0000 0000 0110 1011 → 0110 1011 0000 0000

A computer engineer might see this as a "left shift" of 8 bits. A computer programmer might see it as multiplying the number on the left by 256. A graphic designer might see us moving color from one pixel to another. A typesetter may see one letter being changed into another. What one sees depends on how one interprets what the data represent and what the process means.
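
These readings are easy to check at a keyboard. A quick Ruby sketch of my own shows the same sixteen bits wearing three different hats:

    bits = 0b0000_0000_0110_1011     # sixteen bits, nothing more

    shifted = bits << 8              # the engineer's "left shift of 8 bits"
    puts shifted == bits * 256       # true: the programmer's "multiply by 256"
    puts format("%016b", shifted)    # "0110101100000000"
    puts bits.chr                    # "k" -- the typesetter's ASCII character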

Alan Turing was the first to express clearly the idea that a machine can do them all.

"Aren't those really binary numbers?", someone asked. "Isn't that real, and everything else interpretation?" Hofstadter said that this is a tempting perspective, but we need to keep in mind that they aren't numbers at all. They are, in most computers, pulses of electricity, or the states of electronic components, that we interpret as 0s and 1s.

After we have settled on interpreting those pulses or states as 0s and 1s, we then interpret configurations of 0s and 1s to mean something else, such as decimal numbers, colors, or characters. This second level of interpretation exposes the flaw in popular claims that computers can "only" process 0s and 1s. Computers can deal with numbers, colors, or characters -- anything that can be represented in any way -- when we interpret not only what the data mean but also what the process means.

(In the course of talking about representations, he threw in a cool numeric example: given an integer N, factor it as 2^a * 3^b * 5^c * 7^d ... and use [a.b.c.d. ...] to stand for N. I see a programming assignment or two lying in wait; one possible solution is sketched below.)
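
Here is one way that assignment might start, in Ruby, using the standard prime library; the function names are mine:

    require 'prime'

    # Encode a positive integer as the exponents in its prime factorization:
    # 360 = 2^3 * 3^2 * 5^1 becomes [3, 2, 1].
    def encode(n)
      exponents = []
      Prime.each do |p|
        break if n == 1
        count = 0
        while n % p == 0
          n /= p
          count += 1
        end
        exponents << count
      end
      exponents
    end

    # Decode multiplies the primes back out.
    def decode(exponents)
      Prime.first(exponents.size).zip(exponents)
           .reduce(1) { |acc, (p, a)| acc * p**a }
    end

    encode(360)        # => [3, 2, 1]
    decode([3, 2, 1])  # => 360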

The dual ideas of representation and interpretation take us into a new dimension. The Principia Mathematica describes a set of axioms and formal rules for reasoning about numeric structures. Gödel saw that it could be viewed at a higher level, as a system in its own right -- as a structure of integers. Thus the Principia can talk about itself. It is, in a sense, universal.

This is the launching point for Turing's greatest insight. In I Am a Strange Loop, Hofstadter writes:

Inspired by Gödel's mapping of PM into itself, Alan Turing realized that the critical threshold for this kind of computational universality comes exactly at the point where a machine is flexible enough to read and correctly interpret a set of data that describes its own structure. At this crucial juncture, a machine can, in principle, explicitly watch how it does any particular task, step by step. Turing realized that a machine that has this critical level of flexibility can imitate any other machine, no matter how complex the latter is. In other words, there is nothing more flexible than a universal machine. Universality is as far as you can go!

Alan Turing

Thus was Turing the first person to recognize the idea of a universal machine, circa 1935-1936: that a Turing machine can be given, as input, data that encodes its own instructions. This is the beginning of perhaps the biggest of the Big Ideas of computer science: the duality of data and program.

We should all be glad he didn't patent this idea.
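
The duality is easy to demonstrate in any language that ships with an interpreter. A toy Ruby example of my own:

    # A program is data: to Ruby, this is just a string...
    source = "[1, 2, 3].map { |n| n * n }.sum"

    # ...until we hand it to the interpreter, which reads it as instructions.
    puts eval(source)           # 14

    # And the instruction "interpret this data" is itself data.
    puts eval("eval(source)")   # 14 again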

Turing didn't stop there, of course, as I wrote in my recent entry on the Turing test. He recognized that humans are remarkably capable and efficient representational machines.

Hofstadter illustrates this with the idea of "hub", a three-letter word that embodies an enormous amount of experience and knowledge, chunked in numerous ways and accreted slowly over time. The concept is assembled in our minds out of our experiences. It is a representation. Bound up in that representation is an understanding of ourselves as actors in certain kinds of interactions, such as booking a flight on an airplane.

It is this facility with representations that distinguishes us humans from dogs and other animals. They don't seem capable of seeing themselves or others as representations. Human beings, though, naturally take other people's representations into their own. This results in a range of familiarity and verisimilitude. We "absorb" some people so well that we feel we know them intimately. This is what we mean when we say that someone is "in our soul". We use the word 'soul' not in a religious sense; we are referring to our essence.

Viewed this way, we are all distributed beings. We are "out there", in other people, as well as "in here", in ourselves. We've all had dreams of the sort Hofstadter used as an example, a dream in which his deceased father appeared, seemingly as real as he ever had been while alive. I myself recently dreamt that I was running, and the experience of myself was as real as anything I feel when I'm awake. Because we are universal machines, we are able to process the representations we hold of ourselves and of others and create sensations that feel just like the ones we have when we interact in the world.

It is this sense that we are self-representation machines that gives rise to the title of his book, "I am a strange loop". In Hofstadter's view, our identity is a representation of self that we construct, like any other representation.

This idea underlies the importance of the Turing test. It takes more than "just syntax" to pass the test. Indeed, syntax is itself more than "just" syntax! We quickly recurse into the dimension of representation, of models, and a need for self-reference that makes our syntactic rules more than "just" rules.

Indeed, as self-representation machines, we are able to have a sense of our own smallness within the larger system. This can be scary, but also good. It makes life seem precious, so we feel a need to contribute to the world, to matter somehow.

Whenever I teach our AI course, I encounter students who are, for religious or philosophical reasons, deeply averse to the idea of an intelligent machine, or even of scientific explanations of who we are. When I think about identity in terms of self-representation, I can't help but feel that, at an important level, it does not matter. God or not, I am in awe of who we are and how we got to here.

So, we owe Alan Turing a great debt. Building on the work of philosophers, mathematicians, and logicians, Turing gave us the essential insight of the universal machine, on which modern computing is built. He also gave us a new vocabulary with which to think about our identity and how we understand the world. I hope you can appreciate why celebrating his centennial is worthwhile.

~~~~

IMAGE 1: a photo of Douglas Hofstadter speaking at UNI, March 7, 2012. Source: Kevin C. O'Kane.

IMAGE 2: the Alan Turing centenary celebration. Source: 2012 The Alan Turing Year.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

April 04, 2012 4:39 PM

Computational Search Answers an Important Question

Update: Well, this is embarrassing. Apparently, Mat and I were the victims of a prank by the folks at ChessBase. You'd think that, after more than twenty-five years on the internet, I would be more circumspect at this time of year. Rather than delete the post, I will leave it here for the sake of posterity. If nothing else, my students can get a chuckle from their professor getting caught red-faced.

I stand behind my discussion of solving games, my recommendation of Rybka, and my praise for My 60 Memorable Games (my favorite chess book of all time). I also still marvel at the chess mind of Bobby Fischer.

~~~~

Thanks to reader Mat Roberts for pointing me to this interview with programmer Vasik Rajlich, which describes a recent computational result of his: one of the most famous openings in chess, the King's Gambit, is a forced draw.

Games are, of course, a fertile testbed for computing research, including AI and parallel computation. Many researchers make one of their goals to "solve" a game, that is, to show that, with best play by both players, a game has a particular outcome. Games with long histories and large communities of players naturally attract a lot of interest, and solving one of them is usually considered a valuable achievement.

For us in CS, interest grows with the complexity of the game. Solving Connect Four was cool, but solving Othello on a full-sized board would be cooler. Almost five years ago, I blogged about what I still consider the most impressive result in this domain: the solving of checkers by Jonathan Schaeffer and his team at the University of Alberta.
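
To see what "solving" means in miniature, consider a toy: the subtraction game, in which players alternately take one or two stones, and whoever takes the last stone wins. A few lines of Ruby -- my sketch, nowhere near the scale of the checkers effort -- label every position a win or a loss under best play:

    # A position is winning if some legal move leaves the opponent in a
    # losing position. Memoize so each position is solved only once.
    def winning?(stones, memo = {})
      return false if stones.zero?   # no stones left: the player to move has lost
      memo[stones] ||= [1, 2].any? do |take|
        take <= stones && !winning?(stones - take, memo)
      end
    end

    (1..10).each do |n|
      puts "#{n} stones: #{winning?(n) ? 'win' : 'loss'} for the player to move"
    end

Every multiple of three turns out to be a loss for the player to move. Real games differ from this toy only in scale, which is exactly why checkers took so long.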

the King's Gambit

The chess result is more limited. Rajlich, an International Master of chess and the programmer of the chess engine Rybka, has shown results only for games that begin 1.e4 e5 2.f4 exf4. If White plays 3.Nf3 -- the most common next move -- then Black can win with 3... d6. 3.Bc4 also loses. Only one move for White can force a draw, the uncommon 3.Be2. Keep in mind that these results all assume best play by both players from there on out. Any given variation can still end in a win, loss, or draw for White if either player plays a sub-optimal move.

I say "only" when describing this result because it leaves a lot of chess unsolved, all games starting with some other sequence of moves. Yet the accomplishment is still quite impressive! The King's Gambit is one of the oldest and most storied opening sequences in all of chess, and it remains popular to this day among players at every level of skill.

Besides, consider the computational resources that Rajlich had to use to solve even the King's Gambit:

... a cluster of computers, currently around 300 cores [created by Lukas Cimiotti, hooked up to] a massively parallel cluster of IBM POWER 7 Servers provided by David Slate, senior manager of IBM's Semantic Analysis and Integration department -- 2,880 cores at 4.25 GHz, 16 terabytes of RAM, very similar to the hardware used by IBM's Watson in winning the TV show "Jeopardy". The IBM servers ran a port of the latest version of Rybka, and computation was split across the two clusters, with the Cimiotti cluster distributing the search to the IBM hardware.

Oh, and this setup had to run for over four months to solve the opening. I call that impressive. If you want something less computationally intensive yet still able to beat you, me, and everybody we know at chess, you can buy Rybka, a commercially available chess engine. (An older version is available for free!)

What effect will this result have on human play? Not much, practically speaking. Our brains aren't big enough or fast enough to compute all the possible paths, so human players will continue to play the opening, create new ideas, and explore the action in real time over the board. Maybe players with the Black pieces will be more likely to play one of the known winning moves now, but results will remain uneven between White and Black. The opening leads to complicated positions.

the cover of Bobby Fischer's 'My 60 Memorable Games'

If, like some people, you worry that results such as this one somehow diminish us as human beings, take a look again at the computational resources that were required to solve this sliver of one game, the merest sliver of human life, and then consider: This is not the first time that someone claimed the King's Gambit was busted. In 1961, an eighteen-year-old U.S. chess champion named Bobby Fischer published an article claiming that 1.e4 e5 2.f4 exf4 3.Nf3 was a forced loss. His prescription? 3... d6. Now we know for sure. Like so many advances in AI, this one leaves me marveling at the power of the human mind.

Well, at least Bobby Fischer's mind.

~~~~

IMAGE 1: The King's Gambit. Source: Wikimedia Commons.

IMAGE 2: a photograph of the cover of my copy of My 60 Memorable Games by Bobby Fischer. Bobby analyzes a King's Gambit or two in this classic collection of games.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

March 30, 2012 5:22 PM

A Reflection on Alan Turing, the Turing Test, and Machine Intelligence

Alan Turing

In 1950, Alan Turing published a paper that launched the discipline of artificial intelligence, Computing Machinery and Intelligence. If you have not read this paper, go and do so. Now. 2012 is the centennial of Turing's birth, and you owe yourself a read of this seminal paper as part of the celebration. It is a wonderful work from a wonderful mind.

This paper gave us the Imitation Game, an attempt to replace the question of whether a computer could be intelligent with something more concrete: a probing dialogue. The Imitation Game became the Turing Test, now a staple of modern culture and the inspiration for contests and analogies and speculation. After reading the paper, you will understand something that many people do not: Turing is not describing a way for us to tell the difference between human intelligence and machine intelligence. He is telling us that the distinction is not as important as we seem to think. Indeed, I think he is telling us that there is no distinction at all.

I mentioned in an entry a few years ago that I always have my undergrad AI students read Turing's paper and discuss the implications of what we now call the Turing Test. Students would often get hung up on religious objections or, as noted in that entry, a deep and a-rational belief in "gut instinct". A few ended up putting their heads in the sand, as Turing knew they might, because they simply didn't want to confront the implication of intelligences other than our own. And yet they were in an AI course, learning techniques that enable us to write "intelligent" programs. Even students with the most diehard objections wanted to write programs that could learn from experience.

Douglas Hofstadter, who visited campus this month, has encountered another response to the Turing Test that surprised him. On his second day here, in honor of the Turing centenary, Hofstadter offered a seminar on some ideas related to the Turing Test. He quoted two snippets of hypothetical man-machine dialogue from Turing's seminal paper in his classic Gödel, Escher, Bach. Over the years, he occasionally runs into philosophers who think the Turing Test is shallow, trivial to pass with trickery and "mere syntax". Some are concerned that it explores "only behavior". Is behavior all there is? they ask.

As a computer programmer, the idea that the Turing test explores only behavior never bothered me. Certainly, a computer program is a static construct and, however complex it is, we can read and understand it. (Students who take my programming languages course learn that even another program can read and process programs in a helpful way.) This was not a problem for Hofstadter either, growing up as he did in a physicist's household. Indeed, he found Turing's formulation of the Imitation Game to be deep and brilliant. Many of us who are drawn to AI feel the same. "If I could write a program capable of playing the Imitation Game," we think, "I will have done something remarkable."

One of Hofstadter's primary goals in writing GEB was to make a compelling case for Turing's vision.

Douglas Hofstadter

Those of us who attended the Turing seminar read a section from Chapter 13 of Le Ton beau de Marot, a more recent book by Hofstadter in which he explores many of the same ideas about words, concepts, meaning, and machine intelligence as GEB, in the context of translating text from one language to another. Hofstadter said the focus in this book is on the subtlety of words and the ideas they embody, and what that means for translation. Of course, these are some of the issues that underlie Turing's use of dialogue as sufficient for us to understand what it means to be intelligent.

In the seminar, he shared with us some of his efforts to translate a modern French poem into faithful English. His source poem had itself been translated from older French into modern French by a French poet friend of his. I enjoyed hearing him talk about "the forces" that pushed him toward and away from particular words and phrases. Le Ton beau de Marot uses creative dialogues of the sort seen in GEB, this time between the Ace Mechanical Translator (his fictional computer program) and a Dull Rigid Human. Notice the initials of his raconteurs! They are an homage: the Dull Rigid Human is Douglas R. Hofstadter himself, while the Ace Mechanical Translator, AMT, shares its initials with Alan M. Turing, the man who started this conversation over sixty years ago.

Like Hofstadter, I have often encountered people who object to the Turing test. Many of my AI colleagues are comfortable with a behavioral test for intelligence but dislike that Turing considers only linguistic behavior. I am comfortable with linguistic behavior because it captures what is for me the most important feature of intelligence: the ability to express and discuss ideas.

Others object that it sets too low a bar for AI, because it is agnostic on method. What if a program "passes the test", and when we look inside the box we don't understand what we see? Or worse, we do understand what we see and are unimpressed? I think that this is beside the point. Not to say that we shouldn't want to understand. If we found such a program, I think that we would make it an overriding goal to figure out how it works. But how an entity manages to be "intelligent" is a different question from whether it is intelligent. That is precisely Turing's point!

I agree with Brian Christian, who won the prize for being "The Most Human Human" in a competition based on Turing's now-famous test. In an interview with The Paris Review, he said,

Some see the history of AI as a dehumanizing narrative; I see it as much the reverse.

Turing does not diminish what it is to be human when he suggests that a computer might be able to carry on a rich conversation about something meaningful. Neither do AI researchers, or teenagers like the one I once was, who dream of figuring out just what it is that makes it possible for humans to do what we do. We ask the question precisely because we are amazed. Christian again:

We build these things in our own image, leveraging all the understanding of ourselves we have, and then we get to see where they fall short. That gap always has something new to teach us about who we are.

As in science itself, every time we push back the curtain, we find another layer of amazement -- and more questions.

I agree with Hofstadter. If a computer could do what it does in Turing's dialogues, then no one could rightly say that it wasn't "intelligent", whatever that might mean. Turing was right.

~~~~

PHOTOGRAPH 1: the Alan Turing centenary celebration. Source: 2012 The Alan Turing Year.

PHOTOGRAPH 2: Douglas Hofstadter in Bologna, Italy, 2002. Source: Wikimedia Commons.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

March 27, 2012 4:53 PM

Faculty Workload and the Cost of Universities

This morning, @tonybibbs tweeted me a link to a Washington Post piece called Do college professors work hard enough?, wondering what I might think.

Author David Levy calls for "reforms for outmoded employment policies that overcompensate faculty for inefficient teaching schedules". Once, he says, faculty were generally underpaid relative to comparably educated professionals; now senior faculty at most state universities earn salaries roughly in line with comparable professionals.

Not changed, however, are the accommodations designed to compensate for low pay in earlier times. Though faculty salaries now mirror those of most upper-middle-class Americans working 40 hours for 50 weeks, they continue to pay for teaching time of nine to 15 hours per week for 30 weeks, making possible a month-long winter break, a week off in the spring and a summer vacation from mid-May until September.

My initial impressions after a quick read this morning were

  1. Yes, some faculty work too little.
  2. Most faculty work more than he seems to think.
  3. Changing #1 is hard.

After a second read, that is still my impression. Let me expand.

Before beginning, let me note that Levy mentions three kinds of institutions: research universities, teaching universities, and community colleges. I myself can't offer informed comment on community college faculty. I have spent my professional career as a faculty member and department head at a teaching university. I also spent six years in grad school at an R-1 institution and have many colleagues and friends who work at research schools. Finally, I am in Computer Science, not a more stable discipline. These are the experiences on which I draw.

First, #2. Levy seems willing to grant that faculty at research institutions work longer hours, or, if not, at least that the work they do is so valuable as to earn high pay. I agree. Levy seems unwilling to grant similar effort or importance to the work that faculty at teaching universities do. He thinks himself generous in allowing that the latter might spend as much time in prep as in class and concludes that "the notion that faculty in teaching institutions work a 40-hour week is a myth".

At my school, data on faculty workload have routinely shown that on average faculty work more than fifty hours per week. When I was full-time faculty, my numbers were generally closer to sixty. (As a department head for the last few years, I have generally worked more.) These numbers are self-reported, but I have good reason to trust them, having observed what faculty in my department do.

If we aren't meeting an R-1 school's stringent requirements for research, publication, and grant writing, what are we doing? We actually do spend more hours per week working outside the classroom than inside. We are preparing new course materials, meeting with students in office hours and the lab, and experimenting with new programming languages and technologies that can improve our courses (or making some of our course content obsolete). We advise undergrads and supervise their research projects. Many departments have small grad programs, which bring with them some of the duties that R-1 profs face.

We also do scholarship. Most teaching schools do expect some research and publication, though clearly not at the level expected by the R-1s. Teaching schools are also somewhat broader in the venues for publication that they accept, allowing teaching conferences such as SIGCSE or formal workshops like the SPLASH (née OOPSLA) Educators' Symposium. Given these caveats, publishing a paper or more per year is not an unusual scholarship expectation at schools like mine.

During the summer, faculty at teaching universities are often doing research, writing, or preparing and teaching workshops for which they are paid little, if anything. Such faculty may have time for more vacation than other professionals, but I don't think many of them are sailing the Caribbean for the 20+ weeks that Levy presumes they have free.

Levy does mention service to the institution in the form of committee work. Large organizations do not run themselves. From what I remember of my time in grad school, most of my professors devoted relatively little time to committees. They were busy doing research and leading their teams. The university must have had paid staff doing a lot of the grunt work to keep the institution moving. At a school like mine, many faculty carry heavy service loads. Perhaps we could streamline the bureaucracy to eliminate some of this work, or hire staff to do it, but it really does consume a non-trivial amount of some faculty members' time and energy.

After offering these counterpoints -- which I understand may be seen as self-serving, given where I work -- what of #1? It is certainly the case that some university faculty work too little. Expectations for productivity in research and other scholarship have often been soft in the past, and only now are many schools coming to grips with the full cost of faculty productivity.

Recently, my school has begun to confront a long-term decline in real funding from the state, realizing that it cannot continue to raise tuition to make up for the gap. One administrative initiative asked department heads and faculty to examine scholarly productivity of faculty and assign professors who have not produced enough papers, grant proposals, or other scholarly results over a five-year period to teach an extra course. There were some problems in how administrators launched and communicated this initiative, but the idea is a reasonable one. If faculty are allocated time for scholarship but aren't doing much, then they can use that time to teach a course.

The reaction of most faculty was skepticism and concern. (This was true of department heads as well, because most of us think of ourselves as faculty temporarily playing an administrator's role.)

That brings us to #3. Changing a culture is hard. It creates uncertainty. When expectations have been implicit, it is hard to make them explicit in a way that allows enforcement while at the same time recognizing the value in what most faculty have been doing. The very word "enforcement" runs counter to the academic culture, in which faculty are left free to study and create in ways that improve their students' education and in which it is presumed faculty are behaving honorably.

In this sense, Levy's article hits on an issue that faces universities and the people who pay for them: taxpayers, students and parents who pay tuition, and granting agencies. I agree with Levy that addressing this issue is essential as universities come to live in a world with different cost structures and different social contracts. He seems to understand that change will be hard. However, I'm not sure he has an accurate view of what faculty at teaching universities are already doing.


Posted by Eugene Wallingford | Permalink | Categories: General

February 29, 2012 4:40 PM

From Mass Producing Rule Followers to Educating Creators

The bottom is not a good place to be, even if you're capable of getting there.

Seth Godin's latest manifesto, Stop Stealing Dreams, calls for a change to the way we educate our children. I've written some about how changes in technology and culture will likely disrupt universities, but Godin bases his manifesto on a simpler premise: we have to change what we achieve through education because what we need has changed. Historically, he claims, our K-12 system has excelled at one task: "churning out kids who are stuck looking for jobs where the boss tells them exactly what to do".

As negatively as that is phrased, it may well have been a reasonable goal for a new universal system of compulsory education in the first half of the 1900s. But times have changed, technology has changed, our economy has changed, and our needs have changed. Besides, universal education is a reality now, not a dream, so perhaps we should set our sights higher.

I only began to read Godin's book this afternoon. I'm curious to see how well the ideas in it apply to university education. The role of our universities has changed over time, too, including rapid growth in the number of people continuing their education after high school. The number and variety of public universities grew through the 1960s and 1970s in part to meet the new demand.

Yet, at its root, undergraduate education is, for most students, a continuation of the same model they experienced K-12: follow a prescribed program of study, attend classes, do assignments, pass tests, and follow rules. A few students avail themselves of something better as undergrads, but it's really not until grad school that most people have a chance to participate fully in the exploration for and creation of knowledge. And that is the result of self-selection: those most interested in such an education seek it out. Alas, many undergrads seem hardly prepared to begin driving their own educations, let alone interested.

That is one of the challenges university professors face. From my experience as a student and a father of students, I know that many HS teachers are working hard to open their students' minds to bigger ideas, too -- when they have the chance, that is, amid the factory-style mass production system that dominates many high schools today.

As I sat down to write this, it occurred to me that learning to program is a great avenue toward becoming a creator and an innovator. Sadly, most CS programs seem satisfied to keep doing the same old thing: to churn out people who are good at doing what they are told. I think many university professors, myself included, could do better by keeping this risk in mind. Every day as I enter the classroom, I should ask myself what today's session will do for my students: kill a dream, or empower it?

While working out this morning, my iPod served up John Hiatt's song, "Everybody Went Low" (available on YouTube). The juxtaposition of "going low" in the song and Godin's warning about striving for the bottom created an interesting mash-up in my brain. As Hiatt sings, when you are at the bottom, there is:

Nothing there to live up to
There's nothing further down
Turn it off or turn around

Big systems with lots of moving parts are hard to turn around. I hope we can do it before we get too low.


Posted by Eugene Wallingford | Permalink | Categories: General

February 25, 2012 3:04 PM

I Did the Reading, and Watched the Video

David Foster Wallace, 2006

It seems that I've been running across David Foster Wallace everywhere for the last few months. I am currently reading his collection A Supposedly Fun Thing I'll Never Do Again. I picked it up for a tennis-turned-philosophy essay titled, improbably, "Tennis Player Michael Joyce's Professional Artistry as a Paradigm of Certain Stuff about Choice, Freedom, Discipline, Joy, Grotesquerie, and Human Completeness". (You know I am a tennis fan.) On the way to reading that piece, I got hooked on the essay about filmmaker David Lynch. I am not a fan of Wallace's fiction, but his literary non-fiction arrests me.

This morning, I am listening to a lengthy uncut interview with Wallace from 2003, courtesy of fogus. In it, Wallace comes across just as he does in his written work: smart, well-read, and deeply thoughtful. He also seems so remarkably pleasant -- not the sort of thing I usually think of as a default trait in celebrities. His pleasantness feels very familiar to me as a fellow Midwesterner.

The video also offers occasional haunting images, both his mannerisms but especially his eyes. His obvious discomfort makes me uncomfortable as I watch. It occurs to me that I feel this way only because I know how his life ended, but I don't think that's true.

The interview contains many thought-provoking responses and interchanges. One particular phrase will stay with me for a while. Wallace mentions the fondness Americans have for freedom of choice, the freedom to satisfy our desires. He reminds us that inherent in such freedom is a grave risk: a "peculiar kind of slavery", in which we feel we must satisfy our desires, we must act on our impulses. Where is the freedom in that prison?

There is also a simple line that appealed to the teacher in me: "It takes skill and education to get good enough at reading or listening to be able to derive pleasure from it." This is one of the challenges that faces teachers everywhere. Many things require skill and education -- and time -- in order for students to be able to derive satisfaction and even pleasure from them. Computer programming is one.

I recommend this interview to anyone interested in modern culture, especially American culture.

As I listened, I was reminded of this exchange from a short blog entry by Seth Godin from last year:

A guy asked his friend, the writer David Foster Wallace,

"Say, Dave, how'd y'get t'be so dang smart?"

His answer:

"I did the reading."

Wallace clearly did the reading.

~~~~

PHOTOGRAPH: David Foster Wallace at the Hammer Museum in Los Angeles, January 2006. Source: Wikimedia Commons.


Posted by Eugene Wallingford | Permalink | Categories: General

February 06, 2012 6:26 PM

Shopping Blog Entries to a Wider Audience

Over the last couple of years, our university relations department has been trying to promote more actively the university's role in the public sphere. One element of this effort is pushing faculty work and professional commentary out into wider circulation. For example, before and after the recent presidential caucuses in Iowa, they helped connect local political science profs with media who were looking for professional commentary from in the trenches.

Well, they have now discovered my blog and are interested in shopping several pieces of general interest to more traditional media outlets, such as national newspapers and professional magazines. Their first effort involves a piece I wrote about liberal education last month, which builds on two related pieces, here and here. I'm in the process of putting it into a form suitable for standalone publication. This includes polishing up some of the language, as well as not relying on links to other articles -- one of the great wins of the networked world.

Another big win of the networked world is the ease with which we can get feedback and make our ideas and our writing better. If you have any suggestions for how I might improve the presentation of the ideas in these pieces, or even the ideas themselves, please let me know. As always, I appreciate your thoughts and willingness to discuss them with me.

When I mentioned this situation in passing on Twitter recently, a former student asked whether my blog's being on the university's radar would cause me to write differently. The fact is that I have always tried to respect my university, my colleagues, and my students when I write, and to keep their privacy and integrity in mind. This naturally results in some level of self-censorship. Still, I have always tried to write openly and honestly about what I think and learn.

You can rest assured. This blog remains mine alone and will continue to speak in my voice. I will write as openly and honestly as ever. That is the only way that the things I write could ever be of much interest to readers such as you, let alone to me.


Posted by Eugene Wallingford | Permalink | Categories: General

January 25, 2012 3:45 PM

Pragmatism and the Scientific Spirit

the philosopher William James

Last week, I found myself reading The Most Entertaining Philosopher, about William James. It was good fun. I have always liked James. I liked the work of his colleagues in pragmatism, C.S. Peirce and John Dewey, too, but I always liked James more. For all the weaknesses of his formulation of pragmatism, he always seemed so much more human to me than Peirce, who did the heavy theoretical lifting to create pragmatism as a formal philosophy. And he always seemed a lot more fun than Dewey.

I wrote an entry a few years ago called The Academic Future of Agile Methods, which described the connection between pragmatism and my earlier work in AI, as well as agile software development. I still consider myself a pragmatist, though it's tough to explain just what that means. The pragmatic stance is too often confounded with a self-serving view of the world, a "whatever works is true" philosophy. Whatever works... for me. James's references to the "cash value" of truth didn't help. (James himself tried to undo the phrase's ill effects, but it has stuck. Even in the 1800s, it seems, a good sound bite was better than the truth.)

As John Banville, the author of the NY Times book review piece, says, "It is far easier to act in the spirit of pragmatism than to describe what it is." He then gives "perhaps the most concise and elegant definition" of pragmatism, by philosopher C. I. Lewis. It is a definition that captures the spirit of pragmatism as well as any few lines can:

Pragmatism could be characterized as the doctrine that all problems are at bottom problems of conduct, that all judgments are, implicitly, judgments of value, and that, as there can be ultimately no valid distinction of theoretical and practical, so there can be no final separation of questions of truth of any kind from questions of the justifiable ends of action.

This is what drew me to pragmatism while doing work in knowledge-based systems, as a reaction to the prevailing view of logical AI that seemed based in idealist and realist epistemologies. It is also what seems to me to distinguish agile approaches to software development from the more common views of software engineering. I applaud people who are trying to create an overarching model for software development, a capital-t Theory, but I'm skeptical. The agile mindset is, or at least can be, pragmatic. I view software development in much the way James viewed consciousness: "not a thing or a place, but a process".

As I read again about James and his approach, I remember my first encounters with pragmatism and thinking: Pragmatism is science; other forms of epistemology are mathematics.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

January 12, 2012 3:38 PM

At Least I'm Not Alone

One of the things I love about reading professional blogs and twitter feeds is reassuring myself that I am not crazy in many of my compulsions and obsessions.

On the exercise bike yesterday morning, I read Matt Might's End artificial scarcities to increase productivity. Many years ago I saw my colleague, friend, and hero Joe Bergin do something that I now do faithfully: always carry with me a pad of paper, small enough to fit comfortably in most any pocket, and a good pen. When your life is writing code, writing lectures, writing blog entries, you often want to write at the oddest of times. Now I am always ready to jot down any idea that comes into my head as soon as it does. I may throw it away later as a hare-brained scheme, but I prefer that to losing an idea for lack of a notepad.

Our house has pens, pencils, and usually paper in nearly every room. I have them in every bag I carry and in most coats I wear. The kind of pen matters some; I hate splotching and bleeding through. I have a fondness for a particular older model of Uniball pens, but I'm not obsessed with them. I do have a box of them in my desk at home, and every once in a while I'll pull one out to replace a pen that has run dry. They feel right in my hand.

Like Might, I have MacBook Pro power adapters in every room in which I work, as well as one in my travel bag. The cost of having three or four adapters has been well worth the peace of mind. I even have a back-up battery or two on hand most of the time. (My Pro is one of the older ones with the removable battery.) I like to have one each in my home and school offices, where I do most of my work and from which most excursions begin.

On the bike this morning, I read Rands in Repose's bag pr0n essay from last month. Loved it! Like Lopp and many other geeks, I have at times obsessed over my bag. Back in high school I carried an attache case my parents gave me for Christmas. (Yes, I was that guy.) Since college and grad school, I've gone through several styles of bag, including freebies given out at conferences and a couple of nice ones my wife gave me as gifts. A few have provided what I desire: compactness, with a few compartments but not too many.

One of my favorites was from SIGCSE in the late 1990s. I still have it, though it shows its age and wear. Another is a bag I got at one of the PLoP conferences in the early part of the previous decade. It was perfect for an iBook, but is too small for my Pro. I still have it, too, waiting for a time when it will fit my needs again. Both were products of the days of really good conference swag. My current bag is a simple leather case that my wife gave me. It's been serving me well for a couple of years.

Each person has his or her particular point of obsession. Mine is the way the shoulder strap attaches to the body of the bag. So many good bags have died too soon when the metallic clasp holding strap to body broke, or the clasp worked loose, or the fabric piece wore through.

Strange but true: One of my all-time favorite bags was a $5 blue vinyl diaper bag that my wife bought at a garage sale in the early 1990s. No one knew it was a diaper bag, or so I think; at a glance it was rather innocuous. This bag was especially useful at a time when I traveled a lot, attending 4-6 conferences a year and doing more personal travel than I do these days. The changing pad served as a great sleeve to protect my laptop (first a G3 clamshell, then an iBook). The side compartments designed to hold two baby bottles were great for bottles of water or soda. This was especially handy for a long day flying -- back when we could do such crazy things as carry drinks with us. This bag also passed Rands' airport security line test. It allowed for easy in-and-out of the laptop, and then rolled nicely on its side for going through x-ray. I still think about returning to this bag some day.

I'm sure that this sort of obsessiveness is a positive trait for programmers. So many of us have it, it must be.


Posted by Eugene Wallingford | Permalink | Categories: General

December 30, 2011 11:05 AM

Pretending

Kurt Vonnegut never hesitated to tell his readers the morals of his stories. The frontispiece of his novel Mother Night states its moral upfront:

We are what we pretend to be, so we must be careful about what we pretend to be.

Pretending is a core thread that runs through all of Vonnegut's work. I recognized this as a teenager, and perhaps it is what drew me to his books and stories. As a junior in high school, I wrote my major research paper in English class on the role fantasy played in the lives of Vonnegut's characters. (My teachers usually resisted my efforts to write about authors such as Kafka, Vonnegut, and Asimov, because they weren't part of "the canon". They always relented, eventually, and I got to spend more time thinking about works I loved.)

I first used this sentence about pretending in my eulogy for Vonnegut, which includes a couple of other passages on similar themes. Several of those are from Bokononism, the religion created in his novel Cat's Cradle as a way to help the natives of the island of San Lorenzo endure their otherwise unbearable lives. Bokononism had such an effect on me that I spent part of one summer many years ago transcribing The Books of Bokonon onto the web. (In these more modern times, I share Bokonon's wisdom via Twitter.)

Pretending is not just a way to overcome pain and suffering. Even for Vonnegut, play and pretense are the ways we construct the sane, moral, kind world in which we want to live. Pretending is, at its root, a necessary component in how we train our minds and bodies to think and act as we want them to. Over the years, I've written many times on this blog about the formation of habits of mind and body, whether as a computer scientist, a student, or a distance runner.

Many people quote Aristotle as the paradigm of this truth:

We are what we repeatedly do. Excellence, then, is not an act, but a habit.

I like this passage but prefer another of his, which I once quoted in a short piece, What Remains Is What Will Matter:

Excellence is an art won by training and habituation. We do not act rightly because we have virtue or excellence, but rather we have those because we have acted rightly.

This idea came charging into my mind this morning as I read an interview with Seth Godin. He and his interviewers are discussing the steady stream of rejection that most entrepreneurs face, and how some people seem able to fight through it to succeed. What if a person's natural "thermostat" predisposes them to fold in the face of rejection? Godin says:

I think we can reset our inclinations. I'm certain that pretending we can is better than admitting we can't.

Vonnegut and Aristotle would be proud. We are what we pretend to be. If we wish to be virtuous, then we must act rightly. If we wish to be the sort of person who responds to rejection by working harder and succeeding, then we must work harder. We become the person we pretend to be.

As children, we think pretending is about being someone we aren't. And there is great fun in that. As teenagers, sometimes we feel a need to pretend, because we have so little control over our world and even over our changing selves. As adults, we tend to want to put pretending away as child's play. But this obscures a truth that Vonnegut and Aristotle are trying to teach us:

Pretending is just as much about being who we are as about being who we aren't.

As you and I consider the coming of a new year and what we might resolve to do better or differently in the coming twelve months that will make a real difference in our lives, I suggest we take a page out of Vonnegut's playbook.

Think about the kind of world you want to live in, then live as if it exists.

Think about the kind of person you want to be, then live as if you are that person.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

December 19, 2011 4:49 PM

"I Love The Stuff You Never See"

I occasionally read and hear people give advice about how to find a career, vocation, or avocation that someone will enjoy and succeed in. There is a lot of talk about passion, which is understandable. Surely, we will enjoy things we are passionate about, and perhaps then we want to put in the hours required to succeed. Still, "finding your passion" seems a little abstract, especially for someone who is struggling to find one.

This weekend, I read A Man, A Ball, A Hoop, A Bench (and an Alleged Thread)... Teller!. It's a story about the magician Teller, one half of the wonderful team Penn & Teller, and his years-long pursuit of a particular illusion. While discussing his work habits, Teller said something deceptively simple:

I love the stuff you never see.

I knew immediately just what he meant.

I can say this about teaching. I love the hours spent creating examples, writing sample code, improving it, writing and rewriting lecture notes, and creating and solving homework assignments. When a course doesn't go as I had planned, I like figuring out why and trying to fix it. Students see the finished product, not the hours spent creating it. I enjoy both.

I don't necessarily enjoy all of the behind-the-scenes work. I don't really enjoy grading. But my enjoyment of the preparation and my enjoyment of the class itself -- the teaching equivalent of "the performance" -- carry me through.

I can also say the same thing about programming. I love to fiddle with source code, organizing and rewriting it until it's all just so. I love to factor out repetition and discover abstractions. I enjoy tweaking interfaces, both the interfaces inside my code and the interfaces my code's users see. I love that sudden moment of pleasure when a program runs for the first time. Users see the finished product, not the hours spent creating it. I enjoy both.

Again, I don't necessarily enjoy everything that I have to do behind the scenes. I don't enjoy twiddling with configuration files, especially at the interface to the OS. Unlike many of my friends, I don't always enjoy installing and uninstalling all the libraries I need to make everything work in the current version of the OS and interpreter. But that time seems small compared to the time I spend living inside the code, and that carries me through.

In many ways, I think that Teller's simple declaration is a much better predictor of what you will enjoy in a career or avocation than other, fancier advice you'll receive. If you love the stuff other folks never see, you are probably doing the right thing for you.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

December 05, 2011 4:34 PM

More on the Future of News and Universities

I remain endlessly fascinated with the evolution of the news industry in the Internet Age, and especially with the discussions of same within the industry itself. Last week, Clay Shirky posted Institutions, Confidence, and the News Crisis in response to Dean Starkman's essay in the Columbia Journalism Review, Confidence Game. It's clear that not everyone views the change enabled by the internet and the web as a good thing.

Of course, my interest in journalism quickly spills over into my interest in the future of my own institution, the university. In Revolution Out There -- And Maybe In Here, I first began to draw out the similarities between the media and the university, and since then I've written occasionally about connections [ 1 | 2 | 3 ]. Some readers have questioned the analogy, because universities aren't media outlets. But in several interesting ways, they are. Professors write textbooks, lectures, and supporting materials. Among its many purposes, a university course disseminates knowledge. Faculty can object that a course does more than that, which is true, but from many people's perspectives -- many students, parents, and state legislators included -- dissemination is its essential purpose.

Universities aren't solely about teaching courses. They also create knowledge, through basic and applied research, and through packaging existing work in new and more useful ways. But journalists also create and package knowledge in similar ways, through research, analysis, and writing. Indeed, one of the strongest arguments by journalism traditionalists like Starkman is that new models of journalism often make little or no account of public-interest reporting and the knowledge creation function of media institutions.

Most recently, I wrote about the possible death of bundling in university education, which I think is where the strongest similarity between the two industries lies. The biggest problems in journalism aren't with what newspapers do but with the way in which they bundle, sell, and pay for what they do. This is also the weak link in the armor of the university. For a hundred years, we have bundled several different functions into a whole that was paid for by the public through its governments and through people's willingness to pay tuition. As more and more options become available to people, the people holding the purses are beginning to ask questions about the direct and indirect value they receive.

We in the universities can complain all we want about the Khan Academy and the University of Phoenix and how what we do is superior. But we aren't the only people who get to create the future. In the software development world, there has long been interest in apprenticeship models and other ways to prepare new developers that bypass the university. It's the software world's form of homeschooling.

(Even university professors are beginning to write about the weakness of our existing model. Check out Bryan Caplan's The Magic of Education for a discussion of education as being more about signaling than instruction.)

I look at my colleagues in industry who make a good living as teachers: as consultants to companies, as the authors of influential books and blogs, and as conference speakers. They are much like freelance journalists. We are even starting to see university instructors who want to focus on teaching leave higher education and move out into the world of consultants and freelance developers of courses and instructional material. Professors may not be able to start their own universities yet, the way doctors and lawyers can set up their own practices, but the flat world of the web gives them so many more options. As Shirky says of the journalism world, we need experiments like this to help us create the future.

In the journalism world, there is a divide between journalists arguing that we need existing media institutions to preserve the higher goals of journalism and journalists arguing that new models are arising naturally out of new technologies. Sometimes, the first group sounds like it is arguing for the preservation of institutions for their own sake, and the latter group sounds like it is rooting for existing institutions to fall, whatever the price. We in the university need to be mindful that institutions are not the same as their purpose. We have enough lead time to prepare ourselves for an evolution I think is inevitable, but only if we think hard and experiment ourselves.


Posted by Eugene Wallingford | Permalink | Categories: General

October 12, 2011 12:31 PM

Programming for Everyone -- Really?

TL;DR version: Yes.

Yesterday, I retweeted a message that is a common theme here:

Teaching students how to operate software, but not produce software, is like teaching kids to read & not write. (via @KevlinHenney)

It got a lot more action than my usual fare, both retweets and replies. Who knew? One of the common responses questioned the analogy by making another, usually of this sort:

Yeah, that would be like teaching kids how to drive a car, but not build a car. Oh, wait...

This sounds like a reasonable comparison. A car is a tool. A computer is a tool. We use tools to perform tasks we value. We do not always want to make our own tools.

But this analogy misses out on the most important feature of computation. People don't make many things with their cars. People make things with a computer.

When people speak of "using a computer", they usually mean using software that runs on a computer: a web browser, a word processor, a spreadsheet program. And people use many of these tools to make things.

As soon as we move into the realm of creation, we start to bump into limits. What if the tool we are given doesn't allow us to say or do what we want? Consider the spreadsheet, a general data management tool. Some people use it simply as a formatted data entry tool, but it is more. Every spreadsheet program gives us a formula language for going beyond what the creators of Excel or Numbers imagined.

But what about the rest of our tools? Must we limit what we say to what our tool affords us -- to what our tool builders afford us?

A computer is not just a tool. It is also a medium of expression, and an increasingly important one.

If you think of programming as C or Java, then the idea of teaching everyone to program may seem silly. Even I am not willing to make that case here. But there are different kinds of programming. Even professional programmers write code at many levels of abstraction, from assembly language to the highest high-level language. Non-programmers such as physicists and economists use scripting languages like Python. Kids of all ages are learning to program in Scratch.

Scratch is a good example of what I was thinking when I retweeted. Scratch is programming. But Scratch is really a way to tell stories. Just like writing and speaking.

Alfred Thompson summed up this viewpoint succinctly:

[S]tudents need to be creators and not just consumers.

Kids today understand this without question. They want to make video mash-ups and interactive web pages and cutting-edge presentations. They need to know that they can do more than just use the tools we deign to give them.

One respondent wrote:

As society evolves there is an increasing gap between those that use technology and those that can create technology. Whilst this is a concern, it's not the lowest common denominator for communication: speaking, reading and writing.

The first sentence is certainly true. The question for me is: on which side of this technology divide does computing live? If you think of computation as "just" technology, then the second sentence seems perfectly reasonable. People use Office to do their jobs. It's "just a tool".

It could, however, be a better tool. Many scientists and business people write small scripts or programs to support their work. Many others could, too, if they had the skills. What about teachers? Many routine tasks could be automated in order to give them more time to do what they do best, teach. We can write software packages for them, but then we limit them to being consumers of what we provide. They could create, too.
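As a concrete illustration, here is a minimal sketch, in Python, of the kind of small script I have in mind. The file name grades.csv and its column layout are hypothetical, invented just for this example.

    # A minimal sketch of a small script a teacher might write to automate
    # a routine task. The file "grades.csv" and its layout (a "name" column
    # plus one column per assignment) are hypothetical, for illustration only.
    import csv

    def course_averages(path):
        """Compute each student's average score from a simple CSV gradebook."""
        averages = {}
        with open(path, newline="") as gradebook:
            for row in csv.DictReader(gradebook):
                scores = [float(v) for k, v in row.items() if k != "name"]
                averages[row["name"]] = sum(scores) / len(scores)
        return averages

    if __name__ == "__main__":
        for name, average in sorted(course_averages("grades.csv").items()):
            print(f"{name}: {average:.1f}")

A teacher who can write even this much is no longer limited to whatever reports a vendor's gradebook provides.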

Is computing "just tech", or more? Most of the world acts like it is the former. The result is, indeed, an ever increasing gap between the haves and the have nots. Actually, the gap is between the can dos and the cannots.

I, and many others, think computation is more than simply a tool. In the wake of Steve Jobs's death last week, many people posted his famous quote that computing is a liberal art. Alan Kay, one of my inspirations, has long preached that computing is a new medium on the order of reading and writing. The list of people in the trenches working to make this happen is too numerous to include.

More practically, software and computer technology are the basis of much innovation these days. If we teach the new medium to only a few, the "5 percent of the population over in the corner" to whom Jobs refers, we exclude the other 95% from participating fully in the economy. That restricts economic growth and hurts everyone. It is also not humane, because it restricts people's personal growth. Everyone has a right to the keys to the kingdom.

I stand in solidarity with the original tweeter and retweeter. Teaching students how to operate software, but not produce software, is like teaching kids to read but not to write. We can do better.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

October 10, 2011 2:56 PM

Making Your Own Mistakes

Earlier today, @johndmitchell retweeted a link from Tara "Miss Rogue" Hunt:

RT @missrogue: My presentation from this morning at #ennovation: The 10 Mistakes I've made...so you don't have to http://t.co/QE0DzF9tw

Danger ahead!

I liked the title and so followed the link to the slide deck. The talk includes a few good quotes and communicates some solid experience on how to fail as a start-up, and also how to succeed. I was glad to have read.

The title notwithstanding, though, be prepared. Other people making mistakes will not -- cannot -- save you from making the same mistakes. You'll have to make them yourself.

There are certain kinds of mistakes that don't need to be made again, but that happens when we eliminate an entire class of problems. As a programmer, I mostly don't have to re-make the mistakes my forebears made when writing code in assembly. They learned from their mistakes and made tools that shield me from the problems I faced. Now, I write code in a higher-level language and let the tools implement the right solution for me.

Of course, that means I face a new class of problems, or an old class of problems in a new way. So I make new kinds of mistakes. In the case of assembly and compilers, I am more comfortable working at that level and am thus glad to have been shielded from those old error traps, by the pioneers who preceded me.

Starting a start-up isn't the sort of problem we are able to bypass so easily. Collectively, we aren't good at all at reliably creating successful start-ups. Because the challenges involve other people and economic forces, they will likely remain with us well into our future.

Warning, proceed at your risk!

Even though Hunt and other people who have tried and failed at start-ups can't save us from making these mistakes, they still do us a service when they reflect on their experiences and share with us. They put up guideposts that say "Danger ahead!" and "Don't go there!"

Why isn't that enough to save us? We may miss the signs in the noise of our world and walk into the thicket on our own. We may see the warning sign, think "My situation is different...", and proceed anyway. We may heed their advice, do everything we can to avoid the pitfall, and fail anyway. Perhaps we misunderstood the signs. Perhaps we aren't smart enough yet to solve the problem. Perhaps no one is, yet. Sometimes, we won't be until we have made the mistake once ourselves -- or thrice.

Despite this, it is valuable to read about our forebears' experiences. Perhaps we will recognize the problem part of the way in and realize that we need to turn around before going any farther. Knowing other people's experiences can leave us better prepared not to go too far down into the abyss. A mistake partially made is often better than a mistake made all the way.

If nothing else, we fail and are better able to recognize our mistake after we have made it. Other people's experience can help us put our own mistake into context. We may be able to understand the problem and solution better by bringing those other experiences to bear on our own experience.

While I know that we have to make mistakes to learn, I don't romanticize failure. We should take reasonable measures to avoid problems and to recognize them as soon as possible. That's the ultimate value in learning what Hunt and other people can teach us.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

August 17, 2011 3:52 PM

Remind Me Again...

This post is a mild and perhaps petulant rant about shackling free software. Feel free to skip it if you like.

I've been setting up a new iMac over the last couple of days. I ran into some difficulties installing Remind, a powerful text-based Unix calendar program, that made me sad.

First of all, I need to say "thank you" to the creator of Remind. He wrote the first version of the program back in the early 1990s and has maintained and updated it over the last 20+ years. It has always been free, both as in beer and as in speech. Like many Unix folks, I became a devoted user of the program almost as soon as I discovered it.

Why am I sad? When I went to download the latest version, the server detected that I was connecting via a Mac browser and took me to a page that said only not to use Remind on an Apple product. I managed to download the source but found its compressed format incompatible with the tools, both apps and command-line programs, that I use to unstuff archives on my Mac. I finally managed to extract the source, build it, and install it. When Remind runs on my new machine, the program displays this message:

You appear to be running Remind on an Apple product. I'd rather that you didn't. Remind execution will continue momentarily.

... and delays for 30 seconds.
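Remind is written in C, so what follows is only a rough sketch in Python of what such a check amounts to; the message is copied from above, but the detection logic is my guess, not the program's actual source.

    # A guess at the shape of the Apple-detection delay, sketched in Python.
    # Remind itself is written in C; this is not its actual source code.
    import platform
    import time

    if platform.system() == "Darwin":  # macOS identifies itself as Darwin
        print("You appear to be running Remind on an Apple product.")
        print("I'd rather that you didn't. Remind execution will continue momentarily.")
        time.sleep(30)  # thirty seconds to ponder one's choice of platform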

Wow, he really is serious about discouraging people from running his program on an Apple machine.

This is, of course, well within his rights. Like many people, he feels strongly about Apple's approach to software and apps these days. On the Remind home page, he writes:

Remind can be made to run under Mac OS X, but I prefer you not to do that. Apple is even more hostile than Microsoft to openness, using both technical and legal means to hobble what its customers and developers are allowed to do. If you are thinking of buying an Apple product, please don't. If you're unfortunate enough to already own Apple products, please consider switching to an open platform like Linux or FreeBSD that doesn't impose "1984"-like restrictions on your freedom.

I appreciate his desire to support the spirit of free software, even to the point of turning long-time users away from his work. When I have downloaded earlier versions of Remind, I have noticed and taken seriously the author's remarks about Apple's closed approach. This version goes farther than offering admonition; it makes life difficult for users. I have always wondered about the stridency of some people in the free software community. I understand that they feel the only way to make a stand on their principles is to damage the freedom and openness of their own software. And they may be right. Companies like Microsoft and Apple are not going to change just because an independent software developer asks them to.

Then again, neither am I. I do take seriously the concerns expressed by Remind's author and others like him. The simple fact, though, is that I'm not likely to switch from my Mac because I find one of my command-line Unix tools no longer available. I have concerns of my own with Apple's approach to software these days, but at this point I still choose to use its products.

If it becomes too difficult to install the new versions of Remind, what will I do? Well, I could install the older version I have cached on my machine. Or perhaps I'll run a script such as rem2ics to free my data from Remind's idiosyncratic representation into the RFC2445 standard format. Then I would look for or write a new tool. Remind's author might be pleased that I wouldn't likely adopt Apple's iCal program and that I would likely make any tool I wrote for myself available to the rest of the world. I would not, however, tell users of any particular platform not to use my code. That's not my style.
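The translation itself is not deep. Here is a toy Python sketch of the kind of conversion rem2ics performs; it handles only the simplest form of an entry, and the real tool surely copes with far more of Remind's rich syntax than this.

    # A toy sketch of the kind of translation rem2ics performs. It handles
    # only the simplest form of an entry, "REM 25 Dec 2011 MSG ...", and it
    # emits a bare-bones RFC 2445 VEVENT; a complete file needs more fields.
    from datetime import datetime

    def rem_to_vevent(line):
        """Convert one simple REM line into an iCalendar VEVENT block."""
        _, day, month, year, _, *message = line.split()
        date = datetime.strptime(f"{day} {month} {year}", "%d %b %Y")
        return "\n".join([
            "BEGIN:VEVENT",
            f"DTSTART;VALUE=DATE:{date:%Y%m%d}",
            f"SUMMARY:{' '.join(message)}",
            "END:VEVENT",
        ])

    print(rem_to_vevent("REM 25 Dec 2011 MSG Christmas dinner"))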

I may yet choose to go that route anyway. As I continue to think about the issue, I may decide to respect the author's wishes and not use his program on my machine. If I do so, it will be because I want to show him that respect or because I am persuaded by his argument, not because I have to look at a two-line admonition or wait 30 seconds every time I run the program.

~~~~

Note: I could perhaps have avoided all the problems by using a package manager for Macs such as Homebrew to download and install Remind. But I have always installed Remind by hand in the past and started down that path again this time. I don't know if Homebrew's version of Remind includes the 30-second delay at execution. Maybe next time I'll give this approach a try and find out.


Posted by Eugene Wallingford | Permalink | Categories: General

August 03, 2011 7:55 PM

Psychohistory, Economics, and AI

Or, The Best Foresight Comes from a Good Model

Hari Seldon from the novel Foundation

In my previous entry, I mentioned re-reading Asimov's Foundation Trilogy and made a passing joke about psychohistory being a great computational challenge. I've never heard a computer scientist mention psychohistory as a primary reason for getting involved with computers and programming. Most of us were lucky to see so many wonderful and more approachable problems to solve with a program that we didn't need to be motivated by fiction, however motivating it might be.

I have, though, heard and read several economists mention that they were inspired to study economics by the ideas of psychohistory. The usual reason for the connection is that econ is the closest thing to psychohistory in modern academia. Trying to model the behavior of large groups of people, and reaping the advantages of grouping for predictability, is a big part of what macroeconomics does. (Asimov himself was most likely motivated in creating psychohistory by physics, which is far better at predicting the behavior of masses of atoms than the behavior of individual atoms.)

As you can tell from recent history, economists are nowhere near the ability to do what Hari Seldon did in Foundation, but then Seldon did his work more than 10,000 years in the future. Maybe 10,000 years from now economists will succeed as much and as well. Like my economist friends, I too am intrigued by economics, which shares some important features with computer science, in particular a concern with the trade-offs among limited resources and the limits of rational behavior.

The preface to the third book in Asimov's trilogy, Second Foundation, includes a passage that caught my eye on this reading:

He foresaw (or he solved his [system's] equations and interpreted its symbols, which amounts to the same thing)...

I could not help but be struck by how this one sentence captured so well the way science empowers us and changes the intellectual world in which we live. Before the rapid growth of science and broadening of science education, the notion of foresight was limited to personal experience and human beings' limited ability to process that experience and generalize accurately. When someone had an insight, the primary way to convince others was to tell a good story. Foresight could be feigned and sold through stories that sounded good. With science, we have a more reliable way to assess the stories we are told, and a higher standard to which we can hold them.

(We don't always do well enough in using science to make us better listeners, or better judges of purported foresights. Almost all of us can do better, both in professional settings and personal life.)

As a young student, I was drawn to artificial intelligence as the big problem to solve. Like economics, it runs directly into problems of limited resources and limited rationality. Like Asimov's quote above, it runs directly into the relationship between foresight and accurate models of the world. During my first few years teaching AI, I was often surprised by how fiercely my students defended the idea of "intuition", a seemingly magical attribute of men and women forever unattainable by computer programs. It did me little good to try to persuade them that intuition and "gut instinct" were not beyond the province of scientific study. Not only didn't they care; that was an integral part of their belief. The best thing I could do was introduce them to some of the techniques used to write AI programs and to show them such programs behaving in a seemingly intelligent manner in a situation that piqued my students' interest -- and maybe opened their minds a bit.

Over the course of teaching those early AI courses, I was eventually able to see one of the fundamental attractions I had to the field. When I wrote an AI program, I was building a model of intelligent behavior, much as Seldon's psychohistory involved building a model of collective human behavior. My inspiration did not come from Asimov, but it was similar in spirit to the inspiration my economist friends drew from Asimov. I have never been discouraged or deterred by any arguments against the prospect of artificial intelligence, whether my students' faith-based reasons or purportedly rational arguments such as John Searle's Chinese room argument. I call Searle's argument "purportedly rational" because, as it is usually presented, ultimately it rests on the notion that human wetware -- as a physical medium -- is capable of representing symbols in a way that silicon or other digital means cannot.

I always believed that, given enough time and enough computational power, we could build a model that approximated human intelligence as closely as we desired. I still believe this and enjoy watching (and occasionally participating in) efforts that create more and more intelligent programs. Unlike many, I am undeterred by the slow progress of AI. We are only sixty years into an enterprise that may take a few thousand years. Asimov taught me that much.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

July 15, 2011 4:25 PM

The Death of Bundling in University Education?

Clay Shirky's latest piece talks a bit about the death of bundling in journalism, in particular in newspapers. Bundling is the phenomenon of putting different kinds of content into a single product and selling consumers the whole. Local newspapers contain several kinds of content: local news coverage, national news, sports, entertainment, classified ads, obituaries, help columns, comics, .... Most subscribers don't consume all this content and won't pay for it all. In the twentieth century, it worked to bundle it all together, get advertisers to buy space in the package, and get consumers to buy the whole package. The internet and web have changed the game.

As usual, Shirky talking about the state and future of newspapers sets me to thinking about the state and future of universities [ 1, 2, 3, 4 ]. Let me say upfront that I do not subscribe to the anti-university education meme traversing the internet these days, which seems especially popular among software people. Many of its proponents speak too glibly about a world without the academy and traditional university education. Journalism is changing, not disappearing, and I think the same will be true of universities. The questions are, How will universities change? How should they change? Will universities be pulled downstream against their will, or will they actively redefine their mission and methods?

I wonder about the potential death of bundling in university. Making an analogy to Shirky's argument helps us to see some of the dissatisfaction with universities these days. About newspapers, he says:

Writing about the Dallas Cowboys in order to take money from Ford and give it to the guy on the City Desk never made much sense.

It's not hard to construct a parallel assertion about universities:

Teaching accounting courses in order to take money from state legislatures and businesses and give it to the humanities department never made much sense.

Majors that prepare students for specific jobs and careers are like the sports section. They put students in the seats. States and businesses want strong economies, so they are willing to subsidize students' educations, in a variety of ways. Universities use part of the money to support higher-minded educational goals, such as the liberal arts. Everyone is happy.

Well, they were in the 20th century.

The internet and web have drastically cut the cost of sharing information and knowledge. As a result, they have cut the cost of "acquiring" information and knowledge. When the world views the value of the bundle as largely about the acquisition of particular ingredients (sports scores or obituaries; knowledge and job skills), the business model of bundling is undercut, and the people footing most of the bill (advertisers; states and businesses) lose interest.

In both cases, the public good being offered by the bundle is the one most in jeopardy by unbundling. Cheap and easy access to targeted news content means that there is no one on the production side of the equation to subsidize "hard" news coverage for the general public. Cheap and easy access to educational material on-line erodes the university's leverage for subsidizing its public good, the broad education of a well-informed citizenry.

Universities are different from newspapers in one respect that matters to this analogy. Newspapers are largely paid for by advertisers, who have only one motivation for buying ads. Over the past century, public universities have largely been paid for by state governments and thus the general public itself. This funder of first resort has an interest in both the practical goods of the university -- graduates prepared to contribute to the economic well-being of the state -- and the public goods of the university -- graduates prepared to participate effectively in a democracy. Even still, over the last 10-20 years we have seen a steep decline in the amount of support provided by state governments to so-called "state universities", and elected representatives seem to lack the interest or political will to reverse the trend.

Shirky goes on to explain why "[n]ews has to be subsidized, and it has to be cheap, and it has to be free". Public universities have historically had these attributes. Well, few states offer free university education to their citizens, but historically the cost has been low enough that cost was not an impediment to most citizens.

As we enter a world in which information and even instruction are relatively easy to come by on-line, universities must confront the same issues faced by the media: the difference between what people want and what people are willing to pay for; the difference between what the state wants and what the state is willing to pay for. Many still believe in the overarching value of a liberal arts component to university education (I do), but who will pay for it, require it, or even encourage it?

Students at my university have questioned the need to take general education courses since before I arrived here. I've always viewed helping them to understand why as part of the education I help to deliver. The state was paying for most of their education because it had an interest in both their economic development and their civic development. As the adage floating around the Twitter world this week says, "If you aren't paying for the product, you are the product." Students weren't our customers; they were our product.

I still mostly believe that. But now that students and parents are paying the majority of the cost of the education, a percentage that rises every year, it's harder for me to convince them of that. Heck, it's harder for me to convince myself of that.

Shirky says other things about newspapers that are plausible when uttered about our universities as well, such as:

News has to be subsidized because society's truth-tellers can't be supported by what their work would fetch on the open market.

and:

News has to be cheap because cheap is where the opportunity is right now.

and:

And news has to be free, because it has to spread.

Perhaps my favorite analog is this sentence, which harkens back to the idea of sports sections attracting automobile dealers to advertise and thus subsidize the local government beat (emphasis added):

Online, though, the economic and technological rationale for bundling weakens -- no monopoly over local advertising, no daily allotment of space to fill, no one-size-fits-all delivery system. Newspapers, as a sheaf of unrelated content glued together with ads, aren't just being threatened with unprofitability, but incoherence.

It is so very easy to convert that statement into one about our public universities. We are certainly being threatened with unprofitability. Are we also being threatened with incoherence?

Like newspapers, the university is rapidly finding itself in need of a new model. Most places are experimenting, but universities are remarkably conservative institutions when it comes to changing themselves. I look at my own institution, whose budget situation calls for major changes. Yet it has been slow, at times unwilling, to change, for a variety of reasons. Universities that depend more heavily on state funding, such as mine, need to adapt even more quickly to the change in funding model. It is perhaps ironic that, unlike our research-focused sister schools, we take the vast majority of our students from in-state, and our graduates are even more likely to remain in the state, to be its citizens and the engines of its economic progress.

Shirky says that we need the new news environment to be chaotic. Is that true of our universities as well?


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 10, 2011 7:53 PM

Dave Barry on Media Computation

humorist Dave Barry

For Father's Day, my daughter gave me the most recent book of essays by humorist Dave Barry, subtitled his "Amazing Tales of Adulthood". She thought I'd especially enjoy the chapter on dance recitals, which reports -- with only the slightest exaggeration, I assure you -- an experience shared by nearly every father of daughters these days. He nailed it, right down to not finding your daughter on-stage until her song is ending.

However, his chapter on modern technology expresses a serious concern that most readers of this blog will appreciate:

... it bothers me that I depend on so many things that operate on principles I do not remotely understand, and which might not even be real.

He is talking about all of modern technology, including his microwave oven, but when he lists tools that baffle him, digital technology leads the way:

I also don't know how my cell phone works, or my TV, or my computer, or my iPod, or my GPS, or my camera that puts nineteen thousand pictures inside a tiny piece of plastic, which is obviously NOT PHYSICALLY POSSIBLE, but there it is.

He knows this is "digital" technology, because...

At some point ... all media -- photographs, TV, movies, music, oven thermometers, pornography, doorbells, etc. -- became "digital". If you ask a technical expert what this means, he or she will answer that the information is, quote, "broken down into ones and zeros." Which sounds good, doesn't it? Ones and zeros! Those are digits, all right!

The problem is, he has never seen the ones and zeros. No matter how closely he looks at his high-def digital television, he can't see any ones or zeros. He goes on to hypothesize that no one really understands digital technology, that this "digital" thing is just a story to dupe users, and that such technology is a serious potential threat to humanity.
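In fairness to the technical experts, the ones and zeros really are there; the only trick is knowing how to ask for them. A few lines of Python suffice to show the bits behind a short piece of text:

    # Print the ones and zeros behind a short piece of text. The same idea
    # -- numbers encoded as bits -- underlies photos, music, and digital TV.
    message = "Dave"
    for ch in message:
        print(ch, format(ord(ch), "08b"))

    # Output:
    # D 01000100
    # a 01100001
    # v 01110110
    # e 01100101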

Of course, Dave is just having fun, but from 10,000 feet, he is right. Take a random sample of 100 people from this planet, and you'd be lucky to find one person who could explain how an iPod or digital camera works. I know that we don't all have to understand all the details of all our tools; otherwise we would all be in trouble. But this has become a universal, omnipresent phenomenon. Digital computations are the technology of our time. Dave could have listed even more tools that use digital technology, had he wanted (or known). If you want to talk about threats to humanity, let's start talking planes, trains, and automobiles.

For so many people, every phase of life depends on or is dominated by digital computation. Shouldn't people have some inkling of how all this stuff works? This is practical knowledge, much as knowing a little physics is useful for moving around the world. Understanding digital technology can make people better users of their tools and help them dream up improvements.

But to me, this is also humanistic knowledge. Digital technology is a towering intellectual and engineering achievement, of this or any era. It empowers us, but it also stands as a testament to humanity's potential. It reflects us.

Dave talked about a threat lying in wait, and there is one here, though not the one he mentions. We need people who understand digital technology because we need people to create it. Contrary to his personal hypothesis, this stuff isn't sent from outer space to the Chinese to be packaged for sale in America.

After reading this piece, I had two thoughts.

First, I think we could do a lot for Dave's peace of mind if we simply enrolled him in a media computation course! He is more than welcome to attend our next offering here. I'll even find a way for him to take the course for free.

Second, perhaps we could get Dave to do a public service announcement for studying computer science and digital technology. He's a funny guy and might be able to convince a young person to become the next Alan Kay or Fran Allen. He is also the perfect age to appeal to America's legislators and school board members. Perhaps he could convince them to include digital technology as a fundamental part of general K-12 education.

I am pretty sure that I will need your help to make this happen. I am no more capable of convincing Dave Barry to do this than of producing a litter of puppies. (*)

~~~~

(*) Analogy stolen shamelessly from the same chapter.


Posted by Eugene Wallingford | Permalink | Categories: General

July 03, 2011 1:16 PM

A Few Percent

Novak Djokovic

I am a huge tennis fan. This morning, I watched the men's final at Wimbledon and, as much as I admire Roger Federer and Rafael Nadal for their games and attitudes, I really enjoyed seeing Novak Djokovic break through for his first title at the All-England Club. Djokovic has been the #3 ranked player in the world for the last four years, but in 2011 he has dominated, winning 48 of 49 matches and two Grand Slam titles.

After the match, commentator and former Wimbledon champion John McEnroe asked Djokovic what he had changed about his game to become number one. What was different between this year and last? Djokovic shrugged his shoulders, almost imperceptibly, and gave an important answer:

A few percent improvement in several areas of my game.

The difference for him was not an addition to his repertoire, a brand new skill he could brandish against Nadal or Federer. It was a few percentage points' improvement in his serve, in his return, in his volley, and in his ability to concentrate. Keep in mind that he was already the best returner of service in the world and strong enough in the other elements of his game to compete with and occasionally defeat two of the greatest players in history.

That was not enough. So he went home and got a little better in several parts of his game.

Indeed, the thing that stood out to me from his win this morning against Rafa was the steadiness of his baseline play. His ground strokes were flat and powerful, as they long have been, but this time he simply hit more balls back. He made fewer errors in the most basic part of the game, striking the ball, which put Nadal under constant pressure to do the same. Instead of making mistakes, Djokovic gave his opponent more opportunities to make mistakes. This must have seemed especially strange to Nadal, because this is one of the ways in which he has dominated the tennis world for the last few years.

I think Djokovic's answer is so important because it reminds us that learning and improving our skills are often about little things. We usually recognize that getting better requires working hard, but I think we sometimes romanticize getting better as being about qualitative changes in our skill set. "Learn a new language, or a new paradigm, and change how you see the world." But as we get better this becomes harder and harder to do. Is there any one new skill that will push Federer, Nadal, or Djokovic past his challengers? They have been playing and learning and excelling for two decades each; there aren't many surprises left. At such a high level of performance, it really does come down to a few percent improvement in each area of the game that make the difference.

Even for us mortals, whether playing tennis or writing computer programs, the real challenge -- and the hardest work -- often lies in making incremental improvements to our skills. In practicing the cross-court volley or the Extract Class refactoring thousands and thousands of times. In learning to concentrate a little more consistently when we tire by trying to concentrate a little more consistently over and over.

As Nadal said in his own post-match interview, the game is pretty simple. The challenge is to work hard and learn how to play it better.

Congratulations to Novak Djokovic for his hard work at getting a few percent better in several areas of his game. He has earned the accolade of being, for now, the best tennis player in the world.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 01, 2011 2:29 PM

What Would Have To Be True...

Yesterday, I read Esther Derby's recent post, Promoting Double Loop Learning in Retrospectives, which discusses ways to improve the value of our project retrospectives. Many people who don't do project retrospectives will still find Derby's article useful, because it's really about examining how we think and expanding possibilities.

One of the questions she uses to jump start deeper reflection is:

What would have to be true for [a particular practice] to work?

This is indeed a good question to ask when we are trying to make qualitative changes in our workplaces and organizations, for the reasons Derby explains. But it is also useful more generally as a communication tool.

I have a bad personal habit. When someone says something that doesn't immediately make sense to me, my first thought is sometimes, "That doesn't make sense." (Notice the two words I dropped...) Even worse, I sometimes say it out loud. That doesn't usually go over very well with the person I'm talking to.

Sometime back in the '90s, a book on personal communication taught me a technique for overcoming this disrespectful tendency, which reflects a default attitude. The technique is to train yourself to think a different first thought:

What would have to be true in order for that statement to be true?

Rather than assume that what the person says is false, assume that it is true and figure out how it could be true. This accords my partner the respect he or she deserves and causes me to think about the world outside my own point of view. What I found in practice, whether with my wife or with a professional colleague, was that what they had said was true -- from their perspective. Sometimes we were starting from different sets of assumptions. Sometimes we perceived the world differently. Sometimes I was wrong! By pausing before reacting and going on the defensive (or, worse, the offensive), I found that I was saving myself from looking silly, rash, or mean.

And yes, sometimes, my partner was wrong. But now my focus was not on proving him wrong but on addressing the underlying cause of his misconception. That led to a very different sort of conversation.

So, this technique is not an exercise in fantasy. It is an exercise in more accurate perception. Sometimes, what would have to be true in the world actually is true. I just hadn't noticed. In other cases, what would have to be true in the world is how the other person perceives the world. This is an immensely useful thing to know, and it helps me to respond both more respectfully and more effectively. Rather than try to prove the statement false in some clinical way, I am better served by taking one of two paths:

  • helping the other person perceive the world more clearly, when his or her perception clashes with reality, or
  • recognizing that the world is more complicated than I first thought and that, at least for now, I am better served by acting from a state of contingency, in a world of different possible truths.

I am still not very good at this, and occasionally I slip back into old habits. But the technique has helped me to be a better husband as well as a better colleague, department head, and teacher.

Speaking as a teacher: It is simply amazing how different interactions with students can be when, after students say something that seems to indicate they just don't get it, I ask myself, "What would have to be true in order for that statement to be true?" I have learned a lot about student misconceptions and about the inaccuracy of the signals I send students in my lectures and conversations just by stepping back and thinking, "What would have to be true..."

Sometimes, our imaginations are too small for our own good, and we need a little boost to see the world as it really is. This technique gives us one.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

June 30, 2011 10:26 AM

Manhood, and Programming, for Amateurs

A while back, I mentioned reading "Manhood for Amateurs". Like many of you, I collect quotes and passages. These days, I collect them in text files, a growing collection of digital commonplace books that I store in one of my folders of stuff. "Manhood for Amateurs" gave me many. Some seem as true about my life as programmer or teacher as they are about manhood or the world more generally.

... I never truly felt [the immensity of the universe] until I looked through a first-rate instrument at an unspoiled sky.

Call me crazy, but this brought to mind the summer I learned Smalltalk. I had been programming for ten years. When I first opened that Digitalk image, I felt like I'd walked through the back of C.S. Lewis's wardrobe. After working through a simple tutorial from the manual, I was ready to explore. The whole world of computer science seemed to lie before me, written in simple sentences. Even arithmetic was implemented right there for me to read, to play with.

Smalltalk was my first first-rate instrument.

(Scheme was my second. There is nothing there! Just a few functions. The paucity of types, functions, objects, and libraries let me focus on what had to be there.)

the dark tide of magical boredom [... was ...] the source of all my inspiration

I wonder how much of my desire to program early on was driven by the stark fact that, if I wanted the computer to do anything, I had to teach it. There are so many distractions these days, on the computer and off. Will some possible future programmers never have a chance to create a desire out of their own bored minds' search?

If we are conducting our lives in the usual fashion, each of us serves as a constant source of embarrassment to his or her future self....

Spoken like a programmer. If you don't believe me, dig out some of your old code and read it.

Everything you love most is a lifelong focus of insufficiency.

Chabon is speaking as a man, a son, a husband, a father, and also, I presume, a writer. I feel this insufficiency as a teacher and as a programmer.

Every work of art is a secret handshake, a challenge that seeks the password, a heliograph flashed from a tower window, an act of hopeless optimism in the service of bottomless longing.

Okay, so this is more poetic than some of my programmer friends care to be, but it made me think of some of the little gems of code I have stumbled upon over the years. They were conceived in a programmer's mind and made real, then shared with the world. One of the great joys of living in this age is the open-source software world and having the web and GitHub and CPAN available. It is so easy to find software created by a fellow artist out of love and hope and magic. It is so easy to share our own creations.

That leads me to one last quote, which comes from an essay in which Chabon describes his experience as a member of writers' workshops in his MFA program. He was a young poseur, being as dishonest to himself as he was to the people around him. He began to grow up when he realized something:

Without taking themselves half as seriously as I did, they were all twice as serious about what they were doing.

Take a look at all the wonderful work being done in the software world and being shared and written about for us. Then see if you can look yourself in the mirror and pretend you are anything but just another beginner on a path to being better. Then, get serious.

(Just don't mistake being serious for not having fun!)


Posted by Eugene Wallingford | Permalink | Categories: General

June 13, 2011 7:11 PM

A Few More Thoughts on the Future of Universities

Our state legislature still has not passed a budget for the next fiscal year, which leaves the university hanging, waiting to set its course for 2011-2012. We expect another round of big cuts, the latest in more than a decade over which the funding base for state universities has eroded rapidly.

I've written before about the fault lines under higher education. I'm really not a Chicken Little sort of person, but I do think it's important that we pay attention to changes in the world and prepare for them -- maybe even get ahead of the curve and actively build an institution that serves the state and its people well.

Over the weekend, I read William Deresiewicz's recent piece in The Nation, Faulty Towers: The Crisis in Higher Education, which looks at the pyramid scheme that is graduate education in the humanities. Deresiewicz writes about places like Yale, but much of what he says applies across the academy. This passage made sirens go off in my head:

As Gaye Tuchman explains in Wannabe U (2009), a case study in the sorrows of academic corporatization, deans, provosts and presidents are no longer professors who cycle through administrative duties and then return to teaching and research. Instead, they have become a separate stratum of managerial careerists, jumping from job to job and organization to organization like any other executive: isolated from the faculty and its values, loyal to an ethos of short-term expansion, and trading in the business blather of measurability, revenue streams, mission statements and the like. They do not have the long-term health of their institutions at heart. They want to pump up the stock price (i.e., U.S. News and World Report ranking) and move on to the next fat post.

...

What we have in academia, in other words, is a microcosm of the American economy as a whole: a self-enriching aristocracy, a swelling and increasingly immiserated proletariat, and a shrinking middle class. The same devil's bargain stabilizes the system: the middle, or at least the upper middle, the tenured professoriate, is allowed to retain its prerogatives -- its comfortable compensation packages, its workplace autonomy and its job security -- in return for acquiescing to the exploitation of the bottom by the top, and indirectly, the betrayal of the future of the entire enterprise.

Things aren't quite that bad at my school. Most of our administrators are home-grown, not outside hires using us as the next rung on their career ladder. But we are susceptible to other trends identified in this article, in particular the rapid growth of the non-faculty staff, both mid-level administrators and support staff for the corporate and human services elements of the university.

Likewise, the situation is different with our faculty. We have relatively few adjuncts teaching courses, and an even smaller proportion of grad students. We are a "teaching university", and our tenured and tenure-track faculty teach three courses each semester. That's great for our students, but our productivity in the classroom makes scrounging for grants and external research dollars hard to do. We may be more productive in the classroom than our research-school brethren, but with less recourse to external dollars we are more dependent on state funding. Unfortunately, our board of regents and our state government don't seem to appreciate this and leave us hanging by much thinner threads as state appropriations dwindle. Now there is talk of assigning faculty who are less productive as researchers to teach a fourth class each semester, which will only further hamper our ability to create and disseminate knowledge -- and our ability to attract external funding.

The idea of career administrators hit close to home for me personally, too, as I enter my third term as a department head. I am at heart a computer scientist and programmer, not an administrator. But it's easy to get sucked into the vortex of paperwork and meetings. I need to think of this year not as the first year of my next term but as the first year of my last term, or perhaps as my third-to-last year. Such a mindset may be a better way for me to aim at goals I think most important while preparing the department for a transition to new leadership.

One last passage in Deresiewicz's article got me to thinking. While talking about the problems with tenure, he points out one of the problems of not having tenure: Who will pursue the kind of research that cannot be converted to a quick buck if faculty can expect to be jettisoned by universities at any time, but especially as they age and become more expensive than new hires?

Doctors and lawyers can set up their own practice, but a professor can't start his own university.

I've been thinking about this idea for a while but don't think I've written about it yet. It's something that really intrigues me. There are so many obstacles lying in the way of achieving the idea, and the differential immediate applied value of the various disciplines is only one. Yet it is an interesting thought experiment, one I hope to write about more in the future.


Posted by Eugene Wallingford | Permalink | Categories: General

June 09, 2011 6:17 PM

Language, Style, and Life

I should probably apologize for the florid prose in this recent post. It was written under the influence of "Manhood for Amateurs", Michael Chabon's book of essays on life, which I was reading at the time. Chabon is a novelist known for his evocative prose, and the language of "Manhood" captivated me. However, such style and vocabulary are perhaps best left to masters like Chabon. In the hands of amateur writers such as me, they pale in comparison.

I am often influenced as a writer by what I've been reading lately. There is something about the rhythm of some authors' words and sentences that vibrates in my mind and finds its way into my own words and sentences. Recognizing this tendency in myself, I often prepare for a bout of writing by reading a particular writer. For example, when I set out to write software patterns, I like to prime my brain by reading Kent Beck's Smalltalk Best Practice Patterns, for its spare, clean, readable style. I do the same when I code, sometimes. Browsing the current code base gets my mind ready to write new code and to refactor. Whenever I used to start a new Smalltalk project, I would browse the image to put myself in a Smalltalk frame of mind. These days, I'll do the same with Ruby -- GitHub is full of projects I admire by programmers whose work I respect.

I can strongly recommend Chabon's book. It gave me as much life as any book I've read in a long while. Men will find themselves on every page of "Manhood". American men of a certain age will recognize and appreciate its cultural allusions even more. Women will find a bit of insight into the minds of the men in their lives, and receive confirmation from one particularly honest man that, most of the time, we don't have a clue what we are doing.

Is there anything in "Manhood" specifically for programmers and other computer types? No, though there are a couple of references to computers, including this one in the essay "Art of Cake":

Cooking satisfies the part of me that enjoys struggling for days to transfer an out-of-print vinyl record by Klaatu to digital format, screwing with scratch filters and noise reducers, only to have the burn fail every time at the very same track.

I love to cook in much the way Chabon describes, but I must admit that I've never had quite the drive to tinker with settings, configuration files, and boot sectors that my Linux friends seem to have. Cooking fills this need better for me than installing the latest distribution of my operating system. My drive with computers has always been to create things with programs, and in that regard I was most at home in "Manhood" when Chabon talked about writing.

Chabon does have an essay in the closing section of the book that echoes my observation that there is no normal, though his essay explores what it means for daily life to be normal. I usually see connections of this sort to my life, and readers of this blog won't be surprised if I write a post soon about how the truths of life that Chabon explores find themselves residing in the mind of this programmer and teacher.

One note in closing: Good Boy that I am, I must tell you that Chabon uses language I would never use, and he occasionally discusses frankly, though briefly, drug usage and sex. Fortunately, as I grew up, I learned that I could read about things I would never say or do, and benefit from the experience.


Posted by Eugene Wallingford | Permalink | Categories: General

May 23, 2011 2:34 PM

Plan A Versus Plan B

Late last week, Michael Nielsen tweeted:

"The most successful people are those who are good at Plan B." -- James Yorke

This is one of my personal challenges. I am a pretty good Plan A person. Historically, though, I am a mediocre Plan B person. This is true of creating Plan B, but more importantly of recognizing and accepting the need for Plan B.

Great athletes are good at Plan B. My favorite Plan B from the sporting world was executed by Muhammad Ali in the Rumble in the Jungle, his heavyweight title fight against George Foreman in October 1974. Ali was regarded by most at that time as the best boxer in the world, but in Foreman he encountered a puncher of immense power. At the end of Round 1, Ali realized that his initial plan of attacking Foreman was unlikely to succeed, because Foreman was also a quick fighter who had begun to figure out Ali's moves. So Ali changed plans, taking on greater short-term risk by allowing Foreman to hit him as much as he wanted, so long as the blows were not the kind likely to end the fight immediately. Over the next few rounds, Foreman began to wear down, unaccustomed to throwing so many punches for so many rounds against an opponent who did not weaken. Eventually, Ali found his opening, attacked, and ended the fight in Round 8.

This fight is burned in my mind for the all-time great Plan B moment: Ali sitting on his stool between the first and second rounds, eyes as wide and white as platters. I do not ever recall seeing fear in Muhammad Ali's eyes at any other time in his career, before or after this fight. He believed that Foreman could knock him out. But rather than succumb to the fear, he gathered himself, recalculated, and fought a different fight. Plan B. The Greatest indeed.

Crazy software developer that I am, I see seeds of Plan B thinking in agile approaches. Keep Plan A simple, so that you don't overcommit. Accept Plan B as a matter of course, refactoring in each cycle to build what you learn from writing the code back into the program. React to your pair's ideas and to changes in the requirements with aplomb.

There is good news: We can learn how to be better at Plan B. It takes effort and discipline, just as changing any of our habits does. For me, it is worth the effort.

~~~~

If you would like to learn more about the Rumble in the Jungle, I strongly recommend the documentary film When We Were Kings, which tells the story of this fight and how it came to be. Excellent sport, excellent art, and you can see Ali's Plan B moment with your own eyes.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Software Development

April 26, 2011 4:41 PM

Students Getting There Faster

I saw a graphic over at Gaping Void this morning that incited me to vandalism:

teaching is the art of getting people to where they need to be

A lot of people at our colleges and universities seem to operate under the assumption that our students need us in order to get where they need to be. In A.D. 1000, that may have been true. Since the invention of the printing press, it has become less and less true. With the invention of the digital computer, the world wide web, and more and more ubiquitous network access, it's false, or nearly so. I've written about this topic from another perspective before.

Most students don't need us, not really. In my discipline, a judicious self-study of textbooks and all the wonderful resources available on-line, lots of practice writing code, and participation in on-line communities of developers can give most students a solid education in software development. Perhaps this is less true in other disciplines, but I think most of us greatly exaggerate the value of our classrooms for motivated students. And changes in technology put this sort of self-education within reach of more students in more disciplines every day.

Even so, there has never been much incentive for people not to go to college, and plenty of non-academic reasons to go. The rapidly rising cost of a university education is creating a powerful financial incentive to look for alternatives. As my older daughter prepares to head off to college this fall, I appreciate that incentive even more than I did before.

Yet MacLeod's message resonates with me. We can help most students get where they need to be faster than they would get there without us.

In one sense, this has always been true. Education is more about learning than teaching. In the new world created by computing technologies, it's even more important that we in the universities understand that our mission is to help people get where they need to be faster and not try to sell them a service that we think is indispensable but which students and their parents increasingly see as a luxury. If we do that, we will be better prepared for reality as reality changes, and we will do a better job for our students in the meantime.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 18, 2011 4:44 PM

At the Penumbra of Intersections

Three comments on my previous post, Intersections, in decreasing order of interest to most readers.

On Mediocrity. Mediocrity is a risk if we add so many skills to our portfolio that we don't have the ability or energy to be good at all of them. This article talks about start-up companies, but I think its lesson applies more broadly to the idea of carving out one's niche. For start-ups as in life, mediocrity is often a worse outcome than failure. When we fail, we know to move on and do something else. When we achieve mediocrity, sometimes we are just good enough to feel comfortable. It's hard to muster the willingness to give up that security, or the energy it takes to push ourselves out of the local maximum. But then we miss out on the chance to reach our full potential.

Who Is "Non-Technical"? My post said, "my talk considered the role of programming in the future of people who study human communication, history, and other so-called non-technical fields". I qualified "non-technical", but still I wonder: How many disciplines are non-technical these days, in the era of big data and computation everywhere? How many of these disciplines will be non-technical in the same way 20 years from now?

Keller McBride's color spray artwork

Yak Shaving. I went looking for Venn diagrams to illustrate my post, and then realized I should just create my own. As I played with a couple of tools, I remembered a cool CS 1 assignment I used several years ago, and one student's solution in particular. Suddenly I was obsessed with using my own homegrown tool. That meant finding the course archive and Keller's solution. Then finding my own solution, which had a couple of extra features. Then digging out Dr. Java and making it work with a current version of the media comp tools. Then extending the simple graphics language we used, and refactoring my code, and... The good news is that I now have running again a nice, very simple tool for drawing simple graphics, one that can be used to annotate existing images. And I got to have fun tinkering with code for a while on a cloudy Sunday afternoon.


Posted by Eugene Wallingford | Permalink | Categories: General

April 17, 2011 12:43 PM

Intersections

It seems I've been running into intersections everywhere.

In one of Richard Feynman's letters, he wrote of two modes for scientists: deep and broad. Scientists who focus on one thing typically win the big awards, but Feynman reassured his correspondent that scientists who work broadly at the intersections of multiple disciplines can make valuable contributions.

Scott Adams wrote about the value of combining skills. John Cook commented on Adams's idea, and one of Cook's readers commented on Cook's comment.

A week ago Friday, I spoke at a gathering of professors, students, and local business people who are interested in interactive digital technologies. Among other things, my talk considered the role of programming in the future of people who study human communication, history, and other so-called non-technical fields. One of my friends and former students, now a successful entrepreneur who employs many of our current and former students, spoke about how to succeed in business as a start-up. His talk inspired the audience with the power of passion, but he also gave some practical advice. It is difficult to be the best at any one thing, but if you are very good at two or three or five, then you can be the best in a particular market niche. The power of the intersection.

Wade used a Venn diagram to express his idea:

a Venn diagram of two intersecting sets

The more skills -- "core competencies", in the jargon of business and entrepreneurship -- you add, the more unique your niche:

a Venn diagram of four intersecting sets

As I thought about intersections in all these settings, a few ideas began to settle in my mind:

Adding more circles to your Venn diagram is a good thing, even if you feel they limit your ability to excel in one of the other areas. Each circle adds depth to your niche at the intersection. Having several skills gives you the agility to shift your focus as the world changes -- and as you change.

At some point, adding more circles to your Venn diagram starts to hurt you, not help you. For most of us, there is a limit to the number of different areas we can realistically be good in. If we are unable to perform at a high level in all the areas, or to keep up with them as they evolve, we end up being mediocre. Mediocrity isn't usually good enough to excel in the market, and it isn't a fun place to live.

The fact that we can create intersections in which to excel is a great opportunity for people who do not have the interest or inclination to focus on any one area too narrowly. Perhaps we can't all be Nobel Prize-winning physicists, but in principle we all can make our own niche.

The challenge is that you still have to work hard. This isn't about being the sort of dilettante who skims along the surface of knowledge without ever getting wet. It's about being good at several things, and that takes time and energy.

Of course, that is what makes Nobel Prize winners, too: hard work. They simply devote nearly all of their time and energy to one discipline.

I think it's good news that hard work is the common denominator of nearly all success. We may not control many things in this world, but we have control over how hard we work.


Posted by Eugene Wallingford | Permalink | Categories: General

April 12, 2011 7:55 PM

Commas, Refactoring, and Learning to Program

The most important topic in my Intelligent Systems class today was the comma. Over the last week or so, I had been grading their essays on communicating the structure and intent of programs. I was not all that surprised to find that their thoughts on communicating the structure and intent of programs were not always reflected in their essays. Writing well takes practice, and these essays are for practice. But the thing that stood out most glaringly from most of the papers was the overuse, misuse, and occasional underuse of the comma. So after I gave a short lecture on case-based reasoning, we talked about commas. Fun was had by all, I think.

On a more general note, I closed our conversation with a suggestion that perhaps they could draw on lessons they learn writing, documenting, and explaining programs to help them write prose. Take small steps when writing new content, not worrying as much about form as about the idea. Then refactor: spend time reworking the prose, rewriting, condensing, and clarifying. In this phase, we can focus on how well our text communicates the ideas it contains. And, yes, good structure can help, whether at the level of sentences, paragraphs, or the whole essay.
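
The analogy works because refactoring code follows the same rhythm. Here is a minimal sketch in Ruby -- a made-up example of mine, not code from the course -- showing a first draft that gets the idea down, then a rewrite that condenses and clarifies without changing what the program means:

    # First draft: correct, but the idea is buried in bookkeeping.
    def report(orders)
      total = 0
      orders.each do |order|
        total += order[:price] * order[:quantity]
      end
      "Total: $#{format('%.2f', total)}"
    end

    # After a rewriting pass: extract the idea, and let each piece say one thing.
    def total(orders)
      orders.sum { |order| order[:price] * order[:quantity] }
    end

    def report(orders)
      "Total: $#{format('%.2f', total(orders))}"
    end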

I enjoyed the coincidence of later reading this passage in Roy Behrens's blog, The Poetry of Sight:

Fine advice from poet Richard Hugo in The Triggering Town: Lectures and Essays on Poetry and Writing (New York: W.W. Norton, 1979)--

Lucky accidents seldom happen to writers who don't work. You will find that you may rewrite and rewrite a poem and it never seems quite right. Then a much better poem may come rather fast and you wonder why you bothered with all that work on the earlier poem. Actually, the hard work you do on one poem is put in on all poems. The hard work on the first poem is responsible for the sudden ease of the second. If you just sit around waiting for the easy ones, nothing will come. Get to work.

This is an important lesson for programmers, especially relative beginners, to learn. The hard work you do on one program is put in on all programs. Get to work. Write code. Refactor. Writing teaches writing.

~~~~

Long-time readers of this blog may recall that I once recommended The Triggering Town in an entry called Reading to Write. It is still one of my favorites -- and due for another reading soon!


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

April 04, 2011 7:26 PM

Saying "Hell Yeah! or No" to "Hell Yeah! or No"

Sometimes I find it hard to tell someone 'no', but I rarely regret it.

I have been thinking a lot lately about the Hell Yeah! or No mindset. This has been the sort of year that makes me want to live this way more readily. It would be helpful when confronting requests that come in day to day, the small stuff that so quickly clutters a day. It would also be useful when facing big choices, such as "Would you like another term as department head?"

Of course, like most maxims that wrap up the entire universe in a few words, living this philosophy is not as simple as we might like it to be.

The most salient example of this challenge for me right now has to do with granularity. Some "Hell Yeah!"s commit me to other yesses later, whether I feel passionate about them or not. If I accept another term as head, I implicitly accept certain obligations to serve the department, of course, and also the dean. As a department head, I am a player on the dean's team, which includes serving on certain committees across the college and participating in college-level discussions of strategy and tactics. The 'yes' to being head is, in fact, a bundle of yesses, more like a project in Getting Things Done than a next action.

Another thought came to mind while ruminating on this philosophy, having to do with opportunities. If I do not find myself with the chance to say "Hell Yeah!" very often, then I need to make a change. Perhaps I need to change my attitude about life, to accept the reality of where and who I am. More likely, though, I need to change my environment. I need to put myself in more challenging and interesting situations, and hang out with people who are more likely to ask the questions that provoke me to say "Hell Yeah!"


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

March 31, 2011 8:06 PM

My Erdos Number

Back in the early days of my blog, I wrote about the H number as a measure of a researcher's contribution to the scientific community. In that article, the mathematician Paul Erdos makes a guest appearance in a quoted discussion about the trade-off between a small number of highly influential articles and a large number of articles having smaller effect.

Erdos is perhaps the best example of the former. By most accounts, he published more papers than any other mathematician in history, usually detailing what he called "little theorems". He is also widely known for the number of different coauthors with whom he published, so much so that one's Erdos number is a badge of honor among mathematicians and computer scientists. The shorter the path between a researcher and Erdos in the collaboration graph of authors and co-authors, the more impressive.

Kevlin Henney recently pointed me in the direction of Microsoft's VisualExplorer, which finds the shortest paths between any author and Erdos. Now I know that my Erdos number is 3. To be honest, I was surprised to find that my number was so small. There are many paths of lengths four and five connecting me to Erdos, courtesy of several of my good buddies and co-authors who started their professional lives in mathematics. (Hey to Owen and Joe.)

But thanks to Dave West, I have a path of length 3 to Erdos. I have worked with Dave at OOPSLA and at ChiliPLoP on a new vision for computer science, software development, and university education. Like me, Dave has not published a huge number of papers, but he has an eclectic set of interests and collaborators. One of his co-authors published with Erdos. 1-2-3!

In the world of agile software development, we have our own graph-theoretic badge of honor, the Ward number. If you have pair-programmed with Ward Cunningham, your Ward number is 1... and so on. My Ward number is 2, via the same Joe from my Erdos network: Joe Bergin.

Back in even earlier days of my blog, I wrote an entry connected to Erdos, via his idea of Proofs from THE BOOK. Erdos was a colorful character!

Yes, computer scientists and mathematicians like to have fun, even if their fun involves graphs and path-finding algorithms.
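
For the curious, an Erdos number really is just a shortest-path computation. Here is a toy sketch in Ruby -- the graph and the names in it are placeholders I made up, not real collaboration data -- that finds it with breadth-first search:

    # Find the length of the shortest collaboration path from an author
    # to Erdos in an undirected coauthorship graph.
    def erdos_number(graph, author, target = "Erdos")
      distance = { author => 0 }
      queue = [author]
      until queue.empty?
        current = queue.shift
        return distance[current] if current == target
        graph.fetch(current, []).each do |coauthor|
          next if distance.key?(coauthor)
          distance[coauthor] = distance[current] + 1
          queue << coauthor
        end
      end
      nil  # no path: the Erdos number is infinite
    end

    graph = {
      "Wallingford" => ["West"],
      "West"        => ["Wallingford", "Smith"],
      "Smith"       => ["West", "Erdos"],
      "Erdos"       => ["Smith"]
    }

    puts erdos_number(graph, "Wallingford")   # => 3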


Posted by Eugene Wallingford | Permalink | Categories: General

March 28, 2011 8:14 PM

A Well-Meaning Headline Sends an Unfortunate Signal

Last week, the local newspaper ran an above-the-fold front-page story about the regional Physics Olympics competition. This is a wonderful public-service piece. It extols young local students who spend their extracurricular time doing math and physics, and it includes a color photo showing two students who are having fun. If you would like to see the profile of science and math raised among the general public, you could hardly ask for more.

Unless you read the headline:

Young Einsteins

I don't want to disparage the newspaper's effort to help the STEM cause, but the article's headline undermines the very message it is trying to send. It whispers: science isn't fun; it isn't for everyone; it is for brains. We're looking for smart kids. Regular people need not apply.

Am I being too sensitive? No. The headline sends a subtle message to students and parents. It sends an especially dangerous signal to young women and minorities. When they see a message that says, "Science kids are brainiacs", they are more likely than other kids to think, "They don't mean me. I don't belong."

I don't want anyone to mislead people about the study of science, math, and CS. They are not the easiest subjects to study. Most of us can't sleep through class, skip homework, and succeed in these courses. But discipline and persistence are more important ingredients to success than native intelligence, especially over the long term. Sometimes, when science and math come too easily to students early in their studies, they encounter difficulties later. Some come to count on "getting it" quickly and, when it no longer comes easily, they lose heart or interest. Others skate by for a while because they don't have to practice and, when it no longer comes easily, they haven't developed the work habits needed to get over the hump.

If you like science and math enough to work at them, you will succeed, whether you are an Einstein or not. You might even do work that is important enough to earn a Nobel Prize.


Posted by Eugene Wallingford | Permalink | Categories: General

February 28, 2011 4:16 PM

Unsurprising Feelings of Success

As reported in this New York Times obit, Arthur Ashe once said this about the whooping he put on my old favorite bad boy, Jimmy Connors, in the 1975 Wimbledon championship:

"If you're a good player," he said softly, "and you find yourself winning easily, you're not surprised."

I've never been a good enough tennis player to have his feeling on court. Likewise with running, where I am usually competing more with my own expectations than with other runners. In that self-competition, though, I have occasionally had the sort of performance that tops my realistic expectations but doesn't really surprise me. Preparation makes success possible.

In the non-competitive world of programming, I sometimes encounter this feeling. I dive into a new language, tool, or framework, expecting slow and unsteady progress toward competence or mastery. But then I seem to catch a wave and cruise forward, deeper and more confident than I had a right to expect. In those moments, it's good to step back and remember: we are never entitled to feel that way, but our past work has made such moments possible.

When those moments come, they are oh, so sweet. They make even more palatable the tough work we do daily, moving one shovel of dirt from here to there.


Posted by Eugene Wallingford | Permalink | Categories: General

February 19, 2011 9:55 AM

Takedown

As a blogger who sometimes worries that I am a person you've never heard of, "writing uninterestingly about the unexceptional", and that other people have already written about whatever I have to say, only better, I really enjoyed William Zinsser's recent takedown of a New York Times Book Review editor on the subject of writing memoirs. His paragraph that starts

Sorry to be so harsh, but I don't like people telling other people they shouldn't write about their life.

and ends

The Times can use its space more helpfully than by allowing a critic to hyperventilate on an exhausted subject. We don't have that many trees left.

is one of my favorite paragraphs in recent memory. I admit a juvenile fondness for hoisting someone with his own petard, and Zinsser does so masterfully.

One of the great beauties of the web is that anyone can write and publish. Readers choose to give their attention to the work that matters to them. No trees are harmed along the way.

In reading and writing together, we can all become better writers -- and better thinkers.

~~~~

I recommended Zinsser's On Writing Well, among several other books, in an earlier entry on reading to write. It remains at the top of my suggested reading list.


Posted by Eugene Wallingford | Permalink | Categories: General

February 05, 2011 9:57 AM

You Are Not Your Work

It is easy for me to get sucked into a mindset in which I equate myself with what I do. In Five Lies I No Longer Believe, Todd Henry writes:

I am not my work, and I am especially not defined by how my work is received. That is not an excuse for laziness; it's permission to engage fully and freely and to bless everyone I encounter without worrying about what they think of me. This is hard medicine to swallow in a culture that celebrates title and the little spaces we carve for ourselves in the marketplace. Not me, not any longer.

"I am what and whom and how I teach."

"I am the programs I create, the papers I write, the grants I receive."

"I am the head of the department."

It is dangerous to think this way when I fail in any one of these arenas. It undervalues who I am and what I can offer. It closes my eyes to other parts of my life.

It is also dangerous to think this way when I succeed. Even in success, this view diminishes me. And it creates an endless cycle of having to succeed again in order to be.

When we think we are what we do, we often constrain our actions based on what other people will think of us. That makes it hard to teach the hard truths, to make tough decisions, to lead people. It makes it hard to create things that matter.

Even if we tune out what other people think, we find that we are always judging ourselves. This is as restrictive and as counterproductive as worrying about other peoples' idea of us.

Having different roles in life and doing different kinds of things can help us avoid this trap. Activity, success, and failure in one arena are only a part of who we are. We have to be careful, though, not to equate ourselves with the sum of our activities, successes, and failures in all these areas. Whatever that sum is, we are more.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

January 08, 2011 10:41 AM

A Healthy Diet for the Mind

"You are what you eat." You probably heard this bon mot as a child. It encouraged us to eat healthy foods, so that we could grow up to be big and strong.

I think the same thing is true of what we read. When we eat, we consume nutrients and vitamins. When we read, we consume ideas. Some are meat and potatoes, others fruits and vegetables. Some are the broad base of a healthy diet, like breads and grains. Others are junk food. Ideas may even occasionally be roughage!

There probably isn't an idea analogue to the food pyramid. Even more than with the food we eat, there is no right answer for what and how much of any kind of literature we should read. There are many ways for any particular person to assemble a healthy reading diet. Still, there are kinds of literature that offer us ideas in different forms, different concentrations, and different modalities. Fiction is where most children start, whether historical, fantastical, or simply about life. Non-fiction, too, comes in many categories: biography, history, science, ... too many to mention.

However, I do think that writer Matthew Kelly is right when he says, "We need a diet of the mind just as much as we need a diet of the body." Just as we should be mindful of what we put in our bodies, we should be mindful of what we put in our minds.

Each person needs to find the reading balance that makes them healthy and happy. I tend to read a lot of technical literature in my own discipline. Academics are prone to this imbalance. One of my challenges is to read enough other kinds of things to maintain a balanced intellectual life. It turns out that reading outside my discipline can make me a better computer scientist, because it gives me more kinds of ideas to use. But the real reason to read more broadly is to have a balanced mind and life.

I know people who wonder why they need to bother reading fiction at all. It doesn't make them better programmers. It doesn't help them change the world via political action. Both of these statements are so, so wrong. Shakespeare and Euripides and Kurt Vonnegut can teach us about how to change the world and even how to become better programmers! But that's not the point. They also make us better people.

Whenever I encounter this sentiment, I always send my friends to Tim O'Reilly's The Benefits of a Classical Education. Most programmers I know hold O'Reilly Media in near reverence, so perhaps they'll listen to its founder when he says, "Classical stories come often to my mind, and provide guides to action". The fiction I've read has shaped how I think about life and problems and given me ways to think about solutions and actions. That's true not only of the classics but also of Kurt Vonnegut and Isaac Asimov, Arthur Clarke and Franz Kafka.

As I wrote recently, I've been reading Pat Conroy's My Reading Life. Near the end of the book, he tells a powerful story about him and his mom reading Thomas Wolfe's Look Homeward, Angel when he was a teenager. This book gave them a way to talk about their own tortured lives in a way they could never have done without a story standing between them and the truths they lived but could not speak. As Conroy says, "Literature can do many things; sometimes it can do even the most important things." I might go one step further: sometimes, only literature can do the most important things.

Sure, there is plenty of junk food for the mind, too. It is everywhere, in our books and our blogs, and on our TV and movie screens. But just as with food, we need not eliminate all sweets from our diets; we simply need to be careful about how much we consume. A few sweets are okay, maybe even necessary in some people's diets. We all have our guilty pleasures when it comes to reading. However, when my diet is dominated by junk, my mind becomes weaker. I become less healthy.

Some people mistakenly confuse medium with nutritional value. I hear people talk about blogs and Twitter as if they offer only the emptiest of empty calories, the epitome of junk reading. But the medium doesn't determine nutritional value. My Twitter feed is full of links to marvelous articles and conversation between solid thinkers about important ideas. Much is about computer science and software development, but I also learn about art, literature, human affairs, and -- in well-measured doses -- politics. My newsreader serves up wonderful articles, essays, analyses, and speculations. Sure, both come with a little sugar now and then, but that's just part of what makes it all so satisfying.

People should be more concerned when a medium that once offered nutritional value is now making us less healthy. Much of what we call "news" has in my mind gone from being a member of the grain food group to being junk food.

We have to be careful to consume only the best sources of ideas, at least most of the time, or risk wasting our minds. And when we waste our minds, we waste our gifts.

You are what you read. You become the stories you listen to. Be mindful of the diet of ideas you feed your mind.


Posted by Eugene Wallingford | Permalink | Categories: General

December 27, 2010 9:21 PM

Reading "My Reading Life"

Author Pat Conroy

I first became acquainted with Pat Conroy's work when, as a freshman in college, I watched The Great Santini as a requirement for one of my honors courses. This film struck me as both sad and hopeful. Since then, I have seen a couple of other movies adapted from his novels.

A few years ago, I read his My Losing Season, a memoir of his final season as a basketball player at The Citadel, 1966-1967. Conroy's story is essentially that of a mediocre college-level athlete coming to grips with the fact that he cannot excel at the thing he loves the most. This is a feeling I can appreciate.

The previous paragraph comes from a short review of My Losing Season written only a few weeks before I began writing this blog. That page gives summaries of several books I had read and enjoyed in the preceding months. (It also serves as an indication of how eager I was to write for a wider audience.)

This break I am reading Conroy's latest book, My Reading Life. It is about his love affair with words and writing, and with the people who brought him into contact with either. I stand by something I wrote in that earlier review: "Conroy is prone to overwrought prose and to hyperbole, but he's a good story teller." And I do love his stories.

I also value many of the things he values in life. In a moving chapter on the high school English teacher who became his first positive male role model and a lifelong friend and confidant, Conroy writes:

If there is more important work than teaching, I hope to learn about it before I die.

Then, in a chapter on words, he says:

Writing is the only way I have to explain my own life to myself.

Even more than usual, teaching and writing, whether prose or program, are very much on my mind these days. Reading My Reading Life is a great way to end my year.


Posted by Eugene Wallingford | Permalink | Categories: General

December 20, 2010 3:32 PM

From Occasionally Great to Consistently Good

Steve Martin's memoir, "Born Standing Up", tells the story of Martin's career as a stand-up comedian, from working shops at Disneyland to his peak as the biggest-selling concert comic ever. I like hearing people who have achieved some level of success talk about the process.

This was my favorite passage in the book:

The consistent work enhanced my act. I learned a lesson: It was easy to be great. Every entertainer has a night when everything is clicking. These nights are accidental and statistical: Like the lucky cards in poker, you can count on them occurring over time. What was hard was to be good, consistently good, night after night, no matter what the abominable circumstances.

"Accidental greatness" -- I love that phrase. We all like to talk about excellence and greatness, but Martin found that occasional greatness was inevitable -- a statistical certainty, even. If you play long enough, you are bound to win every now and then. Those wines are not achievement of performance so much as achievements of being there. It's like players and coaches in athletics who break records for the most X in their sport. "That just means I've been around a long time," they say.

The way to stick around a long time, as Martin was able to do, is to be consistently good. That's how Martin was able to be present when lightning struck and he became the hottest comic in the world for a few years. It's how guys like Don Sutton won 300+ games in the major leagues: by being good enough for a long time.

Notice the key ingredients Martin discovered for becoming consistently good: consistent work; practice, practice, practice, and more practice; continuous feedback from audiences into his material and his act.

We can't control the lightning strikes of unexpected, extended celebrity or even those nights when everything clicks and we achieve a fleeting moment of greatness. As good as those feel, they won't sustain us. Consistent work, reflective practice, and small, continuous improvements are things we can control. They are all things that any of us can do, whether we are comics, programmers, runners, or teachers.


Posted by Eugene Wallingford | Permalink | Categories: General, Running, Software Development, Teaching and Learning

December 01, 2010 3:45 PM

"I Just Need a Programmer"

As head of the Department of Computer Science at my university, I often receive e-mail and phone calls from people with The Next Great Idea. The phone calls can be quite entertaining! The caller is an eager entrepreneur, drunk on their idea to revolutionize the web, to replace Google, to top Facebook, or to change the face of business as we know it. Sometimes the caller is a person out in the community; other times the caller is a university student in our entrepreneurship program, often a business major. The young callers project an enthusiasm that is almost infectious. They want to change the world, and they want me to help them!

They just need a programmer.

Someone has to take their idea and turn it into PHP, SQL, HTML, CSS, Java, and Javascript. The entrepreneur knows just what he or she needs. Would I please find a CS major or two to join the project and do that?

Most of these projects never find CS students to work on them. There are lots of reasons. Students are busy with classes and life. Most CS students have jobs they like. Those jobs pay hard cash, if not a lot of it, which is more attractive to most students than the promise of uncertain wealth in the future. And the idea does not excite other people as much as it excites the entrepreneur, who created it and is on fire with its possibilities.

A few of the idea people who don't make connections with a CS student or other programmer contact me a second and third time, hoping to hear good news. The younger entrepreneurs can become disheartened. They seem to expect everyone to be as excited by their ideas as they are. (The optimism of youth!) I always hope they find someone to help them turn their ideas into reality. Doing that is exciting. It also can teach them a lot.

Of course, it never occurs to them that they themselves could learn how to program.

A while back, I tweeted something about receiving these calls. Andrei Savu responded with a pithy summary of the phenomenon I was seeing:

@wallingf it's sad that they see software developers as commodities. product = execution != original idea

As I wrote about at greater length in a recent entry, the value of a product comes from the combination of having an idea and executing the idea. Doing the former or having the ability to do the latter aren't worth much by themselves. You have to put the two together.

Many "idea people" tend to think most or all of the value inheres to having the idea. Programmers are a commodity, pulled off the shelf to clean up the details. It's just a small matter of programming, right?

On the other side, some programmers tend to think that most or all of the value inheres to executing the idea. But you can't execute what you don't have. That's what makes it possible for me and my buddy to sit around over General Tsao's chicken and commiserate about lost wealth. It's not really lost; we were never in its neighborhood. We were missing a vital ingredient. And there is no time machine or other mechanism for turning back the clock.

I still wish that some of the idea people had learned how to program, or were willing to learn, so that they could implement their ideas. Then they, too, could know the superhuman strength of watching ideas become tangible. Learning to program used to be an inevitable consequence of using computers. Sadly, that's no longer true. The inevitable consequence of using computers these days seems to be interacting with people we may or may not know well and watching videos.

Oh, and imagining that you have discovered The Next Great Thing, which will topple Google or Facebook. Occasionally, I have an urge to tell the entrepreneurs who call me that their ideas almost certainly won't change the world. But I don't, for at least two reasons. First, they didn't call to ask my opinion. Second, every once in a while a Microsoft or Google or Facebook comes along and does change the world. How am I to know which idea is that one in a gazillion that will? If my buddy and I could go back to 2000 and tell our younger and better-looking selves about Facebook, would those guys be foresightful enough to sit down and write it? I suspect not.

How can we know which idea is that one that will change the world? Write the program, work hard to turn it into what people need and want, and cross our fingers. Writing the program is the ingredient the idea people are missing. They are doing the right thing to seek it out. I wonder what it would be like if more people could implement their own ideas.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

November 22, 2010 2:18 PM

Tragedy and the Possibilities of Unlimited Storage

I've spent considerable time this morning cleaning out the folder on my desktop where I keep stuff. In one of the dozens of notes files I've created over the last year or so, I found this unattributed quote:

In 1961, the scholar and cultural critic George Steiner argued in a controversial book, "The Death of Tragedy", that theatrical tragedies had begun their steady decline with the rise of rationalism and the Enlightenment in the 17th century. The Greek notion of tragedy was premised on man's inability to control his fate in the face of capricious, often brutal gods. But the point of a tragedy was to dramatize man's ability to make choices whatever his uncontrollable end.

The emphasis was not on the death -- what the gods usually had in store -- but on what the hero died for: the state, love, his own dignity. Did he die nobly? Or with shame? For a worthy cause? Or pitiless self-interest? Tragedies, then, were ultimately "an investigation into the possibilities of human freedom", as Walter Kerr put it in "Tragedy and Comedy" (1967).

I like this passage now as much as I must have when I typed it up from some book I was reading. (I'm surprised I did not write down the source!) It reminds me that I face and make choices every day that reveal who I am. Indeed, the choices I make create who I am. That message feels especially important to me today.

And yes, I know there are better tools for keeping notes than dozens of text files thrown into nearly as many folders. I take notes using a couple of them as well. Sometimes I lack the self-discipline I need to lead an ordered life!


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

November 18, 2010 3:43 PM

The Will to Run, or Do Anything Else

In "How Do You Do It?", an article in the latest issue of Running Times about how to develop the intrinsic motivation to do crazy things like run every morning at 5:00 AM, ultrarunner Eric Grossman writes:

The will to run emerges gradually where we cultivate it. It requires humility -- we can't just decide spontaneously and make it happen. Yet we must hold ourselves accountable for anything about which we can say, "I could have done differently."

Cultivation, humility, patience, commitment, accountability -- all features of developing the habits I need to run on days I'd rather stay in bed. After a while, you do it, because that's what you do.

I think this paragraph is true of whatever habit of thinking and doing you are trying to develop, whether it's object-oriented programming, playing piano, or test-driven design.

~~~~

Eugene speaking at Tech Talk Cedar Valley, 2010/11/17

Or functional programming. Last night I gave a talk at Tech Talk Cedar Valley, a monthly meet-up of tech and software folks in the region. Many of these developers are coming to grips with a move from Java to Scala and are pedaling fast to add functional programming style to their repertoires. I was asked to talk about some of the basic ideas of functional programming. My talk was called "Don't Drive on the Railroad Tracks", referring to Bill Murray's iconic character in the movie Groundhog Day. After hundreds or thousands of days reliving February 2 from the same starting point, Phil Connors finally comes to understand the great power of living in a world without side effects. I hope that my talk can help software developers in the Cedar Valley reach that state of mind sooner than a few years from now.

If you are interested, check out the slides of the talk (also available on SlideShare) and the code, in both Ruby and Scheme, that I used to illustrate some of the ideas.
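
The one-idea version of the talk, in code: the same computation written twice in Ruby, once with a side effect and once without. (This tiny example is mine, not from the slides.)

    # With a side effect: each call changes hidden state, so the same
    # call can give a different answer each time -- Phil Connors's problem.
    $balance = 100
    def withdraw!(amount)
      $balance -= amount
    end

    # Without side effects: the function only maps inputs to outputs, so
    # the same arguments always produce the same result.
    def withdraw(balance, amount)
      balance - amount
    end

    withdraw!(30)       # $balance is now 70; call it again and get 40
    withdraw(100, 30)   # => 70 today, tomorrow, and every February 2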


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

November 06, 2010 11:04 AM

Outer Space, Iowa, and Unexpected Connections

Poet Marvin Bell tells this story in his collection, A Marvin Bell Reader:

[In Star Trek, Captain] Kirk is eating pizza in a joint in San Francisco with a woman whose help he will need, when he decides to fess up about who he is and where he has come from. The camera circles the room, then homes in on Kirk and his companion as she bursts out with, "You mean you're from outer space?"

"No," says Kirk, "I'm from Iowa. I just work in outer space."

My life is in some ways a complement to Kirk's. I often feel like I'm from outer space. I just work in Iowa.

I briefly met Bell, the former poet laureate of Iowa, when he gave the keynote address at a camouflage conference a few years ago. I gave a talk there on steganography, which is a form of digital camouflage. While Bell's quote comes from his own book, I found it in the front matter of Cook Book: Gertrude Stein, William Cook, and Le Corbusier, a delightful little book by Roy Behrens. Longtime readers of this blog will recognize Behrens's name; his writing has led me to many interesting books and ideas. I have also written about Behrens's own scholarly work several times, most notably Teaching as Subversive Inactivity, Feats of Association, and Reviewing a Career Studying Camouflage. I am fortunate to have Roy as friend and colleague, right here in Cedar Falls, Iowa.

Cook Book tells the story of William Cook, a little known Iowa artist who left America for Europe as a young man and became a longtime friend of the celebrated American writer and expatriate Gertrude Stein. He later used his inheritance to hire a young, unknown Le Corbusier to design his new home on the outskirts of Paris. Behrens grew up in Cook's hometown of Independence, Iowa. If you would like a taste of the story before reading the book, read this short essay.

I am no longer surprised to learn of surprising connections among people, well-known and unknown alike. Yet I am always surprised at the particular connections that exist. A forgotten Iowa artist was a dear friend of one of America's most famous writers of the early 1900s? He commissioned one of the pioneers of modern architecture before anyone had heard of him? Pope Pius makes a small contribution to the expatriate Iowan's legacy?

Busy, busy, busy.

October 21, 2010 8:50 AM

Strange Loop Redux

StrangeLoop 2010 logo

I am back home from St. Louis and Des Moines, up to my neck in regular life. I recorded some of my thoughts and experiences from Strange Loop in a set of entries here.

Unlike most of the academic conferences I attend, Strange Loop was not held in a convention center or in a massive conference hotel. The primary venue for the conference was the Pageant Theater, a concert nightclub in the Delmar Loop:

The Pageant Theater

This setting gave the conference's keynotes something of an edgy feel. The main conference lodging was the boutique Moonrise Hotel a couple of doors down:

The Moonrise Hotel

Conference sessions were also held in the Moonrise and in the Regional Arts Commission building across the street. The meeting rooms in the Moonrise and the RAC were ordinary, but I liked being in human-scale buildings that had some life to them. It was a refreshing change from my usual conference venues.

It's hard to summarize the conference in only a few words, other than perhaps to say, "Two thumbs up!" I do think, though, that one of the subliminal messages in Guy Steele's keynote is also a subliminal message of the conference. Steele talked for half an hour about a couple of his old programs and all of the machinations he went through twenty-five or forty years ago to make them run in the limited computing environments of those days. As he reconstructed the laborious effort that went into those programs, the viewer couldn't help but feel that the joke was on him. He was programming in the Stone Age!

But then he gets to the meat of his talk and shows us that how we program now is the relic of a passing age. For all the advances we have made, we still write code that transitions from state to state to state, one command at a time, just like our cave-dwelling ancestors in the 1950s.

It turns out that the joke is on us.
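
One small, made-up Ruby illustration of the point (mine, not Steele's):

    # One command at a time: the accumulator forces a strict sequential order.
    total = 0
    [3, 1, 4, 1, 5, 9].each { |n| total += n }

    # The same computation as a reduction. Because addition is associative,
    # a runtime is free to regroup the work -- say, to sum the two halves
    # of the list at the same time.
    total = [3, 1, 4, 1, 5, 9].reduce(:+)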

The talks and conversations at Strange Loop were evidence that one relatively small group of programmers in the midwestern US are ready to move into the future.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Software Development

October 13, 2010 10:20 PM

Serendipitous Connections

I'm in St. Louis now for Strange Loop, looking at the program and planning my schedule for the next two days. The abundant options nearly paralyze me... There are so many things I don't know, and so many chances to learn. But there are a limited number of time slots in any day, so the chances overlap.

I had planned to check in at the conference and then eat at The Pasta House, a local pasta chain that my family discovered when we were here in March. (I am carbo loading for the second half of my travels this week.) But after I got to the motel, I was tired from the drive and did not relish getting into my car to battle the traffic again. So I walked down the block to Bartolino's Osteria, a more upscale Italian restaurant. I was not disappointed; the petto di pollo modiga was exquisite. I'll hit the Pasta House tomorrow.

When I visit big cities, I immediately confront the fact that I am, or have become, a small-town guy. Evening traffic in St. Louis overwhelms my senses and saps my energy. I enjoy conferences and vacations in big cities, but when they end I am happy to return home.

That said, I understand some of the advantages to be found in large cities. Over the last few weeks, many people have posted this YouTube video of Steven Johnson introducing his book, "Where Good Ideas Come From". Megan McArdle's review of the book points out one of the advantages that rises out of all that traffic: lots of people mean lots of interactions:

... the adjacent possible explains why cities foster much more innovation than small towns: Cities abound with serendipitous connections. Industries, he says, may tend to cluster for the same reason. A lone company in the middle of nowhere has only the mental resources of its employees to fall back on. When there are hundreds of companies around, with workers more likely to change jobs, ideas can cross-fertilize.

This is one of the most powerful motivations for companies and state and local governments in Iowa to work together to grow a more robust IT industry. Much of the focus has been on Des Moines, the state capital and easily the largest metro area in the state, and on the Cedar Rapids/Iowa City corridor, which connects our second largest metro area with our biggest research university. Those areas are both home to our biggest IT companies and also home to a lot of people.

The best IT companies and divisions in those regions are already quite strong, but they will be made stronger by more competition, because that competition will bring more, and more diverse, people into the mix. These people will have more, and more diverse, ideas, and the larger system will create more opportunities for these ideas to bounce off one another. Occasionally, they'll conjoin to make something special.

The challenge of the adjacent possible makes me even more impressed by start-ups in my small town. People like Wade Arnold at T8 Webware are working hard to build creative programming and design shops in a city without many people. They rely on creating their own connections, at places like Strange Loop all across the country. In many ways, Wade has to think of his company as an incubator for ideas and a cultivator of people. Whereas companies in Des Moines can seek a middle ground -- large enough to support the adjacent possible but small enough to be comfortable -- companies like T8 must create the adjacent possible in any way they can.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

September 06, 2010 12:45 PM

Empiricism, Bias, and Confidence

This morning, Mike Feathers tweeted a link to an old article by Donald Norman, Simplicity Is Highly Overrated, and mentioned that he disagrees with Norman. Many software folks disagreed with Norman when he first wrote the piece, too. We in software, often being both designers and users, have learned to appreciate simplicity, both functionally and aesthetically. And, as Kent Beck suggested, products such as the iPod are evidence contrary to the claim that people prefer the appearance of complexity. Norman offered examples in support of his position, too, of course, and claimed that he has observed them over many years and in many cultures.

This seems like a really interesting area for study. Do people really prefer the appearance of complexity as a proxy for functionality? Is the iPod an exception, and if so why? Are software developers different from the more general population when it comes to matters of function, simplicity, and use?

When answering these questions, I am leery of relying on self-inspection and anecdote. Norman said it nicely in the addendum to his article:

Logic and reason, I have to keep explaining, are wonderful virtues, but they are irrelevant in describing human behavior.

He calls this the Engineer's Fallacy. I'm glad Norman also mentions economists, because much of the economic theory that drives our world was created from deep analytic thought, often well-intentioned but usually without much evidence to support it, if any at all. Many economists themselves recognize this problem, as in this familiar quote:

If economists wished to study the horse, they wouldn't go and look at horses. They'd sit in their studies and say to themselves, "What would I do if I were a horse?"

This is a human affliction, not just a weakness of engineers and economists. Many academics accepted the Sapir-Whorf Hypothesis, which conjectures that our language restricts how we think, despite little empirical support for a claim so strong. The hypothesis affected work in disciplines such as psychology, anthropology, and education, as well as linguistics itself. Fortunately, others subjected the hypothesis to study and found it lacking.

For a while, it was fashionable to dismiss Sapir-Whorf. Now, as a recent New York Times article reports, researchers have begun to demonstrate subtler and more interesting ways in which the language we speak shapes how we think. The new theories follow from empirical data. I feel a lot more confident believing the new theories, because we have derived them from more reliable data than we ever had for the older, stronger claim.

(If you read the Times article, you will see that Whorf was an engineer, so maybe the tendency to develop theories from logical analysis and sparse data really is more prominent in those of us trained in the creation of artifacts to solve problems...)

We see the same tendencies in software design. One of the things I find attractive about the agile world is its predisposition toward empiricism. Just yesterday Jason Gorman posted a great example, the Reused Abstractions Principle. For me, software abstractions that we discover empirically have a head start toward confident believability over the ones we design aforethought. We have seen them instantiated in actual code. Even more, we have seen them twice, so they have already been reused -- in advance of creating the abstraction.
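
A minimal sketch of the principle in Ruby (my example, not Gorman's): write the concrete cases first, and factor out the abstraction only after it has appeared twice.

    # Two concrete computations, written first, in actual code:
    def average_grade(grades)
      grades.sum / grades.length.to_f
    end

    def average_temperature(readings)
      readings.sum / readings.length.to_f
    end

    # Only after the pattern shows up twice do we extract it. The
    # abstraction is "reused" on the day it is born.
    def average(numbers)
      numbers.sum / numbers.length.to_f
    end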

Given how frequently even domain experts are wrong in their forecasts of the future and their theorizing about the world, how frequently we are all betrayed by our biases and other subconscious tendencies, I prefer when we have reliable data to support claims about human preferences and human behavior. A flip but too often true way to say "design aforethought" is "make up".


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

August 24, 2010 4:30 PM

Dreaming, Doing, Perl, and Language Translation

Today, I quoted Larry Wall's 2000 Atlanta Linux Showcase Talk in the first day of my compilers course. In that talk, he gives a great example of using a decompiler to port code -- in this case, from Perl 5 to Perl 6. While re-reading the talk, I remembered something that struck me as wrong when I read it the first time:

["If you can dream it, you can do it"--Walt Disney]

"If you can dream it, you can do it"--Walt Disney. Now this is actually false (massive laughter). I think Walt was confused between necessary and sufficient conditions. If you *don't* dream it, you can't do it; that is certainly accurate.

I don't think so. I think this is false, too. (Laugh now.)

It is possible to do things you don't dream of doing first. You certainly have to be open to doing things. Sometimes we dream something, set out to do it, and end up doing something else. The history of science and engineering is full of accidents and incidental results.

I once was tempted to say, "If you don't start it, you can't do it; that is certainly accurate." But I'm not sure that's true either, because of the first "it". These days, I'm more inclined to say that if you don't start doing something, you probably won't do anything.

Back to Day 1 of the compilers course: I do love this course. The Perl quote in my lecture notes is but one element in a campaign to convince my students that this isn't just a compilers course. The value in the course material and in the project itself goes far beyond the creation of an old-style source language-to-machine language translator. Decompilers, refactoring browsers, cross-compilers, preprocessors, interpreters, and translators for all sorts of domain-specific languages -- a compilers course will help you learn about all of these tools, both how they work and how to build them. Besides, there aren't many better ways to consolidate your understanding of the breadth of computer science than to build a compiler.
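
As a tiny illustration of the translator view -- a toy of my own, not anything from the course -- here is a Python sketch that compiles a prefix expression tree into instructions for a hypothetical stack machine:

    def translate(expr):
        """Translate an expression tree into stack-machine instructions."""
        if isinstance(expr, (int, float)):    # leaf: a numeric literal
            return [("PUSH", expr)]
        op, left, right = expr                # interior node: (op, lhs, rhs)
        return translate(left) + translate(right) + [(op,)]

    # (+ 2 (* 3 4)), written as a nested tuple:
    print(translate(("ADD", 2, ("MUL", 3, 4))))
    # [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('MUL',), ('ADD',)]

Every tool in the list above is, at heart, a variation on this pattern: walk one representation of a program and emit another.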

The official title of my department's course is "Translation of Programming Languages". Back in 1994, before the rebirth of mainstream language experimentation and the growth of interest in scripting languages and domain-specific languages, this seemed like a daring step. These days, the title seems much more fitting than "Compiler Construction". Perhaps my friend and former colleague Mahmoud Pegah and I had a rare moment of foresight. More likely, Mahmoud had the insight, and I was simply wise enough to follow.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

August 18, 2010 5:04 PM

You May Be in the Software Business

In the category programming for all, Paul Graham's latest essay explains his ideas about What Happened to Yahoo. (Like the classic Marvin Gaye album and song, there is no question mark.) Most people may not care about programming, but they ought to care about programs. More and more, the success of an organization depends on software.

Which companies are "in the software business" in this respect? ... The answer is: any company that needs to have good software.

If this was such a blind spot for an Internet juggernaut like Yahoo, imagine how big a surprise it must be for everyone else.

If you employ programmers, you may be tempted to stay within your comfort zone and treat your tech group just like the rest of the organization. That may not work very well. Programmers are a different breed, especially great programmers. And if you are in the software business, you want good programmers.

Hacker culture often seems kind of irresponsible. ... But there are worse things than seeming irresponsible. Losing, for example.

Again: If this was such a blind spot for an Internet juggernaut like Yahoo, imagine how big an adjustment it would be for everyone else.

I'm in a day-long retreat with my fellow department heads in the arts and sciences, and it's surprising how often software has come up in our discussions. This is especially true in recruitment and external engagement, where consistent communication is so important. It turns out the university is in the software business. Unfortunately, the university doesn't quite get that.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

August 10, 2010 3:36 PM

In Praise of Attacking Hard Problems

With the analysis of Deolalikar's P != NP paper now under way in earnest, I am reminded of a great post last fall by Lance Fortnow, The Humbling Power of P v NP. Why should every theorist try to prove P = NP and P != NP?

Not because you will succeed but because you will fail. No matter what teeth you sink into P vs NP, the problem will bite back. Think you solved it? Even better. Not because you did but because when you truly understand why your proof has failed you will have achieved enlightenment.

You might even succeed, though I'm not sure if the person making the attempt achieves the same kind of enlightenment in that case.

Even if Deolalikar's proof holds up, Fortnow's short essay will still be valuable and true.

We'll just use a different problem as our standard.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

August 09, 2010 3:43 PM

Fail Early, Fail Often... But Win in the End

I seem to be running across the fail early, fail often meme a lot lately. First, in an interview on being wrong, Peter Norvig was asked how Google builds tolerance for the inevitable occasional public failures of its innovations "into a public corporation that's accountable to its bottom line". He responded:

We do it by trying to fail faster and smaller.

One of the ways they do this is by keeping iterations short and teams small.

Then this passage from Seth Godin's recent e-book, Insubordinate, jumped out as another great example:

As a result of David [Seuss]'s bias for shipping, we failed a lot. Products got returned. Commodore 64 computers would groan in pain as they tried to run software that was a little too advanced for their puny brains. It didn't matter, because we were running so fast that the successes supported us far more than the failures slowed us down.

In a rapidly changing environment, not to change is often a bigger risk than to change. In an environment most people don't understand well, in which information is unavailable and unevenly distributed, not to change is often a bigger risk than to change.

However, it's important not to fetishize failure, as some people seem to do. Dave Winer reminds us, embracing failure is a good way to fail. Sometimes, you have to look at what failure will mean and muster a level of determination that denies failure in order to succeed.

This all seems so contradictory... but it's not. As we humans often do, we create rules for behavior that are underspecified in terms of context and the problem being solved. There are a lot of trade-offs in the mix when we talk about success and failure. For example, we need to distinguish between failure in the large and failure in the small. When an agile developer is taking small steps, she can afford to fail on a few -- especially if the failure teaches her something about how to succeed more reliably in the future. The new information gained is worth the cost of the loss.

In the passage from Godin, successes happened, too, not only losses, and the wins more than offset the losses. In that context, it seems that the advice is not about failure so much as getting over fear of failure. When we fear failure so much that we do not act, we deprive ourselves not only of losses but also of wins. Not failing gets in the way of succeeding, of learning and growing.

Winer was talking about something different, what I'm calling in my mind "ultimate failure": sending the employees home, shutting the doors, and turning the lights off for good. That is different than the "transitory failures" Godin was talking about, the sort of failures we experience when we learn in a dynamic, poorly understood environment. Still, Winer might not take any comfort in that idea. His company was at the brink, and only making the product work and sell was good enough to pull it back. At that moment, he probably wasn't interested in thinking about what he could learn from his next failure.

Sometimes, even the small failures can close doors, at least for a while. That's why so many entrepreneurs and commentators on start-up companies encourage people to fail early, before too many resources have been sunk into the venture, before too many people have been drawn into the realm affected by success or failure -- when a failure means that the entrepreneur simply must start over with her primary assets: her energy, determination, and effort.

When I was decorating my first college dorm room, I hung three small quotes on the wall over the desk. One of them comes to mind now. It was from Don Shula, the head coach of my favorite pro football team, the Miami Dolphins:

Failure isn't fatal, and success isn't final.

This seemed like a good mantra to keep in mind as I embarked on a journey into the unknown. It has served me well for many years now, including my time as a programmer and a teacher.


Posted by Eugene Wallingford | Permalink | Categories: General

July 30, 2010 2:36 PM

Notes on Entry Past

I've been killing loose minutes this week by going through my stuff folder, moving files I want to keep to permanent homes and pitching files I have lost interest in or won't have time for anytime soon. As I sometimes do, I've run across quotes I stashed away for use in blog entries. Alas, some of the quotes would have been useful in pieces I wrote recently, but now they aren't likely to find a home any time soon.

I recall reading this quote from A. E. Stallings in a short essay by Tim O'Reilly on the value of a classical education:

[The ancients] showed me that technique was not the enemy of urgency, but the instrument.

But it would have been perfect in Form Matters. Improving your form doesn't slow you down in the long run, it makes you faster.

I read this in an entry by Philip Windley about how he had averted a potential disaster:

Automate everything.

... and this in a response to that entry by Gordon Weakliem:

You are not a machine, so stop repeating yourself.

Of course, I was immediately reminded of my own disaster, unaverted. As I look back at Weakliem's article, it is interesting to see how programming principles such as "don't repeat yourself" and practices such as pair programming show up in different contexts outside of software.

Finally, I found this snippet, a tweet by @KentBeck:

as your audience grows, the cost of failure rises. put positively, it'll never be cheaper to fail than today.

This would have been a great part of any number of entries about my agile software development course in May and my software engineering course last fall. Kent is a master of crystallizing ideas into neat little catchphrases I never forget. Perhaps this one would have stuck with a few students as they tried to move toward shorter and shorter iterations of the test-code-refactor cycles.


Posted by Eugene Wallingford | Permalink | Categories: General

July 06, 2010 1:03 PM

Vindicated

By H. G. Wells, no less:

"You have turned your back on common men, on their elementary needs and their restricted time and intelligence," H.G. Wells complained to Joyce after reading "Finnegans Wake." That didn't faze him. "The demand that I make of my reader," Joyce said, "is that he should devote his whole life to reading my works." To which the obvious retort is: Life's too short.

This passage comes from an article on the complexity of modern art. Some modern art works for me, but I long ago lost interest in writers who complicate their work seemingly with the goal of proving to me how smart they are. Some of my friends love such writers and look at me in the same way they look at children and puppies. I must admit, with no small measure of guilt, that I have occasionally wondered how much their interest in these writers rested in a hidden desire to show how smart they are.

I've mentioned at least a couple of times that I prefer small books to large, and on that criterion alone I could bypass "Ulysses" and "Finnegans Wake". Joyce compounds their length with sentence structures and made-up words that numb my small brain and squander my limited time. Their complexity and deeply-woven literary allusions may well reward the reader who devotes his life to studying Joyce. But for me, life is indeed too short.

I must admit that I very much enjoyed Joyce's comparatively svelte "Portrait Of The Artist As A Young Man". I also enjoyed H. G. Wells's science fiction, though as literature it never rises anywhere near the level of "Portrait".


Posted by Eugene Wallingford | Permalink | Categories: General

June 24, 2010 8:04 AM

Remembrance and Sustenance

All those things for which
we have no words are lost.
-- Annie Dillard, "Total Eclipse"

My family and I spent eight days on the road last week, with a couple of days in the Jamestown/Williamsburg area of Virginia and then a few days in Washington, D.C. I'd never spent more than a couple of hours in my nation's capital and enjoyed seeing the classic buildings in which our leaders work and the many monuments to past leaders and conflicts.

The Korean War Veterans Memorial in Washington, DC

The Korean War Veterans Memorial caught me by surprise. No one ever talks about this memorial, much like the war itself. I had no idea what it looked like, no expectations. When we came upon it from the rear, I became curious. Standing amid the soldiers trudging through a field, I was unnerved. They look over their shoulders, or they make eye contact with one another, or they stare ahead, blankly. This is no sterile monument of white limestone. It is alive, even as it reminds us of men who no longer are. When we reached the front of the memorial, we saw a wreath with a note of thanks from the Korean people. It brought tears to my eyes, and to my daughter's.

As touched as I was by the National Mall, most of my memories of the trip are of artwork we saw in the several galleries and gardens. I came to remember how much I like the paintings of Monet. This time, it was his "The Seine at Giverny" that gave me the most joy. I learned how much I enjoy the work of Camille Pissarro, another of the French impressionists who redefined what a painting could be and say in the 1800s. I even saw a few abstract pieces by Josef Albers, whom I quoted not long ago. That quote came back to me as I walked amid the creations of men, oblivious to computer programming and the debits and credits of department budgets. What happens happens mostly without me. Indeed.

One of Hiroshi Sugimoto's seascape photographs

I left Washington with a new inspiration, Hiroshi Sugimoto. My daughter and I entered one of the gallery rooms to find a bunch of canvasses filled with blacks, grays, and whites. "More modern nothingness," I thought at first. As we absorbed the images, though, one of us said out loud, "These look like pictures of the ocean. See here...?" We looked closer and saw different times of day, different clouds and fog, horizons crisp and horizons that were no more than imperceptible points on a continuum from dark ocean to light sky. Only upon leaving the room did we learn that these images were in fact seascapes. "This is modern art that works for me," said my daughter. I nodded agreement.

Sugimoto's seascapes are only one element of his work. I have many more of his images to discover.

I did not get through my eight days away without any thoughts of computer science. In the National Gallery of Art, we ran across this piece by Edward Ruscha, featured here:

Edward Ruscha's 'Lisp'

I vaguely recall seeing this image many years ago in a blog post at Lemonodor, but this time it grabbed me. My inner programmer was probably feeling the itch of a few days away from the keyboard. Perhaps Ruscha has his own inner programmer. When I did a Google Image search to find the link above, I found that he had also created works from the words 'self' and 'ruby'. We programmers can make our own art using Lisp, Self, and Ruby. Our art, like that of Monet, Pissarro, Sugimoto, and Ruscha, sustains us.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

April 24, 2010 1:25 PM

Futility and Ditch Digging

First, Chuck Hoffman tweeted, The life of a code monkey is frequently depressingly futile.

I had had a long week, filled with the sort of activities that can make a programmer pine for days as a code monkey, and I replied, Life in many roles is frequently depressingly futile. Thoreau was right.

The ever-timely Brian Foote reminded me:

Sometimes utility feels like futility, but someone's gotta do it.

Thanks, Brian. I needed to hear that.

I remember hearing an interview with musician John Mellencamp many years ago in which he talked about making the movie Falling from Grace. The interviewer was waxing on about the creative process and how different movies were from making records, and Mellencamp said something to the effect of, "A lot of it is just ditch digging: one more shovel of dirt." Mellencamp knew about that sort of manual labor because he had done it, digging ditches and stringing wire for a telephone company before making it as an artist. And he's right: an awful lot of every kind of work is moving one more shovel of dirt. It's not romantic, but it gets the job done.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

April 20, 2010 9:58 PM

Computer Code in the Legal Code

You may have heard about a recent SEC proposal that would require issuers of asset-backed securities to submit "a computer program that gives effect to the flow of funds". What a wonderful idea!

I have written a lot in this blog about programming as a new medium, a way to express processes and the entities that participate in them. The domain of banking and finance is a natural place for us to see programming enter into the vernacular as the medium for describing problems and solutions more precisely. Back in the 1990s, Smalltalk had a brief moment in the sunshine as the language of choice used by financial houses in the creation of powerful, short-lived models of market behavior. Using a program to describe models gave the investors and arbitrageurs not only a more precise description of the model but also a live description, one they could execute against live data, test and tinker with, and use as an active guide for decision-making.

We all know about the role played by computer models in the banking crisis over the last few years, but that is an indictment of how the programs were used and interpreted. The use of programs itself was and is the right way to try to understand a complex system of interacting, independent agents manipulating complex instruments. (Perhaps we should re-consider whether to traffic in instruments so complex that they cannot be understood without executing a complex program. But that is a conversation for another blog entry, or maybe a different blog altogether!)

What is the alternative to using a program to describe the flow of funds engendered by a particular asset-backed security? We could describe these processes using text in a natural language such as English. Natural language is supremely expressive but fraught with ambiguity and imprecision. Text descriptions rely on the human reader to do most of the work figuring out what they mean. They are also prone to gratuitous complexity, which can be used to mislead unwary readers.

We could also describe these processes using diagrams, such as a flow chart. Such diagrams can be much more precise than text, but they still rely on the reader to "execute" them as she reads. As the diagrams grow more complex, it becomes more and more difficult for the reader to interpret them correctly.

A program has the virtue of being both precise and executable. The syntax and semantics of a programming language are (or at least can be) well-defined, so that a canonical interpreter can execute any program written in the language and determine its actual value. This makes describing something like the flow of funds created by a particular asset-backed security as precise and accurate as possible. A program can be gratuitously complex, which is a danger. Yet programmers have at their disposal tools for removing gratuitous complexity and focusing on the essence of a program, more so than we have for manipulating text.

The behavior of the model can still be complex and uncertain, because it depends on the complexity and uncertainty of the environment in which it operates. Our financial markets and the economic world in which asset-backed securities live are enormously complex! But at least we have a precise description of the process being proposed.
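
To see what such a description might look like, here is a minimal sketch of a sequential payment waterfall -- my own invention, vastly simpler than any real security -- written in Python, the language the proposal happens to suggest:

    def waterfall(collections, tranches):
        """Pay each tranche in order of seniority until the cash runs out."""
        payments = {}
        remaining = collections
        for name, amount_due in tranches:
            paid = min(remaining, amount_due)
            payments[name] = paid
            remaining -= paid
        return payments, remaining   # anything left goes to the residual holder

    # A month in which $900 comes in against $1,000 of scheduled payments:
    tranches = [("senior", 600), ("mezzanine", 300), ("junior", 100)]
    print(waterfall(900, tranches))
    # ({'senior': 600, 'mezzanine': 300, 'junior': 0}, 0)

A dozen lines of code say precisely who gets paid, in what order, and what happens when the money runs short -- questions a prose prospectus can leave maddeningly open.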

As one commentator writes:

When provisions become complex beyond a point, computer code is actually the simplest way to describe them... The SEC does not say so, but it would be useful to add that if there is a conflict between the software and textual description, the software should prevail.

Using a computer program in this way is spot on.

After taking this step, there are still a couple of important issues yet to decide. One is: What programming language should we use? A lot of CS people are talking about the proposal's choice of Python as the required language. I have grown to like Python quite a bit for its relative clarity and simplicity, but I am not prepared to say that it is the right choice for programs that are in effect "legal code". I'll let people who understand programming language semantics better than I make technical recommendations on the choice of language. My guess is that a language with a simpler, more precisely defined semantics would work better for this purpose. I am, of course, partial to Scheme, but a number of functional languages would likely do quite nicely.

Fortunately, the SEC proposal invites comments, so academic and industry computer scientists have an opportunity to argue for a better language. (Computer programmers seem to like nothing more than a good argument about language, even writing programs in their own favorite!)

The most interesting point here, though, is not the particular language suggested but that the proposers suggest any programming language at all. They recognize how much more effectively a computer program can describe a process than text or diagrams. This is a triumph in itself.

Other people are reminding us that mortgage-backed CDOs at the root of the recent financial meltdown were valued by computer simulations, too. This is where the proposal's suggestion that the code be implemented in open-source software shines. By making the source code openly available, everyone has the opportunity and ability to understand what the models do, to question assumptions, and even to call the authors on the code's correctness or even complexity. The open source model has worked well in the growth of so much of our current software infrastructure, including the simple in concept but complex in scale Internet. Having the code for financial models be open brings to bear a social mechanism for overseeing the program's use and evolution that is essential in a market that should be free and transparent.

This is also part of the argument for a certain set of languages as candidates for the code. If the language standard and implementations of interpreters are open and subject to the same communal forces as the software, this will lend further credibility to the processes and models.

I spend a lot of time talking about code in this blog. This is perhaps the first time I have talked about legal code -- and even still I get to talk about good old computer code. It's good to see programs recognized for what they are and can be.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

April 08, 2010 8:56 PM

Baseball, Graphics, Patterns, and Simplicity

I love these graphs. If you are a baseball fan or a lover of graphics, you will, too. Baseball is the most numbers-friendly of all sports, with a plethora of statistics that can be extracted easily from its mano-a-mano confrontations between pitchers and batters, catchers and baserunners, American League and National. British fan Craig Robinson goes a step beyond the obvious to create beautiful, information-packed graphics that reveal truths both quirky and pedestrian.

Some of the graphs are more complex than others. Baseball Heaven uses concentric rings, 30-degree wedges, and three colors to show that the baseball gods smile their brightest on the state of Arizona. I like some of these complex graphs, but I must admit that sometimes they seem like more work than they should be. Maybe I'm not visually-oriented in the right way.

I notice that many of my favorites have something in common. Consider this chart showing the intersection of the game's greatest home run hitters and the steroid era:

home run hitters and performance-enhancing drugs

It doesn't take much time looking at this graph for a baseball fanatic to sigh with regret and hope that Ken Griffey, Jr., has played clean. (I think he has.) A simple graphic, a poignant bit of information.

Next, take a look at this graph that answers the question, how does baseball's winningest team fare in the World Series?:

win/loss records and World Series performance

This one is more complex than the previous one, but the idea is simple: sort teams by win/loss record, identify the playoff and World Series teams by color, and make the World Series winners the main axis of the graph. Who would have thought that the playoff team with the worst record would win the World Series almost as often as the team with the best record?

Finally, take a look at what is my current favorite from the site, an analysis of interleague play's winners and losers.

winners and losers in interleague play

I love this one not for its information but for its stark beauty. Two grids with square and rectangular cells, two primary colors, and two shades of each are all we need to see that the two leagues have played pretty evenly overall, with the American League dominating in recent years, and that the AL's big guns -- the Yankees, Red Sox, and Angels -- are big winners against their NL counterparts. This graph is so pretty, I want to put a poster-sized print of it on my wall, just so that I can look at it every day.

The common theme I see among these and my other favorite graphs is that they are variations of the unpretentious bar chart. No arcs, line charts with doubly-labeled axes, or 3D effects required. Simple colors, simple labels, and simple bars illuminating magnitudes of interest.
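
Part of the charm is how little machinery such a chart needs. Here is a sketch using matplotlib, with invented numbers rather than Robinson's actual data:

    import matplotlib.pyplot as plt

    # Made-up interleague win totals, for illustration only
    teams = ["Yankees", "Red Sox", "Angels", "Cubs"]
    wins = [120, 115, 108, 85]

    plt.bar(teams, wins, color="steelblue")   # one color, simple bars
    plt.ylabel("Interleague wins")
    plt.title("The humble bar chart")
    plt.show()

The artistry is not in the machinery; it is in choosing what to count and how to arrange it.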

Why am I drawn to these basic charts? Am I too simple to appreciate the more complex forms, the more complex interweaving of dimensions and data?

I notice this as a common theme across domains. I like simple patterns. I am most impressed when writers and artists employ creative means to breathe life into unpretentious forms. It is far more creative to use a simple bar chart in a nifty or unexpected way than it is to use spirals, swirls of color, concentric closed figures, or multiple interlocking axes and data sources. To take a relationship, however complex, and boil its meaning down to the simplest of forms -- taken with a twist, perhaps, but unmistakably straightforward nonetheless -- that is artistry.

I find that I have similar tastes in programming. The simplest patterns learned by novice programmers captivate me: a guarded action or linear search; structural recursion over a BNF definition or a mutual recursion over two; a humble strategy object or factory method. Simple tools used well, adapted to the unique circumstances of a problem, exposing just the right amount of detail and shielding us from all that doesn't matter. A pattern used a million times never in the same way twice. My tastes are simple, but I can taste a wide range of flavors.
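
To show one of those patterns in miniature, here is a sketch of mutual recursion over two grammar rules, using a tiny expression grammar I made up for the occasion. The two functions call each other just as the two rules reference each other:

    # expr ::= term | term "+" expr
    # term ::= NUMBER | "(" expr ")"

    def parse_expr(tokens):
        value, rest = parse_term(tokens)
        if rest and rest[0] == "+":
            right, rest = parse_expr(rest[1:])
            return value + right, rest
        return value, rest

    def parse_term(tokens):
        if tokens[0] == "(":
            value, rest = parse_expr(tokens[1:])
            return value, rest[1:]             # skip the closing ")"
        return int(tokens[0]), tokens[1:]

    print(parse_expr(["2", "+", "(", "3", "+", "4", ")"])[0])   # 9

A dozen lines, a pattern as old as compilers, and still a small pleasure every time it parses.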

Now that I think about it, I think this theme explains a bit of what I love about baseball. It is a simple game, played on a simple form with simple equipment. Though its rules address numerous edge cases, at bottom they, too, are as simple as one can imagine: throw the ball, hit the ball, catch the ball, and run. Great creativity springs from these simple forms when they are constrained by simple rules. Maybe this is why baseball fans see their sport as an art form, and why people like Craig Robinson are driven to express its truths in art of their own.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

March 29, 2010 7:25 PM

This and That, Volume 2

[A transcript of the SIGCSE 2010 conference: Table of Contents]

Some more miscellaneous thoughts from a few days at the conference...

Deja Vu All Over Again

Though it didn't reach the level of buzz, concurrency and its role in the CS curriculum made several appearances at SIGCSE this year. At a birds-of-a-feather session on concurrency in the curriculum, several faculty talked about the need to teach concurrent programming and thinking right away in CS1. Otherwise, we teach students a sequential paradigm that shapes how they view problems. We need to make a "paradigm shift" so that we don't "poison students' minds" with sequential thinking.

I closed my eyes and felt like I was back in 1996, when people were talking about object-oriented programming: objects first, early, and late, and poisoning students' minds with procedural thinking. Some things never change.

Professors on Parade

How many professors throw busy slides full of words and bullet points up on the projector, apologize for doing so, and then plow ahead anyway? Judging from SIGCSE, too many.

How many professors go on and on about the importance of active learning, then give straight lectures for 15, 45, or even 90 minutes? Judging from SIGCSE, too many.

Mismatches like these are signals that it's time to change what we say, or what we do. Old habits die hard, if at all.

Finally, anyone who thinks professors are that much different than students, take note. In several sessions, including Aho's talk on teaching compilers, I saw multiple faculty members in the audience using their cell phones to read e-mail, surf the web, and play games. Come on... We sometimes say, "So-and-so wrote the book on that", as a way to emphasize the person's contribution. Aho really did write the book on compilers. And you'd rather read e-mail?

I wonder how these faculty members managed not to pay attention before we invented cell phones.

An Evening of Local Cuisine

Some people may not be all that excited by Milwaukee as a conference destination, but it is a sturdy Midwestern industrial town with deep cultural roots in its German and Polish communities. I'm not much of a beer guy, but the thought of going to a traditional old German restaurant appealed to me.

My last night in town, I had dinner at Mader's Restaurant, which dates to 1902 and features a fine collection of art, antiques, and suits of medieval armour "dating back to the 14th century". Over the years they have served political dignitaries such as the Kennedys and Ronald Reagan and entertainers such as Oliver Hardy (who, if the report is correct, ate enough pork shanks on his visit to maintain his already prodigious body weight).

I dined with Jim Leisy and Rick Mercer. We started the evening with a couple of appetizers, including herring on crostinis. For dinner, I went with the Ritter schnitzel, which came with German mashed potatoes and Julienne vegetables, plus a side order of spaetzele. I closed with a light creme brulee for dessert. After these delightful but calorie-laden dishes, I really should have run on Saturday morning!

Thanks to Jim and Rick for great company and a great meal.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

March 12, 2010 9:49 PM

SIGCSE This and That, Volume 1

[A transcript of the SIGCSE 2010 conference: Table of Contents]

Day 2 brought three sessions worth their own blog entries, but it was also a busy day meeting with colleagues. So those entries will have to wait until I have a few free minutes. For now, here are a few miscellaneous observations from conference life.

On Wednesday, I checked in at the table for attendees who had pre-registered for the conference. I told the volunteer my name, and he handed me my bag: conference badge, tickets to the reception and Saturday luncheon, and proceedings on CD -- all of which cost me in the neighborhood of $150. No one asked for identification. I thought, what a trusting registration process.

This reminded me of picking up my office and building keys on my first day at my current job. The same story: "Hi, I'm Eugene", and they said, "Here are your keys." When I suggested to a colleague that this was perhaps too trusting, he scoffed. Isn't it better to work at a place where people trust you, at least until we have a problem with people who violate that trust? I could not dispute that.

The Milwaukee Bucks are playing at home tonight. At OOPSLA, some of my Canadian and Minnesotan colleagues and I have a tradition of attending a hockey game whenever we are in an NHL town. I'm as big a basketball fan as they are hockey fans, so maybe I should check out an NBA game at SIGCSE? The cheapest seat in the house is $40 or so and is far from the court. I would go if I had a posse to go with, but otherwise it's a rather expensive way to spend a night alone watching a game.

SIGCSE without my buddy Robert Duvall feels strange and lonely. But he has better things to do this week: he is a proud and happy new daddy. Congratulations, Robert!

While I was writing this entry, the spellchecker on my Mac flagged www.cs.indiana.edu and suggested I replace it with www.cs.iadiana.edu. Um, I know my home state of Indiana is part of flyover country to most Americans, but in what universe is iadiana an improvement?

People, listen to me: problem-solve is not a verb. It is not a word at all. Just say solve problems. It works just fine. Trust me.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

March 08, 2010 8:41 PM

A Knowing-and-Doing Gathering at SIGCSE?

SIGCSE 2010 logo

I'm off tomorrow to SIGCSE. I'm looking forward to several events, among them the media computation workshop, the New Educators Roundtable, several sessions on programming languages and compilers, and especially a keynote address by physics Nobel laureate Carl Wieman, who has lots to say about using science to teach science. It should be a busy and fun week.

A couple of readers have indicated interest in visiting with me over a coffee break at the conference. Reader Matthew Hertz suggested something more: an informal meeting of Knowing and Doing readers. The lack of comments on this blog notwithstanding, I love hearing from readers, whether they have ideas to share or concerns with my frequently sketchy logic. As a reader myself, I often like to put a face on the authors I read. A few readers of this blog feel the same. My guess is that readers of my blog probably have a lot in common, and they might gain as much from meeting each other as meeting me!

So. If you are interested in meeting up with me at SIGCSE, partaking in an informal gathering of Knowing and Doing readers, or both, drop me a line by e-mail or on Twitter @wallingf. I'll gauge interest and let everyone know the verdict. I'm sure that, if there's interest, we can find a time and space to connect.


Posted by Eugene Wallingford | Permalink | Categories: General

February 22, 2010 6:56 PM

I'll Do Homework, But Only for a Grade

In the locker room one morning last week, I overheard two students talking about their course work. One of the guys eventually got himself pretty worked up while talking about one professor, who apparently gives tough exams, and exclaimed, "We worked two and a half hours on that homework, and he didn't even grade it!"

Yesterday, I was sitting with my daughters while they did some school work. One of them casually commented, "We all stopped putting too much effort into Teacher Smith's homework when we figured out s/he never grades it."

I know my daughter's situation up close and so know what she means. She tends to go well beyond the call of duty on her assignments, in large part because she is in search of a perfect grade. With time an exceedingly scarce and valuable resource, she faces an optimization problem. It turns out she can put in less effort on her homework than she ordinarily does and still do fine on her test. With no prospect of a higher grade from putting more time into the assignment to pull her along, she is willing to economize a bit and spend her time elsewhere.

Maybe that's just what the college student meant when I overheard him that morning. Perhaps he is actually overinvesting in his homework relative to its value for learning, because he seeks a higher grade on the homework component of the course. That's not the impression I got from my unintentional eavesdropping, though. I left the locker room thinking that he sees value in doing the homework only if it is graded, only if it contributes to his course grade.

This is the impression too many college students give their instructors. If it doesn't "count", why do it?

Maybe I was like that in college, too. I know that grades were important to me, and as a double-major trying to graduate in four years after spending much of my freshman year majoring in something else, I was taking a heavy class load. Time was at a premium. Who has time or energy to do things that don't count?

Even if I did not understand it then, I know now that the practice itself is an invaluable part of how I learned. Without lots of practice writing code, we don't even learn the surface details of our language, such as syntax and idiom, let alone reach a deep understanding of solving problems. In the more practical terms expressed by the student in the locker room, without lots of practice, most every exam will seem too long, too difficult, and too harshly graded. That prof of his has found a way to get the student to invest time in learning. What a gift!

We cannot let the professor off the hook, though. If s/he tells the class that the assignment will be graded, or even simply gives students the impression that it "counts for something", then not to grade the assignment is a deception. Such a tactic is justified only in exceptional circumstances, and not only on moral grounds. As Teacher Smith has surely learned by now, students are smart enough not to fall for a lie too many times before they direct their energies elsewhere.

In general, though, homework is a gift: a chance to learn under controlled conditions. I'm pretty sure that students don't see it this way. This reminds me of a conversation I had with my colleague Mark Jacobson a couple of weeks ago. We were discussing the relative abundance and paucity of a grateful attitude among faculty in general. He recalled that, in his study of the martial arts, he had encountered two words for "thank you". One, suki, from the Japanese martial arts, means to see events in our lives as opportunity or gift. Another, sugohasameeda, comes from Korean Tae Kwon Do and is used to say, "Thank you for the workout".

Suki and sugohasameeda are related. One expresses suki when things do not go the way we wish, such as when we have a flat tire or when a work assignment doesn't match our desires. One expresses sugohasameeda in gratitude to one's teacher for the challenging and painful work that makes us grow, such as workouts that demand our all. I see elements of both in the homework we are assigned. Sugohasameeda seems to be spot-on with homework, yet suki comes into play, too, in cases such as the instructor going counter to our expectations and not grading an assignment.

I do not find myself in the role of student as much these days, but I can see so many ways that I can improve my own sense of gratefulness. I seem to live sugohasameeda more naturally these days, though incompletely. I am far too often lacking in suki. My daily life would be more peaceful and whole if I could recognize the opportunity to grow through undesired events with gratitude.

One final recollection. Soon after taking my current job, I met an older gentleman who had worked in a factory for 30+ years. He asked where I worked, and when I said, "I teach at the university", he said, "That beats workin' for a livin'". My first reaction was akin to amused indignation. He obviously didn't know anything about what my job was like.

Later I realized that there was a yin to that yang. I am grateful to have a career in which I can do so many cool things, explore ideas whenever they call to me, and work with students who learn and help me to learn -- to do things I love every day. So, yeah, I guess my job does beat "workin' for a livin'".

I just wish more students would take their homework seriously.

~~~~

My colleague Mark also managed to connect his ideas about gratitude from the martial arts to the 23rd Psalm of the Christian Bible. The green pastures to which it famously refers are not about having everything exactly as I want it, but seeing all things as they are -- as gift, as opportunity, as suki. I continue to learn from him.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

February 11, 2010 5:40 PM

Creativity and the Boldness of Youth

While casting about Roy Behrens's blog recently, I came across a couple of entries that connected with my own experience. In one, Behrens discusses Arthur Koestler and his ideas about creativity. I enjoyed the entire essay, but one of its vignettes touched a special chord with me:

In 1971, as a graduate student at the Rhode Island School of Design, I finished a book manuscript in which I talked about art and design in relation to Koestler's ideas. I mailed the manuscript to his London home address, half expecting that it would be returned unopened. To my surprise, not only did he read it, he replied with a wonderfully generous note, accompanied by a jacket blurb.

My immediate reaction was "Wow!", followed almost imperceptibly by "I could never do such a thing." But then my unconscious called my bluff and reminded me that I had once done just such a thing.

Back in 2004, I chaired the Educators' Symposium at OOPSLA. As I first wrote back then, Alan Kay gave the keynote address at the Symposium. He also gave a talk at the main conference, his official Turing Award lecture. The Educators' Symposium was better, in large part because we gave Kay the time he needed to say what he wanted to say.

2004 was an eventful year for Kay, as he won not only the Turing Award but also the Draper Prize and Kyoto Prize. You might guess that Kay had agreed to give his Turing address at OOPSLA, given his seminal influence on OOP and the conference, and then consented to speak a second time to the educators.

But his first commitment to speak was to the Educators' Symposium. Why? At least in part because I called him on the phone and asked.

Why would an associate professor at a medium-sized regional public university dare to call the most recent Turing Award winner on the phone and ask him to speak at an event on the undercard of a conference? Your answer is probably as good as mine. I'll say one part boldness, one part hope, and one part naivete.

All I know is that I did call, hoping to leave a message with his secretary and hoping that he would later consider my request. Imagine my surprise when his secretary said, "He's across the hall just now; let me get him." My heart began to beat in triple time. He came to the phone, said hello, and we talked.

For me, it was a marvelous conversation, forty-five minutes chatting with a seminal thinker in my discipline, of whose work I am an unabashed fan. We discussed ideas that we share about computer science, computer science education, and universities. I was so caught up in our chat that I didn't consider just how lucky I was until we said our goodbyes. I hung up, and the improbability of what had just happened soaked in.

Why would someone of Kay's stature agree to speak at a second-tier event before he had even been contacted to speak at the main event? Even more, why would he share so much time talking to me? There are plenty of reasons. The first that comes to mind is most important: many of the most accomplished people in computer science are generous beyond my ken. This is true in most disciplines, I am sure, but I have experienced it firsthand many times in CS. I think Kay genuinely wanted to help us. He was certainly willing to talk to me at some length about my hopes for the symposium and the role he could play.

I doubt that this was enough to attract him, though. The conference venue being Vancouver helped a lot; Kay loves Vancouver. The opportunity also to deliver his Turing Award lecture at OOPSLA surely helped, too. But I think the second major reason was his longstanding interest in education. Kay has spent much of his career working toward a more authentic kind of education for our children, and he has particular concerns with the state of CS education in our universities. He probably saw the Educators' Symposium as an opportunity to incite revolution among teachers on the front-line, to encourage CS educators to seek a higher purpose than merely teaching the language du jour and exposing students to a kind of computing calcified since the 1970s. I certainly made that opportunity a part of my pitch.

For whatever reason, I called, and Kay graciously agreed to speak. The result was a most excellent keynote address at the symposium. Sadly, his talk did not incite a revolt. It did plant seeds in the minds of at least a few of us, so there is hope yet. Kay's encouragement, both in conversation and in his talk, inspires me to this day.

Behrens expressed his own exhilaration "to be encouraged by an author whose books [he] had once been required to read". I am in awe not only that Behrens had the courage to send his manuscript to Koestler but also that he and Koestler continued to correspond by post for over a decade. My correspondence with Kay since 2004 has been only occasional, but even that is more than I could have hoped for as an undergrad, when I first heard of Smalltalk or, as a grad student, when I first felt the power of Kay's vision by living inside a Smalltalk image for months at a time.

I have long hesitated to tell this story in public, for fear that crazed readers of my blog would deluge his phone line with innumerable requests to speak at conferences, workshops, and private parties. (You know who you are...) Please don't do that. But for a few moments once, I felt compelled to make that call. I was fortunate. I was also a recipient of Kay's generosity. I'm glad I did something I never would do.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Personal

February 10, 2010 6:43 PM

Recent Connections: Narrative and Computation

Reader Clint Wrede sent me a link to A Calculus of Writing, Applied to a Classic, another article about author Zachary Mason and his novel The Lost Books of the Odyssey. I mentioned Mason and his book in a recent entry, Diverse Thinking, Narrative, Journalism, and Software, which considered the effect of Mason's CS background on his approach to narrative. In "A Calculus of Writing", Mason makes that connection explicit:

"What I'm interested in scientifically is understanding thought with computational precision," he explained. "I mean, the romantic idea that poetry comes from this deep inarticulable ur-stuff is a nice idea, but I think it is essentially false. I think the mind is articulable and the heart probably knowable. Unless you're a mystic and believe in a soul, which I don't, you really don't have any other conclusion you can reach besides that the mind is literally a computer."

I'm not certain whether the mind is or is not a computer, but I share Mason's interest in "understanding thought with computational precision". Whether poets and novelists create through a computational process or not, building ever-more faithful computational models of what they do interests people like Mason and me. It also seems potentially valuable as a way to understand what it means to be human, a goal scientists and humanists share.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

February 09, 2010 7:13 PM

Programs as Art

In my previous entry I mentioned colleague and graphic designer Roy Behrens. My first blog articles featuring Behrens mentioned or centered on material from Ballast Quarterly Review, a quarterly commonplace book he began publishing in the mid-1980s. I was excited to learn recently that Behrens is beginning to reproduce material from BALLAST on-line in his new blog, The Poetry of Sight. He has already posted both entries I've seen before and entries new to me. This is a wonderful resource for someone who likes to make connections between art, design, psychology, literature, and just about any other creative discipline.

All this is prelude to my recent reading of the entry Art as Brain Surgery, which recounts a passage from an interview with film theorist Ray Carney that introduces the idea behind the entry's title:

The greatest works [of art] do brain surgery on their viewers. They subtly reprogram our nervous systems. They make us notice and feel things we wouldn't otherwise.

I read this passage as a potential challenge to an idea I had explored previously: programming is art. That article looked at the metaphor from poet William Stafford's perspectives on art. Carney looks at art from a different position, one which places a different set of demands on the metaphor. For example,

One of the principal ways [great works of art] do this is through the strangeness of their styles. Style creates special ways of knowing. ... Artistic style induces unconventional states of awareness and sensitivity.

This seems to contradict a connection to programming, a creative discipline in which we seem to prefer -- at least in our code -- convention over individuality, recognizability over novelty, and the obvious over the subtle. When we have to dig into an unfamiliar mass of legacy code, the last thing we want are "unconventional states of awareness and sensitivity". We want to grok the code, and now, so that we can extend and modify it effectively and confidently.

Yet I think we find beauty in programming styles that extend our way of thinking about the world. Many OO and procedural programmers encounter functional programming and see it as beautiful, in part because it does just what Carney says great art does:

It freshens and quickens our responses. It limbers up our perceptions and teaches us new possibilities of feeling and understanding.

The ambitious among us then try to take these new possibilities back to their other programming styles and imbue our code there with the new possibilities. We turn our new perceptions into the conventions and patterns that make our code recognizable and obvious. But this also makes our code subtle in its own way, bearing a foreign beauty and sense of understanding in the way it solves the work-a-day problems found in the program's specs. The best software patterns do this: they not only solve a problem but teach us that it can be solved at all, often by bringing an outside influence to our programs.

Perhaps it's just me, but there is something poetic in how I experience the emotional peaks of writing programs. I feel what Carney says:

The greatest works of art are not alternatives to or escapes from life, but enactments of what it feels like to live at the highest pitch of awareness -- at a level of awareness most people seldom reach in their ordinary lives.

The first Lisp interpreter, which taught us that code is data. VisiCalc, which brought program as spreading activation to our desktops, building on AI work in the 1950s and 1960s. Smalltalk. Unix. Quicksort and mergesort, implemented in thousands of programs in thousands of ways, always different but always perceptibly the same. Programmers experience these ideas and programs at the highest pitch of awareness. I walk away from the computer some days hoping that other people get to feel the way I am feeling, alive with fire deep in my bones.

The greatest works are inspired examples of some of the most exciting, demanding routes that can be taken through experience. They bring us back to life.

These days, more than ever, I relish the way even reading a good program can bring me back to life. That's to say nothing of writing one.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

February 08, 2010 2:19 PM

Calling C.P. Snow

A lot has been going on at my university the last few months to keep me busy. With significant budget cuts and a long-term change in state funding of higher education, we are beginning to see changes across campus. Last month our provost announced a move that will affect me and my department intimately: the merger of the College of Natural Sciences (CNS) with the College of Humanities and Fine Arts (CHFA). Computer Science will go from being one department among seven science/math/technology departments to a member of a college twice as large and at least that much more diverse.

The merger came as a surprise to many of us on campus, so there is a lot to do beyond simply combining operating budgets and clerical staffs. I expect everything to work out fine in the end. Colleges of arts and sciences are a common way to organize universities like ours, both of the existing colleges contain good people and many good programs, and we have a dean especially well-suited to lead the merger. Still, the next eighteen months promise to deliver a lot of uncertainty and change. Change is hard, and the resulting college will be something quite different from who we are now. Part of me is excited... There are some immediate benefits for me and CS, as we will now be in the same college with colleagues such as Roy Behrens, and with the departments with whom we have been working on a new major in digital media. Multidisciplinary work is easier to do at the university when the collaborating departments fall under the same administrative umbrella.

We are only getting started on working toward the merger, but I've already noticed some interesting differences between the two faculties. For example, at the first meeting of the department heads in my college with a faculty leader from the other college, we learned that the humanities folks have been working together on a college-wide theme of internationalization. As part of this, they have been reading a common book and participating in reading groups to discuss it.

This is a neat idea. The book provides a common ground for their faculty and helps them to work together toward a common goal. The discussion unifies their college. Together, they also create a backdrop against which many of them can do their scholarly work, share ideas, and collaborate.

Now that we are on the way to becoming one college, the humanities faculty have invited us to join them in the conversation. This is a gracious offer, which creates an opportunity for us all to unify as a single faculty. The particular theme for this year, internationalization, is one that has relevance in both the humanities and the sciences. Many faculty in the sciences are deeply invested in issues of globalization. For this reason, there may well be some cross-college discussion that results, and this interaction will likely promote the merger of the colleges.

That said, I think the act of choosing a common book to read and discuss in groups may reflect a difference between the colleges, one that is either a matter of culture or a matter of practice. For the humanities folks, this kind of discussion is a first-order activity. It is what they do within and across their disciplines. For the science folks, this kind of discussion is a second-order activity. There are common areas of work across the science departments, such as bioinformatics, but even then the folks in biology, chemistry, computer science, and math are all working on their own problems in their own ways. A general discussion of issues in bioinformatics is viewed by most scientists as about bioinformatics, not bioinformatics itself.

I know that this is a superficial analysis and that it consists of more shades of gray than sharp lines. At its best, it is a simplification. Still I found it interesting to see and hear how science faculty responded to the offer.

Over the longer term, it will be interesting to see how the merger of colleges affects what we in the sciences do, and how we do it. I expect something positive will happen overall, as we come into more frequent contact with people who think a little differently than we do. I also expect the day-to-day lives of most science faculty (and humanities faculty as well) will go on as they are now. Letterhead will change, the names of secretaries will change, but scholarly lives will go on.

The changes will be fun. Getting out of ruts is good for the brain.


Posted by Eugene Wallingford | Permalink | Categories: General

February 01, 2010 10:22 PM

A Blogging Milestone -- 10**3

Thanks to Allyn Bauer for noticing that my recent entry on The Evolution of the Textbook was the 1000th posting to this blog. Five and a half years is a long time. I am glad I'm still at it. The last few months have been difficult on the non-teaching and non-CS side of my life, and I feel like my inspiration to write about interesting ideas has been stop-and-go. But I am glad I'm still at it.

While thinking about my 1000th post, I decided to take a look back at the other digits:

Number 100 refers to #99, a review of Kary Mullis's Dancing Naked in the Mind Field. That article, in combination with the entries on algorithmic patterns and textbooks, seems pretty consistent with how I think about this blog: ideas encountered, considered, and applied. Looking at numbers 1 and 10 led me to read over the monthly archive for July 2004. Revisiting old thoughts evokes -- or creates -- a strange sort of memory, one that I enjoy.

I hope that the next 1000 entries are as much fun to write.


Posted by Eugene Wallingford | Permalink | Categories: General

January 29, 2010 7:01 PM

Diverse Thinking, Narrative, Journalism, and Software

A friend sent me a link to a New York Times book review, Odysseus Engages in Spin, Heroically, by Michiko Kakutani. My friend and I both enjoy the intersection of different disciplines and people who cross boundaries. The article reviews "The Lost Books of the Odyssey", a recent novel Kakutani calls "a series of jazzy, post-modernist variations on 'The Odyssey'" and "an ingeniously Borgesian novel that's witty, playful, moving and tirelessly inventive". Were the book written by a classicist, we might simply add it to our to-read list and move on, but it's not. Its author, Zachary Mason, is a computer scientist specializing in artificial intelligence.

I'm always glad to see examples of fellow computer scientists with interests and accomplishments in the humanities. Just as humanists bring a fresh perspective when they come to computer science, so do computer scientists bring something different when they work in the humanities. Mason's background in AI could well contribute to how he approaches Odysseus's narrative. Writing programs that make it possible for computers to understand or tell stories causes the programmer to think differently about understanding and telling stories more generally. Perhaps this experience is what enabled Mason to "[pose] new questions to the reader about art and originality and the nature of storytelling".

Writing a program to do any task has the potential to teach us about that task at a deeper level. This is true of mundane tasks, for which we often find our algorithmic description is unintentionally ambiguous. (Over the last couple of weeks, I have experienced this while working with a colleague in California who is writing a program to implement a tie-breaking procedure for our university's basketball conference.) It is all the more true for natural human behaviors like telling stories.
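To make the ambiguity point concrete, here is a minimal sketch in Python, using hypothetical tie-breaking rules of my own devising rather than the conference's actual procedure:

    # Hypothetical rule: order tied teams by their head-to-head wins
    # against one another. (Invented for illustration only.)
    def break_tie(tied_teams, results):
        # results is a list of (winner, loser) pairs from the season
        def head_to_head_wins(team):
            return sum(1 for winner, loser in results
                       if winner == team and loser in tied_teams)

        return sorted(tied_teams, key=head_to_head_wins, reverse=True)

    # The English version of the rule is silent on a crucial case: what if
    # two teams have identical head-to-head records? sorted() quietly keeps
    # their original order, which may not be what the conference intends.
    # Writing the program forces that question into the open.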

In one of those unusual confluences of ideas, the Times book review came to me the same week that I read Peter Merholz's Why Design Thinking Won't Save You, which is about the value, even necessity, of bringing different kinds of people and thinking to bear on the tough problems we face. Merholz is reacting to a trend in the business world to turn to "design thinking" as an alternative to the spreadsheet-driven analytical thinking that has dominated the world for the last few decades. He argues that "the supposed dichotomy between 'business thinking' and 'design thinking' is foolish", that understanding real problems in the world requires a diversity of perspectives. I agree.

For me, Kakutani's and Merholz's articles intersected in a second way as I applied what they might say about how we build software. Kakutani explicitly connects author Mason's CS background to his consideration of narrative:

["Lost Books" is] a novel that makes us rethink the oral tradition of entertainment that thrived in Homer's day (and which, with its reliance upon familiar formulas, combined with elaboration and improvisation, could be said to resemble software development) ...

When I read Merholz's argument, I was drawn to an analogy with a different kind of writing, journalism:

Two of Adaptive Path's founders, Jesse James Garrett and Jeffrey Veen, were trained in journalism. And much of our company's success has been in utilizing journalistic approaches to gathering information, winnowing it down, finding the core narrative, and telling it concisely. So business can definitely benefit from such "journalism thinking."

So can software development. This passage reminded me of a panel I sat on at OOPSLA several years ago, about the engineering metaphor in software development. The moderator of the panel asked folks in the audience to offer alternative metaphors for software, and Ward Cunningham suggested journalism. I don't recall all the connections he made, but they included working on tight deadlines, having work product reviewed by an editor, and highly stylized forms of writing. That metaphor struck me as interesting then, and I have since written about the relationship between software development and writing, for example here. I have also expressed reservations about engineering as a metaphor for building software, such as here and here.

I have long been coming to believe that we can learn a lot about how to build software better by studying intensely almost every other discipline, especially disciplines in which people make things -- even, say, maps! When students and their parents ask me to recommend minors and double majors that go well with computer science, I often mention the usual suspects but always make a pitch for broadening how we think, for studying something new, or studying intensely an area that really interests the students. Good will come from almost any discipline.

These days, I think that making software is like so many things and unlike them all. It's something new, and we are left to find our own way home. That is indeed part of the fun.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

January 22, 2010 9:23 PM

Calling C. S. Peirce

William Caputo channels the pragmatists:

These days, I believe the key difference between practice, value and principle (something much debated at one time in the XP community and elsewhere) is simply how likely we are to adjust them if things are going wrong for us (i.e., practices change a lot, principles rarely). But none should be immune from our consideration when our actions result in negative outcomes.

To the list of practice, value, and principle, pragmatists like Peirce, James, Dewey, and Mead would add knowledge. When we focus on their instrumental view of knowledge, it is easy to forget one of the critical implications of the view: that knowledge is contingent on experience and context. What we call "knowledge" is not unchanging truth about the universe; it is only less likely to change in the face of new experience than other elements of our belief system.

Caputo reminds us to be humble when we work to help others to become better software developers. The old pragmatists would concur, whether in asking us to focus on behavior over belief or to be open to continual adaptation to our environment. This guidance applies to teaching more than just software development.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

January 13, 2010 7:18 PM

Programming, Literacy, and Superhuman Strength

I've written occasionally here about programming as a new communications medium and the need to empower as many people as possible with the ability to write little programs for themselves. So it's probably not surprising that I read Clay Shirky's The Shock of Inclusion, which appears in Edge's How Has The Internet Changed The Way You Think?, with a thought about programming. Shirky reminds us that the revolution in thought created by the Internet is hardly in its infancy. We don't have a good idea of how the Internet will ultimately change how we think because the most important change -- to the "cultural milieu of thought" -- has not happened yet. This sounds a lot like Alan Kay on the computer revolution, and like Kay, Shirky makes an analogy to the creation of the printing press.

When we consider the full effect of the Internet, as Shirky does in his essay, we think of its effect on the ability of individuals to share their ideas widely and to connect those ideas to the words of others. From the perspective of a computer scientist, I think of programming as a form of writing, as a medium both for accomplishing tasks and for communicating ideas. Just as the Internet has lowered the barriers to publishing and enables 8-year-olds to become "global publishers of video", it lowers the barriers to creating and sharing code. We don't yet have majority participation in writing code, but the tools we need are being developed and communities of amateur and professional programmers are growing up around languages, tools, and applications. I can certainly imagine a YouTube-like community for programmers -- amateurs, people we should probably call non-programmers who are simply writing for themselves and their friends.

Our open-source software communities have taught us not only that "collaboration between loosely joined parties can work at scales and over timeframes previously unimagined", as Shirky notes, but also others of his lessons learned from the Internet: that sharing is possible in ways far beyond the 20th-century model of publishing, that "post-hoc peer review can support astonishing creations of shared value", that whole areas of human exploration "are now best taken on by groups", that "groups of amateurs can sometimes replace single experts", and that the involvement of users accelerates the progress of research and development. The open-source software community is a microcosm of the Internet. In its own way, with some conscious intent by its founders, it is contributing to the creation of the sort of Invisible College that Shirky rightly points out is vital to capitalizing on this 500-year advance in man's ability to communicate. The OSS model is not perfect and has much room for improvement, but it is a viable step in the right direction.

All I know is, if we can put the power of programming into more people's hands and minds, then we can help more people to have the feeling that led Dan Meyer to write Put THAT On The Fridge:

... rather than grind the solution out over several hours of pointing, clicking, and transcribing, for the first time ever, I wrote twenty lines of code that solved the problem in several minutes.

I created something from nothing. And that something did something else, which is such a weird, superhuman feeling. I've got to chase this.

We have tools and ideas that make people feel superhuman. We have to share them!


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

December 12, 2009 10:15 PM

The Computer Reconfigured Me

Joe Haldeman is a writer of some renown in the science fiction community. I have enjoyed a novel or two of his myself. This month he wrote the Future Tense column that closes the latest issue of Communications of the ACM, titled Mightier Than the Pen. The subhead really grabbed my attention.

Haldeman still writes his novels longhand, in bound volumes. I scribble lots of notes to myself, but I rarely write anything of consequence longhand any more. In a delicious irony, I am writing this entry with pen and paper during stolen moments before a basketball game, which only reminds me how much my penmanship has atrophied from disuse! Writing longhand gives Haldeman the security of knowing that his first draft is actually his first draft, and not the result of the continuous rewriting in place that word processors enable. Even a new-generation word processor like WriteBoard, with automatic versioning of every edit, cannot ensure that we produce a first draft without constant editing quite as well as a fountain pen. We scientists might well think as much about the history and provenance of our writing and data.

Yet Haldeman admits that, if he had to choose, he would surrender his bound notebooks and bottles of ink:

... although I love my pens and blank books with hobbyist zeal, if I had to choose between them and the computer there would be no contest. The pens would have to go, even though they're so familiar they're like part of my hand. The computer is part of my brain. It has reconfigured me.

We talk a lot about how the digital computer changes how we work and live. This passage expresses that idea as well as any I've seen and goes one step more. The computer changes how we think. The computer is part of my brain. It has reconfigured me.

Unlike so many others, Haldeman -- who has tinkered with computers in order to support his writing since the Apple II -- is not worried about this new state of the writer's world. This reconfiguration is simply another stage in the ongoing development of how humans think and work.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

December 05, 2009 2:01 PM

Some Things I Have Learned, along with Milton Glaser

I recently came across a link to graphic designer Milton Glaser's 2001 talk Ten Things I Have Learned. Several of his lessons struck close to home for me.

  • After spending time with a person, do you usually feel exhilarated or exhausted? If you always feel tired, then you have been poisoned. Avoid people who do this to you. I would add positive advice in the same vein: Try to surround yourself with people who give you energy, and try to be a person who energizes those around you.

  • Less is not necessarily more. That's a lie we tell ourselves too often when we face cuts. "Do more with less." In the short term, this can be a way to become more efficient. In the long term, it starves us and our organizations. I like Glaser's idea better: Just enough is more.

  • If you think you have achieved enlightenment, "then you have merely arrived at your limitation". I see this too often in academia and in industry. Glaser uses this example of the lesson that doubt is better than certainty, but it also relates to an earlier lesson in the talk: Style is not to be trusted. Styles come and go; integrity and substance remain vital no matter what the fashion is for expressing solutions.

This talk ends with a passage that brought to mind discussion in recent months among agile software developers and consultants about the idea of certifying agile practitioners:

Everyone interested in licensing our field might note that the reason licensing has been invented is to protect the public not designers or clients. "Do no harm" is an admonition to doctors concerning their relationship to their patients, not to their fellow practitioners or the drug companies.

Much of the discussion in the agile community about certification seems more about protecting the label "agile" from desecration than about protecting our clients. It may well be that some clients are being harmed when unscrupulous practitioners do a lazy or poor job of introducing agile methods, because they are being denied the benefits of a more responsive development process grounded in evidence gathered from continuous feedback. A lot of the concern, though, seems to be with the chilling effect that poorly-executed agile efforts have on the ability of honest and hard-working agile consultants and developers to peddle our services under that banner.

I don't know what the right answer to any of this is, but I like the last sentence of Glaser's talk:

If we were licensed, telling the truth might become more central to what we do.

Whether we are licensed or not, I think the answer will ultimately come back to a culture of honesty and building trust in relationships with our clients. So we can all practice Glaser's tenth piece of advice: Tell the truth.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

November 23, 2009 2:53 PM

Personality and Perfection

Ward Cunningham recently tweeted about his presentation at Ignite Portland last week. I enjoyed both his video and his slides.

Brian Marick has called Ward a "gentle humanist", which seems so apt. Ward's Ignite talk was about a personal transformation in his life, from driver to cyclist, but as is often the case he uncovers patterns and truths that transcend a single experience. I think that is why I always learn so much from him, whether he is talking about software or something else.

From this talk, we can learn something about change in habit, thinking, and behavior. Still, one nugget from the talk struck me as rather important for programmers practicing their craft:

Every bike has personality. Get to know lots of them. Don't search for perfection. Enjoy variety.

This is true about bikes and also true about programming languages. Each has a personality. When we know but one or two really well, we have missed out on much of what programming holds. When we approach a new language expecting perfection -- or, even worse, that it have the same strengths, weaknesses, and personality as one we already know -- we cripple our minds before we start.

When we get to know many languages personally, down to their personalities, we learn something important about "paradigms" and programming style: They are fluid concepts, not rigid categories. Labels like "OO" and "functional" are useful from some vantage points and exceedingly limiting from others. That is one of the truths underlying Anton van Straaten's koan about objects and closures.
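The koan's point is easy to demonstrate. Here is a small sketch of my own in Python (not van Straaten's formulation): the same counter expressed once as an object and once as a closure, each wearing the other's mask.

    # A class-based counter: state in an instance variable, behavior in a method.
    class Counter:
        def __init__(self):
            self.count = 0
        def increment(self):
            self.count += 1
            return self.count

    # A closure-based counter: the same state and behavior, no class in sight.
    def make_counter():
        count = 0
        def increment():
            nonlocal count
            count += 1
            return count
        return increment

    c1 = Counter()
    c2 = make_counter()
    print(c1.increment(), c2.increment())   # prints: 1 1

From one vantage point the first is "OO" and the second "functional"; from another, they are the same program.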

We should not let our own limitations limit how we learn and use our languages -- or our bikes.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

November 21, 2009 5:54 AM

Quotes of the Day

The day was yesterday.

I am large, I contain multitudes.

The to-do list is a time capsule, containing missives and pleas to your future selves. ... Why is it not trivially easy to carry out items on your own to-do list? And the answer is: Because the one writing the list, and the one carrying it out are two different people.

Now I understand the problem... my to-do list is a form of time travel.

Open to Multitudes

It's the kind of culture that can tolerate rap music and extreme sports that can also create space for guys like Page and Brin and Google. That's one of our hidden strengths.

This is from economist Paul Romer, as quoted by Tyler Cowen. I agree. We need to try out lots of ideas to find the great ones.

Going to an Extreme

I'm not interested in writing short stories. Anything that doesn't take years of your life and drive you to suicide hardly seems worth doing.

Cormac McCarthy must live on the edge. This is one of those romantic notions that has never appealed to me. I've never been so driven -- nor felt like I wanted to be.

A Counterproposal

6. MAKE MANY SKETCHES

Join the best sketches to produce others and improve them until the result is satisfactory.

To make sketches is a humble and unpretentious approach toward perfection.

... says composer Arnold Schonberg, as quoted at peripatetic axiom. This is more my style.

Speaking of Perfection

My perfect day is sitting in a room with some blank paper. That's heaven. That's gold and anything else is just a waste of time.

Again from Cormac McCarthy. Unlike McCarthy, I do not think that everything else is a waste of time. Yet I feel a kinship with his sense of a perfect day. To sit in a room, alone, with an open terminal. To write, whether prose or code. But especially code.


Posted by Eugene Wallingford | Permalink | Categories: General

November 15, 2009 8:02 PM

Knowledge Arbitrage

A couple of weeks back, Brian Foote tweeted:

Ward Cunningham: Pure Knowledge arbitrageurs will no longer gain by hoarding as knowledge increasingly becomes a plentiful commodity #oopsla

This reminds me of a "quought" of the day that I read a couple of years ago. Paraphrased, it asked marketers: What will you do when all of your competitors know all of the same things you do? Ward's message broadens the implication from marketers to any playing field on which knowledge drives success. If everyone has access to the same knowledge, how do you distinguish yourself? Your product? The future looks a bit more imposing when no one starts with any particular advantage in knowledge.

Ward's own contributions to the world -- the wiki and extreme programming among them -- give us a hint as to what this new future might look like. Hoarding is not the answer. Sharing and building together might be.

The history of the internet and the web tells us that the result of collaboration and open knowledge may well be a net win for all of us over a world in which knowledge is hoarded and exploited for gain in controlled bursts.

Part of the ideal of the academy has always been the creation and sharing of knowledge. But increasingly its business model has been exposed as depending on the sort of knowledge arbitrage that Ward warns against. Universities now compete in a world of knowledge more plentiful and open than ever before. What can they do when all of their customers have access to much of the same knowledge that they hope to disseminate? Taking a cue from Ward, universities probably need to be thinking hard about how they share knowledge, how they help students, professors, and industry build knowledge together, and how they add value in their unique way through academic inquiry.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

November 09, 2009 9:54 PM

Reality versus Simulation

A recent thread on the XP mailing list discussed the relative merits of using physical cards for story planning versus a program, even something as simple as a spreadsheet. Someone had asked, "why not use a program?", and lots of XP aficionados explained why not.

I mostly agree with the explanations, but one undercurrent in the discussion bothered me. It is best captured in this comment:

The software packages are simulations. The board and cards are the real thing.

I was immediately transported twenty years back, to a set of old arguments against artificial intelligence. They went something like this... If we write a program to simulate a rainstorm, we will not get wet; it is just a simulation. By the same token, we can write a program to simulate symbol processing the way we think people do it, but it's not real symbol processing; it is just a simulation. We can write a program to simulate human thought, but it's not real; it's just simulated thought. Just as a simulated rainstorm will not make us wet, simulated thought can't enlighten us. Only human thought is real.

That always raised my hackles. I understand the difference between a physical phenomenon like rain and a simulation of it. But symbol processing and thought are something different. They are physical in our brains, but they manifest themselves in our interactions with the exterior world, including other symbol processors and thinkers. Turing's insight in his seminal paper Computing Machinery and Intelligence was to separate the physical instantiation of intelligent behavior from the behavior itself. The essence of the behavior is its ability to communicate ideas to other agents. If a program can carry on such communication in a way indistinguishable from how humans communicate, then on what grounds are we to say that the simulation is any less real than the real thing?

That seems like a long way to go back for a connection, but when I read the above remark, from someone whose work I greatly respect, it, too, raised my hackles. Why would a software tool that supports an XP practice be "only" a simulation and current practice be the real thing?

The same person prefaced his conclusion above with this, which explains the reasoning behind it:

Every software package out there has to "simulate" some definite subset of these opportunities, and the more of them the package chooses to support the more complex to learn and operate it becomes. Whereas with a physical board and cards, the opportunities to represent useful information are just there, they don't need to be simulated.

The current way of doing things -- index cards and post-it notes on pegboards -- is a medium of expression. It is an old medium, familiar, comfortable, and well understood, but a medium nonetheless. So is a piece of software. Maybe we can't express as much in our program, or maybe it's not as convenient to say what we want to say. This disadvantage is about what we can say or say easily. It's not about reality.

The same person has the right idea elsewhere in his post:

Physical boards and cards afford a much larger world of opportunities for representing information about the work as it is getting done.

Ah... The physical medium fits better into how we work. It gives us the ability to easily represent information as the work is being done. This is about work flow, not reality.

Another poster gets it right, too:

It may seem counterintuitive for those of us who work with technology, but the physical cards and boards are simply more powerful, more expressive, and more useful than electronic storage. Maybe because it's not about storage but communication.

The physical medium is more expressive, which makes it more powerful. More power combined with greater convenience makes the physical medium more useful. This conclusion is about communication. It doesn't make the software tool less real, only less useful or effective.

You will find that communication is often the bottom line when we are talking about software development. The agile approaches emphasize communication and so occasionally reach what seems to be a counterintuitive result for a technical profession.

I agree with the XP posters about the use of physical cards and big, visible boards for displaying them. This physical medium encourages and enhances human communication in a way that most software does not -- at least for now. Perhaps we could create better software tools to support our work? Maybe computer systems will evolve to the point that a live display board will dynamically display our stories, tasks, and status in a way that meshes as nicely with human workflow and teamwork as physical displays do now. Indeed, this is probably possible now, though not as inexpensively or as conveniently as a stash of index cards, a cheap box of push pins, and some cork board.

I am open to a new possibility. Framing the issue as one of reality versus simulation seems to imply that it's not possible. I think that perspective limits us more than it helps us.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

October 30, 2009 4:31 PM

Writing to Learn, Book-Style

I know all about the idea of "writing to learn". It is one of the most valuable aspects of this blog for me. When I first got into academia, though, I was surprised to find how many books in the software world are written by people who are far from experts on the topic. Over the years, I have met several serial authors who pick a topic in conjunction with their publishers and go. Some of these folks write books that are successful and useful to people. Still, the idea has always seemed odd.

In the last few months, I've seen several articles in which authors talk about how they set out to write a book on a topic they didn't know well or even much at all. Last summer, Alex Payne wrote this about writing the tapir book:

I took on the book in part to develop a mastery of Scala, and I've looked forward to learning something new every time I sit down to write, week after week. Though I understand more of the language than I did when I started, I still don't feel that I'm on the level of folks like David Pollak, Jorge Ortiz, Daniel Spiewak, and the rest of the Scala gurus who dove into the language well before Dean or I. Still, it's been an incredible learning experience ...

Then today I ran across Noel Rappin's essay about PragProWriMo:

I'm also completely confident in this statement -- if you are willing to learn new things, and learn them quickly, you don't need to be the lead maintainer and overlord to write a good technical book on a topic. (Though it does help tremendously to have a trusted super-expert as a technical reference.)

Pick something that you are genuinely curious about and that you want to understand really, really well. It's painful to write even a chapter about something that doesn't interest you.

This kind of writing to learn is still not a part of my mentality. I've certainly chosen to teach courses in order to learn -- to have to learn -- something I want to know, or know better. For example, I didn't know any PHP to speak of, so I gladly took on a 5-week course introducing PHP as a scripting language. But I have a respect for books, perhaps even a reverence, that makes the idea of publishing one on a subject I am not expert in unpalatable. I have too much respect for the people who might read it to waste their time.

I'm coming to learn that this probably places an unnecessary limit on myself. Articles like Payne's and Rappin's remind me that I can study something and become expert enough to write a book that is useful to others. Maybe it's time to set out on that path.

Getting people to take this step is one good reason to heed the call of Pragmatic Programmers Writing Month (PragProWriMo), which is patterned after the more generic National Novel Writing Month (NaNoWriMo). Writing is like anything else: we can develop a habit that helps us to produce material regularly, which is a first and necessary step to ever producing good material regularly. And if research results on forming habits are right, we probably need a couple of months of daily repetitions to form a habit we can rely on.

So, whether it's a book or blog you have in mind, get to writing.

(Oh, and you really should click through the link in Rappin's essay to Merlin Mann's Making the Clackity Noise for a provocative -- if salty -- essay on why you should write. From there, follow the link to Buffering, where you will find a video of drummer Sonny Payne playing an extended solo for Count Basie's orchestra. It is simply remarkable.)


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

October 22, 2009 4:00 PM

Local Boys Succeed in Gaming Industry

I went last night to see a talk by Aaron Schurman, co-founder and CEO of Phantom EFX. Phantom is a homegrown local company that makes video games. The talk told the story of their latest and most ambitious release, Darkest of Days, a first-person shooter game built around historic narratives and a time-travel hook.

Phantom got its start with casino games. They started from scratch, with no training in software development. Part of the team did have background in graphic design, which gave them a foundation to build on. In the last decade, they have become serious players in the market, with several top-selling titles.

I am not a "computer gamer" and rarely ever play the sort of games that are so popular with students these days. But as a computer scientist, I am interested in them as programs. Nearly every game these days requires artificial intelligence, both to play the game and, in character-based games, to provide realistic agents in the simulated world. My background in AI made me a natural local resource to the company when they were getting started. As a result, I have had the good fortune to be a long-time friend of the company.

Aaron's talk was like the game; it had something for almost everyone: history, creative writing, art, animation, media studies, and computer science. The CS is not just AI, of course. A game at this level of scale is a serious piece of software. The developers faced a number of computational constraints in filling a screen with a large number of realistic humans while maintaining the frame rate required for an acceptable video experience. There were also software development challenges, such as building for multiple platforms in sync and working with contractors distributed across the globe. There is a lot to be learned by conducting a retrospective of this project.

Aaron spoke a lot about the challenges they faced. His response was the sort you expect from people who succeed: Don't be dismayed. Do you think you are too small or too poor to compete with the big boys? Don't be dismayed. You can find a way, even if it means rolling your own gaming engine because the commercial alternatives are too expensive. Don't know how to do something? Don't be dismayed. You simply don't know yet. Work hard to learn. Everyone can do that.

The practical side of me is glad that we are so close to a company like this and have connections. We've recently begun exploring ways to place our students at Phantom EFX for internships. I love the idea of running an iPhone development class to port some of the company's games to that market. This is a great opportunity for the students, but also for professors!

The dreamer in me was inspired by this talk. I am always impressed when I meet people, especially former students, who have a vision to build something big. This sort of person accepts risks and works hard. The return on that investment can be huge, both monetarily and spiritually. I hope more of our students take stories like this to heart and realize that entrepreneurship offers an alternative career path when they have ideas and are willing to put their work hours toward something that they really care about.

At its bottom, this is the story of small-town Iowa guys staying in small-town Iowa and building a new tech company. Now they have Hollywood producers knocking on their doors, bidding to option their script and concept for a major motion picture. Not a bad way to make a living.


Posted by Eugene Wallingford | Permalink | Categories: General

October 15, 2009 5:22 PM

Conscience and Clarity

I've been working on a jumble of administrative duties all week long, with an eye toward the weekend. While cleaning up some old files, I ran across three items that struck me as somehow related, at least in the context of the last few days.

Listen to Your Conscience

Here's a great quote from an old article by John Gruber:

If you think your users would be turned off by an accurate description of something, that doesn't mean you should do it without telling them. It means you shouldn't be doing whatever it is you don't want to tell them about.

This advice applies to so many different circumstances. It's not bulletproof, but it's worth being the first line of thought whenever your conscience starts to gnaw at you.

Listen With Your Heart

And here's a passage on writing from the great Joni Mitchell:

You could write a song about some kind of emotional problem you are having, but it would not be a good song, in my eyes, until it went through a period of sensitivity to a moment of clarity. Without that moment of clarity to contribute to the song, it's just complaining.

This captures quite nicely one of the difficulties I have with blogging about being a department head: I rarely seem to have that moment of clarity. And I need them, even if I don't intend to blog about the experience.

Somebody Must Be Listening

One piece of nice news... I recently received a message saying that Knowing and Doing has been included in a list of the top 100 blogs by professors on an on-line learning web site. There are a lot of great blogs on that list, and it's an honor to be included among them. I follow a dozen or so of those blogs closely. One that some of my readers might not be familiar with is Marginal Revolution, which looks at the world through the lens of an economist.

If I could add only one blog to that list, right now it would be The Endeavour, John Cook's blog on software, math, and science. I learn a lot from the connections he makes.

In any case, it's good to know that readers find some measure of value here, too. I'll keep watching for the moments of clarity about CS, software development, teaching, running, and life that signal a worthwhile blog entry.


Posted by Eugene Wallingford | Permalink | Categories: General

October 13, 2009 9:31 AM

Living with Yesterday

After my long run yesterday, I was both sorer and more tired ('tireder'?) than after last Sunday's big week and fast long run. Why? I cut my mileage last week from 48 miles to 38, and my long run from 22 miles to 14. I pushed hard only during Wednesday's track workout. Shouldn't last week have felt easy, and shouldn't I be feeling relatively rested after an easy long run yesterday?

No, I shouldn't. The expectation that I should is a mental illusion that running long ago taught me was an impostor. It's hard to predict how I will feel on any day, especially during training, but the best predictor isn't what I did this week, but last; not today, but yesterday.

Intellectually, this should not surprise us. The whole reason we train today is to be better -- faster, stronger, more durable -- tomorrow. My reading of the running literature says that it takes seven to ten days for the body to integrate the effects of a specific workout. It makes sense that the workout can be affecting our body in all sorts of ways during that period.

This is good example of how running teaches us a lesson that is true in all parts of life:

We are what and who we are today because of what we did yesterday.

This is true of athletic training. It is true of learning and practice more generally. What we practice is what we become.

More remarkable than that this is true in my running is that I can know and write about this habit of mind as an intellectual idea without making an immediate connection to my running. I often find in writing this blog that I come back around on the same ideas, sometimes in a slightly different form and sometimes in much the same form as before. My mind seems to need that repetition before it can internalize these truths as universal.

When I say that I am living with yesterday, I am not saying that I can live anywhere but in this moment. That is all I have, really. But it is wise to be mindful that tomorrow will find me a product of what I do today.


Posted by Eugene Wallingford | Permalink | Categories: General, Running, Teaching and Learning

October 07, 2009 8:11 PM

Refactoring as Rewriting

Reader and occasional writer that I am, Michael Nielsen's Six Rules for Rewriting seemed familiar in an instant. I recognize their results in good writing, and even when I don't practice them successfully in my own writing I know they would often make it better.

Occasional programmer that I am, they immediately had me thinking... How well do they apply to refactoring? Programming is writing, and refactoring is one of our common forms of rewriting... So let's see.

First of all, let's acknowledge up front that a writer's rewriting is not identical to a programmer's refactoring. For one thing, the writer does not have automated tests to help her ensure that the rewrite doesn't break anything. It's not clear to me exactly what not breaking anything means for a writer, though I have a vague sense that it is meaningful for most writing.

Also, the term "refactoring" does not refer to any old rewrite of a code base. It has a technical meaning: to modify code without changing its essential functionality. There are rewrites of a code base that are not refactoring. I think that's true of writing in general, though, and I also think that Nielsen is clearly talking about rewrites that do not change the essential content or purpose of a text. His rules are about how to say the same things more effectively. That seems close enough to our technical sense of refactoring to make this exercise worth an effort.

Striking

Every sentence should grab the reader and propel them forward.

Can we say that every line of code should grab the reader and propel her forward?! I certainly prefer to read programs in which every statement or expression tells me something important about what the program is and does. Some programming languages make this harder to do, with boilerplate and often more noise than signal.

Perhaps we could say that every line of code should propel the program forward, not get in the way of its functionality? This says more about the conciseness with which the programmer writes, and fits the spirit of Nielsen's rule nicely.

Every paragraph should contain a striking idea, originally expressed.

Can we say that every function or class should contain a striking idea, originally expressed? Functions and classes that do not express a striking idea usually get in the reader's way. In programming, though, we often write "helpers", auxiliary functions or classes that assist another in expressing an essential, even striking, idea. The best helpers capture an idea of deep value, but it may be the nature of decomposition that we sometimes create ones that are striking only in the context of the larger system.

The most significant ideas should be distilled into the most potent sentences possible.

Yes! The most significant ideas in our programs should be distilled into the most potent code possible: expressions, statements, functions, classes, whatever the abstractions our language and style provide.

Style

Use the strongest appropriate verb.

Of course. Names matter. Use the strongest, most clearly named primitives and library functions possible. When we create new functions, give them strong, clear names. This rule applies to our nouns, too. Our variables and classes should carry strong names that clearly name their concept. No more "manager" or "process" nouns. They avoid naming the concept. What do those objects do?

This rule also applies more broadly to coding style. It seems to me that Tell, Don't Ask is about strength in our function calls.
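A small illustration, with invented names, of what strong verbs buy us at the call site:

    # Weak: a "manager" noun avoids naming the concept, and the caller
    # must ask for data and then act on it:
    #     if manager.get_balance(acct) >= amount:
    #         manager.set_balance(acct, manager.get_balance(acct) - amount)

    # Strong: a clear noun for the object, a strong verb for the method.
    # The caller tells the object what to do.
    class Account:
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    checking = Account(100)
    checking.withdraw(40)    # the verb says it all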

Beware of nominalization.

In code, this guideline prescribes a straightforward idea: Don't make a class when a function will do. You Aren't Gonna Need It.
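A sketch of the same smell and its cure, again with invented names:

    # A nominalization: a class whose only job is to dress up one verb.
    class ReportPrinter:
        def __init__(self, report):
            self.report = report

        def execute(self):
            print(self.report)

    # A function will do.
    def print_report(report):
        print(report)

    ReportPrinter("monthly totals").execute()   # ceremony
    print_report("monthly totals")              # the same work, plainly named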

Meta

None of the above rules should be consciously applied while drafting material.

Anyone who writes a lot knows how paralyzing it can be to worry about writing good prose before getting words down onto paper, or into an emacs buffer. Often we don't know what to write until we write it; why try to write something perfect before we know what it is?

This rule fits nicely with most lightweight approaches to programming. I even encourage novice programmers to write code this way, much to the chagrin of my more engineering-oriented colleagues. Don't be paralyzed by the blank screen. Write freely. Make something work, anything on the path to a solution, and only then worry about making it right and fast. Do the simplest thing that will work. Only after your code works do you rewrite to make it better.

Not all rewriting is refactoring, but all refactoring is rewriting. Write. Pass the test. Refactor.
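One turn of that cycle, sketched with Python's standard unittest module:

    import unittest

    # Write: the first draft only has to work.
    def total(prices):
        t = 0
        for p in prices:
            t = t + p
        return t

    # Pass the test.
    class TotalTest(unittest.TestCase):
        def test_total(self):
            self.assertEqual(total([1, 2, 3]), 6)

    # Refactor: same behavior, more potent expression, with the test
    # standing guard:
    #     def total(prices):
    #         return sum(prices)

    if __name__ == "__main__":
        unittest.main()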

Many people find that refactoring provides the most valuable use of design patterns, as a target toward which one moves the code. This is perhaps a more important use of patterns than initial design, at which time many of us tend to overdesign our programs. Joshua Kerievsky's Refactoring to Patterns book shows programmers how to do this safely and reliably. I wonder if there is any analogue to this book in the writing world, or if there even could be such a book?

I once wrote a post on writing in an agile style, and rewriting played a key role in that idea. Some authors like rewriting more than writing, and I think you can say the same thing of many, many programmers. Refactoring brings a different kind of joy, at getting something right that was before almost right -- which is, of course, another way of saying not yet right.

I recall once talking with a novelist over lunch about tools for writers. Even the most humble word processor has done so much to change how authors write and rewrite. One of the comments on Nielsen's entry asks whether new tools for writing have changed the way writers think. We might also ask whether new tools -- the ability to edit and rewrite so much more easily and with so much less technical effort -- have changed the product created by most writers. If not, could they?

New tools also change how we rewrite code. The refactoring browser has escaped the confines of the Smalltalk image and now graces IDEs for Java, C++, and C# programmers; indeed, refactoring tools exist for so many languages these days. Is that good or bad? Many of my colleagues lament that the ease of rewriting has led to an endemic sloppiness, to a rash of random programming in which students keep making seemingly random changes to their code until something compiles. Back in the good old days, we had to think hard about our code before we carved it into clay tablets... It seems clear to me that making rewriting and refactoring easier is a huge win, even as it changes how we need to teach and practice writing.

In retrospect, a lot of Nielsen's rules generalize to dicta we programmers will accept eagerly. Eliminate boilerplate. Write concise, focused code. Use strong, direct, and clear language. Certainly when we abstract the tasks to a certain level, writing and rewriting really are much the same in text and code.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

September 27, 2009 11:19 AM

History Mournful and Glorious

While prepping for my software engineering course last summer, I was re-reading some old articles by Philip Greenspun on teaching, especially an SE course focused on building on-line communities. One of the talks he gives is called Online Communities. This talk builds on the notion that "online communities are at the heart of most successful applications of the Internet". Writing in 2006, he cites amazon.com, AOL, and eBay as examples, and the three years since have only strengthened his case. MySpace seems to have passed its peak yet remains an active community. I sit here connected with friends from grade school who have been flocking to Facebook in droves, and Twitter is now one of my primary sources for links to valuable professional articles and commentary.

As a university professor, the next two bullets in his outline evoke both sadness and hope:

  • the mournful history of applying technology to education: amplifying existing teachers
  • the beauty of online communities: expanding the number of teachers

Perhaps we existing faculty are limited by our background, education, or circumstances. Perhaps we simply choose the more comfortable path of doing what has been done in the past. Even those of us invested in doing things differently sometimes feel like strangers in a strange land.

The great hope of the internet and the web is that it lets many people teach who otherwise wouldn't have a convenient way to reach a mass audience except by textbooks. This is a threat to existing institutions but also perhaps an open door on a better world for all of us.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

September 19, 2009 9:09 PM

Quick Hits with an Undercurrent of Change

Yesterday evening, in between volleyball games, I had a chance to do some reading. I marked several one-liners to blog on. I planned a disconnected list of short notes, but after I started writing I realized that they revolve around a common theme: change.

Over the last few months, Kent Beck has been blogging about his experiences creating a new product and trying to promote a new way to think about his design. In his most recent piece, Turning Skills into Money, he talks about how difficult it can be to create change in software service companies, because the economic model under which they operate actually encourages them to have a large cohort of relatively inexperienced and undertrained workers.

The best line on that page, though, is a much-tweeted line from a comment by Niklas Bjørnerstedt:

A good team can learn a new domain much faster than a bad one can learn good practices.

I can't help thinking about the change we would like to create in our students through our software engineering course. Skills and good practices matter. We cannot overemphasize the importance of proficiency, driven by curiosity and a desire to get better.

Then I ran across Jason Fried's The Next Generation Bends Over, a salty and angry lament about the sale of Mint to Intuit. My favorite line, with one symbolic editorial substitution:

Is that the best the next generation can do? Become part of the old generation? How about kicking the $%^& out of the old guys? What ever happened to that?

I experimented with Mint and liked it, though I never convinced myself to go all the way with it. I have tried Quicken, too. It seemed at the same time too little and too much for me, so I've been rolling my own. But I love the idea of Mint and hope to see the idea survive. As the industry leader, Intuit has the leverage to accelerate the change in how people manage their finances, compared to the smaller upstart it purchased.

For those of us who use these products and services, the nature of the risk has just changed. The risk with the small guy is that it might fold up before it spreads the change widely enough to take root. The risk with the big power is that it doesn't really get it and wastes an opportunity to create change (and wealth). I suspect that Intuit gets it, and so I hold out hope.

Still... I love the feistiness that Fried shows. People with big ideas need not settle. I've been trying to encourage the young people with whom I work, students and recent alumni, to shoot for the moon, whether in business or in grad school.

This story meshed nicely with Paul Graham's Post-Medium Publishing, in which Graham joins in the discussion of what it will be like for creators no longer constrained by the printed page and the firms that have controlled publication in the past. The money line was:

... the really interesting question is not what will happen to existing forms, but what new forms will appear.

Change will happen. It is natural that we all want to think about our esteemed institutions and what the change means for them. But the real excitement lies in what will grow up to replace them. That's where the wealth lies, too. That's true for every discipline that traffics in knowledge and ideas, including our universities.

Finally, Mark Guzdial ruminates on what changes CS education. He concludes:

My first pass analysis suggests that, to make change in CS, invent a language or tool at a well-known institution. Textbooks or curricula rarely make change, and it's really hard to get attention when you're not at a "name" institution.

I think I'll have more to say about this article later, but I certainly know what Mark must be feeling. In addition to his analysis of tools and textbooks and pedagogies, he has his own experience creating a new way to teach computing to non-majors and major alike. He and his team have developed a promising idea, built the infrastructure to support it, and run experiments to show how well it works. Yet... The CS ed world looks much like it always has, as people keep doing what they've always been doing, for as many reasons as you can imagine. And inertia works against even those with the advantages Mark enumerates. Education is a remarkably conservative place, even our universities.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Software Development, Teaching and Learning

September 09, 2009 10:04 PM

Reviewing a Career Studying Camouflage

[image: Camouflage Conference poster]

A few years ago I blogged when my university colleague Roy Behrens won a faculty excellence award in his home College of Humanities and Fine Arts. That entry, Teaching as Subversive Inactivity, taught me a lot about teaching, though I don't yet practice it very well. Later, I blogged about A Day with Camouflage Scholars, when I had the opportunity to talk about how a technique of computer science, steganography, related to the idea of camouflage as practiced in art and the military. Behrens is an internationally recognized expert on camouflage who organized an amazing one-day international conference on the subject here at my humble institution. To connect with these scholars, even for a day, was a great thrill. Finally, I blogged about Feats of Association when Behrens gave a mesmerizing talk illustrating "that the human mind is a connection-making machine, an almost unwilling creator of ideas that grow out of the stimuli it encounters."

As you can probably tell, I am a big fan of Behrens and his work. Today, I had a new chance to hear him speak, as he gave a talk associated with his winning another award, this time the university's Distinguished Scholar Award. After hearing this talk, no one could doubt that he is a worthy recipient, whose omnivorous and overarching interest in camouflage reflects a style of learning and investigation that we could all emulate. Today's talk was titled "Unearthing Art and Camouflage" and subtitled "my research on the fence between art and science". It is a fence that more of us should try to work on.

The talk wove together threads from Roy's study of the history and practice of camouflage with bits of his own autobiography. It's a style I enjoyed in Kurt Vonnegut's Palm Sunday and have appreciated at least since my freshman year in college, when in an honors colloquium at Ball State University I was exposed to the idea of history from the point of view of the individual. As someone who likes connections, I'm usually interested in how accomplished people come to do what they do and how they make the connections that end up shaping or even defining their work.

Behrens was in the first generation of his family to attend college. He came from a small Iowa town to study here at UNI, where he first did research in the basement of the same Rod Library where I get my millions. He held his first faculty position here, despite not having a Ph.D. or the terminal degree of his discipline, an M.F.A. After leaving UNI, he earned an M.A. from the Rhode Island School of Design. But with a little lucky timing and a publication record that merited consideration, he found his way into academia.

From where did his interest in camouflage come? He was never interested in the military, though he served as a sergeant in the Vietnam-era Marine Corps. His interest lay in art, but he didn't enjoy the sort of art in which subjective tastes and fashion drove practice and criticism. Instead, he was interested in what was "objective, universal, and enduring" and as such was drawn to design and architecture. He and I share an interest in the latter; I began my undergraduate study as an architecture major. A college professor offered him a chance to do undergraduate research, and his result was a paper titled "Perception in the Visual Arts", in which he first examined the relationship between the art we make and the science that studies how we perceive it. This paper was later published in a major art education journal.

That project marked his first foray into perceptual psychology. Behrens mentioned a particular book that made an impression on him, Aspects of Form, edited by Lancelot Law Whyte. It contained essays on the "primacy of pattern" by scholars in both the arts and the sciences. Readers of this blog know of my deep interest in patterns, especially in software but in all domains. (They also know that I'm a library junkie and won't be surprised to know that I've already borrowed a copy of Whyte's book.)

Behrens noted that it was a short step from "How do people see?" to "How are people prevented from seeing?" Thus began what has been forty years of research on camouflage. He studies not only the artistic side of camouflage but also its history and the science that seeks to understand it. I was surprised to find that as a RISD graduate student he already intended to write a book on the topic. At the time, he contacted Rudolf Arnheim, who was then a perceptual psychologist in New York, with a breathless request for information and guidance. Nothing came of that request, I think, but in 1990 or so Behrens began a fulfilling correspondence with Arnheim that lasted until his death in 2007. After Arnheim passed away, Behrens asked Arnheim's family to send all of his photos so that Behrens could make copies, digitize them, and then return the originals to the family. They agreed, and the result is a complete digital archive of photographs from Arnheim's long professional life. This reminded me of Grady Booch's interest in preservation, both of the works of Dijkstra and of the great software architectures of past and present.

While he was at RISD, Behrens did not know that the school library had 455 original "dazzle" camouflage designs in its collection and so missed out on the opportunity to study them. His ignorance of these works was not a matter of poor scholarship, though; the library didn't realize their significance and so had them uncataloged on a shelf somewhere. In 2007, his graduate alma mater contacted him with news of the items, and he has now begun to study them, forty years later.

As a grad student, Behrens became interested in the analogical link between (perceptual) figure-ground diagrams and (conceptual) Venn diagrams. He mentioned another book that helped him make this connection, Community and Privacy, by Serge Chermayeff and Christopher Alexander, whose diagrams of cities and relationships were Venn diagrams. This story brings to light yet another incidental connection between Behrens's work and mine. Alexander is, of course, the intellectual forebear of the software patterns movement, through his later books Notes On The Synthesis Of Form, The Timeless Way Of Building, A Pattern Language, and The Oregon Experiment.

UNI hired Behrens in 1972 into a temporary position that became permanent. He earned tenure and, fearing the lack of adventure that can come from settling down too soon, immediately left for the University of Wisconsin-Milwaukee. He worked there ten years and earned his tenure anew. It was at UW-M where he finally wrote the book he had begun planning in grad school. Looking back now, he is embarrassed by it and encouraged us not to read it!

At this point in the talk, Behrens told us a little about his area of scholarship. He opened with a meta-note about research in the era of the world wide web and Google. There are many classic papers that scholars should know about. Most of them are not yet on-line, but one can at least find annotated bibliographies and other references to them. He pointed us to one of his own works, Art and Camouflage: An Annotated Bibliography, as an example of what is now available to all on the web.

Awareness of a paper is crucial, because it turns out that often we can find it in print -- even in the periodical archives of our own libraries! These papers are treasures unexplored, waiting to be rediscovered by today's students and researchers.

Camouflage consists of two primary types. The first is high similarity, as typified by figure-ground blending in the arts and mimicry in nature. This is the best known type of camouflage and the type most commonly seen in popular culture.

The second is high difference, or what is often called figure disruption. This sort of camouflage was one of the important lessons of World War I. We can't make a ship invisible, because the background against which it is viewed changes constantly. A British artist named Norman Wilkinson had the insight to reframe the question: We are not trying to hide a ship; we are trying to prevent the ship from being hit by a torpedo!

(Redefining one problem in terms of another is a standard technique in computer science. I remember when I first encountered it as such, in a graduate course on computational theory. All I had to do was find a mapping from a problem to, say, 3-SAT, and -- voilà! -- I knew a lot about it. What a powerful idea.)
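
To make the idea concrete, here is a minimal sketch of one such mapping, reducing graph 3-coloring to SAT clauses. The encoding is the standard textbook one; the Python details are my own illustration, not anything from the talk or the course.

    # Encode graph 3-coloring as CNF clauses: variable var(v, c) is true
    # when vertex v receives color c. Variables are numbered 3*v + c + 1,
    # following the usual signed-integer convention of SAT solvers.

    def coloring_to_sat(num_vertices, edges):
        var = lambda v, c: 3 * v + c + 1
        clauses = []
        for v in range(num_vertices):
            # Each vertex gets at least one color...
            clauses.append([var(v, 0), var(v, 1), var(v, 2)])
            # ...and at most one.
            for c1 in range(3):
                for c2 in range(c1 + 1, 3):
                    clauses.append([-var(v, c1), -var(v, c2)])
        for (u, v) in edges:
            # Adjacent vertices never share a color.
            for c in range(3):
                clauses.append([-var(u, c), -var(v, c)])
        return clauses

    # A triangle is 3-colorable, so these clauses are satisfiable.
    print(coloring_to_sat(3, [(0, 1), (1, 2), (0, 2)]))

Once the mapping exists, everything we know about SAT -- solvers included -- applies to the original problem for free.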

Wilkinson's insight gave birth to dazzle camouflage, in which the goal came to be to break an image into incoherent or imperceptible parts. To protect a ship, the disruption need not be permanent; it needed only to slow the attackers sufficiently that they were unable to target it, predict its course, and launch a relatively slow torpedo at it with any success.

a Gaboon viper, which illustrates coincident disruption

Behrens offered that there is a third kind of camouflage, coincident disruption, that is different enough to warrant its own category. Coincident disruption mixes the other two types, both blending into the background and disrupting the viewer's perception. He suggested that this may well be the most common form of camouflage found in nature, using the Gaboon viper, pictured here, as one of his examples of natural coincident disruption.

Most of Behrens's work is on modern camouflage, in the 20th century, but study in the area goes back further. In particular, camouflage was discussed in connection with Darwin's idea of natural selection. Artist Abbott Thayer was a preeminent voice on camouflage in the 19th century who thought and wrote on both blending and disruption as forms in nature. Thayer also recommended that the military use both forms of camouflage in combat, a notion that generated great controversy.

In World War I, the French ultimately employed 3,000 artists as "camoufleurs". The British and Americans followed suit on a smaller scale. Behrens gave a detailed history of military camouflage, most of which was driven by artists and assisted by a smaller number of scientists. He finds World War II's contributions less interesting but is excited by recent work by biologists, especially in the UK, who have demonstrated renewed interest in natural camouflage. They are using empirical methods and computer modeling as ways to examine and evaluate Thayer's ideas from over a hundred years ago. Computational modeling in the arts and sciences -- who knew?

Toward the end of his talk, Behrens told several stories from the "academic twilight zone", where unexpected connections fall into the scholar's lap. He called these the "unsung delights of researching". These are stories best told first hand, but they involved a spooky occurrence of Shelbyville, Tennessee, on a pencil he bought for a quarter from a vending machine, having the niece and nephew of Abbott Thayer in attendance at a talk he gave in 1987, and buying a farm in Dysart, Iowa, in 1992 only then to learn that Everett Warner, whom he had studied, was born in Vinton, Iowa -- 14 miles away. In the course of studying a topic for forty years, the strangest of coincidences will occur. We see these patterns whether we like to or not.

Behrens's closing remarks included one note that highlights the changes in the world of academic scholarship that have occurred since he embarked on his study of camouflage forty years ago. He admitted that he is a big fan of Wikipedia and has been an active contributor on pages dealing with the people and topics of camouflage. Social media and web sites have fundamentally changed how we build and share knowledge, and increasingly they are being used to change how we do research itself -- consider the Open Science and Polymath projects.

Today's talk was, indeed, the highlight of my week. Not only did I learn more about Behrens and his work, but I also ended up with a couple of books to read (the aforementioned Whyte book and Kimon Nicolaïdes's The Natural Way to Draw), as well as a couple of ideas about what it would mean for software patterns to hide something. A good way to spend an hour.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

August 07, 2009 2:18 PM

A Loosely-Connected Friday Miscellany

An Addition to My News Aggregator

Thanks to John Cook, I came across the blog of Dan Meyer, a high school math teacher. Cook pointed to an entry with a video of Meyer speaking pecha kucha-style at OSCON. One of the important messages for teachers conveyed in these five minutes is "Be less helpful." Learning happens more often when people think and do than when they follow orders in a well-defined script.

While browsing his archive I came across this personal revelation about the value of the time he was spending on his class outside of the business day:

I realize now that the return on that investment of thirty minutes of my personal time isn't the promise of more personal time later. ... Rather it's the promise of easier and more satisfying work time now.

Time saved later is a bonus. If you depend on that return, you will often be disappointed, and that feeds the emotional grind that is teaching. Kinda like running in the middle. I think it also applies more than we first realize to reuse and development speed in software.

Learning and Doing

One of the underlying themes in Meyer's writing seems to be the same idea as in this line from Gerd Binnig, which I found at Physics Quote of Day:

Doing physics is much more enjoyable than just learning it. Maybe 'doing it' is the right way of learning ....

Programming can be a lot more fun than learning to program, at least the way we often try to teach it. I'm glad that so many people are working on ways to teach it better. In one sense, the path to better seems clear.

Knowing and Doing

One of the reasons I named my blog "Knowing and Doing" was that I wanted to explore the connection between learning, knowing, and doing. Having committed to that name so many years ago, I decided to stake its claim at Posterous, which I learned about via Jake Good. Given some technical issues with using NanoBlogger, at least an old version of it, I've again been giving some thought to upgrading or changing platforms. Like Jake, I'm always tempted to roll my own, but...

I don't know if I'll do much or anything more with Knowing and Doing at Posterous, but it's there if I decide that it looks promising.

A Poignant Convergence

Finally, a little levity laced with truth. Several people have written to say they liked the name of my recent entry, Sometimes, Students Have an Itch to Scratch. On a whim, I typed it into Translation Party, which alternately translates a phrase from English into Japanese and back until it reaches equilibrium. In only six steps, my catchphrase settles onto:

Sometimes I fear for the students.

Knowing how few students will try to scratch their own itches with their new-found power as programmers, and how few of them will be given a chance to do so in their courses on the way to learning something valuable, I chuckled. Then I took a few moments to mourn.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Software Development, Teaching and Learning

August 06, 2009 11:59 AM

Woody Allen Is On Line 1

An excerpt from an interview at Gaping Void:

Some days, the work is tedious, labour-intensive and as repetitive as a production line in a factory. ... The key is having a good infrastructure. ...

But none of it works without discipline. Early on in my career, I was told that success demanded one thing above all others: turning up. Turning up every bloody day, regardless of everything.

This was said by artist Hazel Dooney, but it could just as well have been said by a programmer -- or a university professor. One thing I love about the agile software world is its willingness to build new kinds of tools to support the work of programmers. Isn't it ironic? Valuing people over tools makes having the right tools even more important.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

August 01, 2009 7:09 AM

Casting Shadows

I have been reading David Ogilvy's Confessions of an Advertising Man and finding it quite enjoyable. It is a slim volume, written charmingly in a style we don't see much anymore. It is about not only advertising but also leading a team, creating and guiding an organization, and running a business. There are elements of all these in my job as department head, and even as a faculty member. Many of Ogilvy's lessons won't surprise you; he recommends the old-fashioned virtues. Hard work. Quality. Fairness. Honesty. Integrity. High standards. Candor.

Ogilvy describes how to build and run a great agency, but at heart he is a great believer in the individual, especially when it comes to creative acts:

Some agencies pander to the craze for doing everything in committee. They boast about "teamwork" and decry the role of the individual. But no team can write an advertisement, and I doubt whether there is a single agency of any consequence which is not the lengthened shadow of one man.

I sometimes wonder whether greatness can be achieved by a group of competent or even above-average individuals, or if an outstanding individual is an essential ingredient. In an advertising agency, there are the somewhat distinct acts of creating campaigns and running the agency. Perhaps the latter is more amenable to being led by a team. But even when it comes to great works, I am aware that teams have produced excellent software. How much of that success can be attributed to the vision and leadership of one individual on the team, I don't know.

As I mentioned at the top of a recent entry, a university task force I chaired submitted its final report at the beginning of July. After working so long with this group, I am feeling a little seller's remorse. Did we do a good enough job? If acted upon, will our recommendations effect valuable change? Can they be acted upon effectively at a time of budget uncertainties? The report we wrote does not advocate revolutionary change, at least not on the surface. It is more about creating structures and practices that will support building trust and communication. In a community that has drifted in recent years and not always had visionary leadership, these are prerequisites to revolutionary changes. Still, I am left wondering what we might have done more or differently.

The report is most definitely the product of a committee. I suspect that several of the individuals in the group might well have been able to produce something as good or better by working solo, certainly something with sharper edges and sharper potential -- at higher risk. Others in the group could not have done so, but that was not the nature of their roles. In the end, the committee rounded off the sharp edges, searched for and found common ground. The result is not a least common denominator, but it is no longer revolutionary. If a revolutionary result is what you need, a committee is not your best agent.

Part of my own remorse comes back to Ogilvy's claim. Could I have led the group better? Could I have provided a higher vision and led the group to produce a more remarkable set of recommendations? Did I cast a long enough shadow?

~~~~

The shadows of summer are lengthening. One of the reasons that I have always liked living in the American Midwest is the changing of the seasons. Here we have not four seasons but eight, really, for each blending -- summer into autumn, winter into spring -- has its own character. Academia, too, has its seasons, and they are part of what attracted me to professional life at a university. From the outside looking in, working in industry looked like it could become monotonous. But the university offers two semesters and a summer, each with a brand new start, a natural life, and a glorious end. Whatever monotony we experience happens at a much larger time scale, as these seasons come and go over the years.

Like the weather, academia has more than the obvious seasons we name. We have the break between semesters over Christmas and New Year's Day, a short period of change. When school ends in May, it is like the end of the year, and we have a period of changing over to a summer of activity that is for many of us much different than the academic year. And finally, we have the transition between summer and the new academic year. For me, that season begins about now, on the first day of August, as thoughts turn more and more to the upcoming year, the preparation of classes, and the return of students. This is a change that injects a lot of energy into our world and saves us from any monotony we might begin to feel.

So, as the long shadows of summer begin to fall, we prepare for the light of a new year. What sort of shadow will I cast?


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

July 14, 2009 1:06 PM

Is He Talking About News, or Classroom Content?

Seth Godin says:

People will not pay for by-the-book rewrites of news that belongs to all of us. People will not pay for yesterday's news, driven to our house, delivered a day late, static, without connection or comments or relevance. Why should we?

Universities may not be subject to the same threats as newspapers, due in some measure to

  • their ability to aggregate intellectual capital and research capacity,
  • their privileged status in so many disciplines as the granters of required credentials, and
  • frankly, the lack of maturity, initiative, and discipline of their primary clientele.

But Godin's quote ought to cause a few university professors considerable uneasiness. In the many years since I began attending college as an undergrad, I have seen courses at every level and at every stop that fall under the terms of this rebuke.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 14, 2009 10:28 AM

They Say These Things Come in Threes...

After writing that two UNI CS grads had recently defended their doctoral dissertations, I heard about the possibility of a third. Turns out it was more than a possibility... Last Friday, Chris Johnson defended and has since submitted the final version of his dissertation to the University of Tennessee. His work is in the area of scientific visualization, with a focus on computation-intensive simulations. For the last few years, Chris has been working out of Ames, Iowa, and we may be lucky enough to have him remain close by.

The summer bonanza grows. Congratulations, Chris!


Posted by Eugene Wallingford | Permalink | Categories: General

July 11, 2009 10:57 AM

Former Students Crossing the Divide

What a summer for UNI CS alumni in academia! In the last few weeks, Andrew Drenner and Ryan Dixon both defended their Ph.D. dissertations, at the University of Minnesota and UC-Santa Barbara, respectively. Andrew is currently working with a robotics start-up spun off from his research lab, and Ryan is enjoying a short break before starting full-time at Apple next month.

I had the great fortune to work with Andrew and Ryan throughout their undergraduate years, in several courses and projects each. Some students are different from the rest, and these guys distinguished themselves immediately not only by their creativity and diligence but also by their tremendous curiosity. When a person with deep curiosity also has the desire to work hard to find answers, stand back. It is neat to see them both expanding what we know about topics they were working on as undergrads. Indeed, Ryan's project at Apple is very much in the spirit of his undergrad research project, which I was honored to supervise.

Congratulations, gentlemen! Many of your friends and family may think that this means you are no longer students. But you are really joining a new fraternity of students. We are honored to have you among us.


Posted by Eugene Wallingford | Permalink | Categories: General

July 09, 2009 3:52 PM

Five Years On

Sketchbooks are not about being a good artist,
they're about being a good thinker.
-- Jason Santa Maria

Five years ago today, I started this blog as a sort of sketchbook for words and ideas. I didn't know just what to expect, so I'm not surprised that it hasn't turned out as I might have guessed. Thinking out loud and writing things down can be like that. Trying to explain to myself and anyone who would listen what was happening as I lived life as a computer scientist and teacher has been a lot of fun.

a shot of me running the Chicago marathon

In the beginning, I was preparing to teach a course on agile software development and planning to run my second marathon. These topics grew together in my mind, almost symbiotically, and the result was a lot of connections. The connections were made firmer by writing about them. They also gave me my first steady readership, as occasionally someone would share a link with a friend.

Things have changed since 2004. Blogging was an established practice in a certain core demographic but just ready to break out among the masses. Now, many of the bloggers whose work I cherished reading back then don't write as much as they used to. Newer tools such as Twitter give people a way to share links and aphorisms, and many people seem to live in the Twittersphere now. Fortunately, a lot of people still take the time to share their ideas in longer form.

Even though I go through stretches where I don't write much, my blog has become an almost essential element of how I go about life now. Yesterday's entry is a great example of me writing to synthesize experience in a way I might not otherwise. I had a few thoughts running around my head. They were mostly unrelated but somehow... they wanted to be connected. So I started writing, and ended up somewhere I may not have taken the time to go if I hadn't had to write complete sentences and say things in a way my friends would understand. That is good for me. For you readers? I hope so. A few of you keep coming back.

Five years down the road, I am no longer surprised by how computer science, writing, and running flow together. First, they are all a part of who I am right now, and our minds love to make connections. But then there is something common to all activities that challenge us. With the right spirit, we find that they drive us to seek excellence, and the pursuit of excellence -- whether we are Roger Federer, reaching the highest of heights, or Lance Armstrong, striving to reach those heights yet again, or just a simple CS professor trying to reach his own local max -- is a singular experience.

Last week, I ran across a quote from Natalie Goldberg on Scott Smith's blog. I first mentioned Goldberg during my first month as a blogger. This quote ties running to writing to habit:

If you run regularly, you train your mind to cut through or ignore your resistance. You just do it. And in the middle of the run, you love it. When you come to the end, you never want to stop. And you stop, hungry for the next time. That's how writing is, too.

The more I blog, the more I want to write. And, in the face of some troubles over the last year, I wake up hungry to run.

A few years ago, I read a passage from Brian Marick that I tucked away for July 9, 2009:

I've often said that I dread the day when I look back on the me of five years ago without finding his naivete and misconceptions faintly ridiculous. When that day comes, I'll know I've become an impediment to progress.

a forest

Just last month, Brian quick-blogged on the same theme: continuing to grow enough that the me of five years ago looks naive, or stepping away from the stage. After five years blogging, my feeling on this is mixed. I look back and see some naivete, yes, but I often see some great stuff. "I thought that?" Sometimes I'm disappointed that a great idea from back then hasn't become more ingrained in my practices of today, but then I remember that it's a lot easier to think an idea than to live it. I do see progress, though. I also see new themes emerging in my thoughts and writing, which is a different sort of progress altogether.

I do take seriously that you are reading this and that you may even make an effort to come back to read more later. I am privileged to have had so many interactions with readers over these five years. Even when you don't send comments and links, I know you are there, spending a little of your precious time here.

So I think I'll stay on this stage a while longer. I am just a guy trying to evolve, and writing helps me along the way.


Posted by Eugene Wallingford | Permalink | Categories: General

July 06, 2009 3:26 PM

Cleaning Data Off My Desk

As I mentioned last time, this week I am getting back to some regular work after mostly wrapping up a big project, including cleaning off my desk. It is cluttered with a lot of loose paper that the Digital Age had promised to eliminate. Some is my own fault, paper copies of notes and agendas I should probably find a way not to print. Old habits die hard.

But I also have a lot of paper sent to me as department head. Print-outs -- old-style print-outs from a mainframe. The only thing missing from a 1980s flashback is the green bar paper.

Some of these print-outs are actually quite interesting. One set is of grade distribution reports produced by the registrar's office, which show how many students earned As, Bs, and so on in each course we offered this spring and for each instructor who taught a course in our department. This sort of data can be used to understand enrollment figures and maybe even performance in later courses. Some upper administrators have suggested using this data in anonymous form as a subtle form of peer pressure, so that profs who are outliers within a course might self-correct their own distributions. I'm not ready to think about going there yet, but the raw data seems useful, and interesting in its own right.

I might want to do more with the data. This is the first time I recall receiving this, but in the fall it would be interesting to cross-reference the grade distributions by course and instructor. Do the students who start intro CS in the fall tend to earn different grades than those who start in the spring? Are there trends we can see over falls, springs, or whole years? My colleagues and I have sometimes wondered aloud about such things, but having a concrete example of the data in hand has opened new possibilities in my mind. (A typical user am I...)

As a programmer, I have the ability to do such analyses with relatively straightforward scripts, but I can't. The data is closed. I don't receive actual data from the registrar's office; I receive a print-out of one view of the data, determined by people in that office. Sadly, this data is mostly closed even to them, because they are working with an ancient mainframe database system for which there is no support and a diminishing amount of corporate memory here on campus. The university is in the process of implementing a new student information system, which should help solve some of these problems. I don't imagine that people across campus will have much access to this data, though. That's not the usual M.O. for universities.
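
If the raw data ever does arrive as data, the script really would be short. Here is the sort of thing I have in mind -- purely hypothetical, down to the file name and column names, since all I have to go on is print-outs:

    import pandas as pd

    # Assume a CSV export with one row per (term, course, instructor, grade).
    grades = pd.read_csv("grades.csv")   # columns: term, course, instructor, grade, count

    # Do fall and spring sections of intro CS show different distributions?
    intro = grades[grades["course"] == "CS 1"]
    print(intro.pivot_table(index="grade", columns="term",
                            values="count", aggfunc="sum"))

    # Which instructors are outliers in the share of As they award?
    total = intro.groupby("instructor")["count"].sum()
    as_only = intro[intro["grade"] == "A"].groupby("instructor")["count"].sum()
    print((as_only / total).sort_values())

A dozen lines, most of them bookkeeping. The hard part is not the analysis; it is getting the data.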

Course enrollment and grade data aren't the only data sets we could benefit from opening up a bit. As a part of the big project I just wrapped up, the task force I was on collected a massive amount of data about expenditures on campus. This data is accessible to many administrators on campus, but only through a web interface that constrains interaction pretty tightly. Now that we have collected the data, processed almost all of it by hand (the roughness of the data made automated processing an unattractive alternative), and tabulated it for analysis, we are starting to receive requests for our spreadsheets from others on campus. These folks all have access to the data, just not in the cleaned-up, organized format into which we massaged it. I expressed frustration with our financial system in a mini-rant a few years ago, and other users feel similar limitations.

For me, having enrollment and grade data would be so cool. We could convert data into information that we could then use to inform scheduling, teaching assignments, and the like. Universities are inherently information-based institutions, but we don't always put our own understanding of the world into practice very well. Constrained resources and intellectual inertia slow us down or stop us altogether.

Hence my wistful hope while reading Tim Bray's "Hello-World" for Open Data. Vancouver has a great idea:

  • Publish the data in a usable form.
  • License it in a way that turns people loose to do whatever they want, but doesn't create unreasonable liability risk for the city.
  • See what happens. ...

Would anyone on campus take advantage? Maybe, maybe not. I can imagine some interesting mash-ups using only university data, let alone linking to external data. But this isn't likely to happen. GPA data and instructor data are closely guarded by departments and instructors, and throwing light on them would upset enough people that any benefits would probably be shouted down. But perhaps some subset of the data the university maintains, suitably anonymized, could be opened up. If nothing else, transparency sometimes helps to promote trust.

I should probably do this myself, at the department level, with data related to schedule, budget, and so on. I occasionally share the spreadsheets I build with the faculty, so they can see the information I use to make decisions. This spring, we even discussed opening up the historic formula used in the department to allocate our version of merit pay.

(What a system that is -- so complicated that I've feared making more than small editorial changes to it in my time as head. I keep hoping to find the time and energy to build something meaningful from scratch, but that never happens. And it turns out that most faculty are happy with what we have now, perhaps for "the devil you know" reasons.)

I doubt even the CS faculty in my department would care to have open data of this form. We are a small crew, and they are busy with the business of teaching and research. It is my job to serve them by taking as much of this thinking as I can out of their way. Then again, who knows for sure until we try? If the cost of sharing can be made low enough, I'll have no reason not to share. Whether anyone uses the data might not even be the real point. Habits change when we change them, when we take the time to create new ones to replace the old ones. This would be a good habit for me to have.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Managing and Leading

June 26, 2009 4:01 PM

The Why of X

Where did the title of my previous entry come from? Two more quick hits tell a story.

Factoid of the Day

On a walk the other night, my daughter asked why we called variables x. She is reviewing some math this summer in preparation to study algebra this fall. All I could say was, "I don't know."

Before I had a chance to look into the reason, one explanation fell into my lap. I was reading an article called The Shakespeare of Iran, which I ran across in a tweet somewhere. And there was an answer: the great Omar Khayyam.

Omar was the first Persian mathematician to call the unknown factor of an equation (i.e., the x) shiy (meaning thing or something in Arabic). This word was transliterated to Spanish during the Middle Ages as xay, and, from there, it became popular among European mathematicians to call the unknown factor either xay, or more usually by its abbreviated form, x, which is the reason that unknown factors are usually represented by an x.

However, I can't confirm that Khayyam was first. Wikipedia and another source both report the Arabic language connection, and the latter mentions Khayyam, but not specifically as the source. That author also notes that "xenos" is the Greek word for "unknown" and so could be the root. But I haven't found a reference for this use of x that predates Khayyam, either. So maybe.

My daughter and I ended up with as much of a history lesson as a mathematical terminology lesson. I like that.

Quote of the Day

Yesterday afternoon, the same daughter was listening in on a conversation between me and a colleague about doing math and science, teaching math and science, and how poorly we do it. After we mentioned K-12 education and how students learn to think of science and math as "hard" and "for the brains", she joined the conversation with:

Don't ask teachers, 'Why?' They don't know, and they act like it's not important.

I was floored.

She is right, of course. Even our elementary school children notice this phenomenon, drawing on their own experiences with teachers who diminish or dismiss the very questions we want our children to ask. Why? is the question that makes science and math what they are.

Maybe the teacher knows the answer and doesn't want to take the time to answer it. Maybe she knows the answer but doesn't know how to answer it in a way that a 4th- or 6th- or 8th-grader can understand. Maybe he really doesn't know the answer -- a condition I fear happens all too often. No matter; the damage is done when the teacher doesn't answer, and the child figures the teacher doesn't know. Science and math are so hard that the teacher doesn't get it either! Better move on to something else. Sigh.

This problem doesn't occur only in elementary school or high school. How often do college professors send the same signal? And how often do college professors not know why?

Sometimes, truth hits me in the face when I least expect it. My daughters keep on teaching me.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

June 25, 2009 9:48 PM

X of the Day

Quick hits, for different values of x, of course, but also different values of "the day" I encountered them. I'm slow, and busier than I'd like.

Tweet of the Day

Courtesy of Glenn Vanderburg:

Poor programmers will move heaven and earth to do the wrong thing. Weak tools can't limit the damage they'll do.

Vanderburg is likely talking about professional programmers. I have experienced this truth when working with students. At first, it surprised me when students learning OOP would contort their code into the strangest configurations not to use the OO techniques they were learning. Why use a class? A fifty- or hundred-line method will do nicely.

Then, students learning functional programming would seek out arcane language features and workarounds found on the Internet to avoid trying out the functional patterns they had used in class. What could have been ten lines of transparent Scheme code in two mutually recursive functions became fifteen or more of the most painfully tortured C code wrapped in a thin veil of Scheme.
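
For readers who have not seen the shape, here is a small version of it -- in Python rather than Scheme, and on a toy problem of my own choosing, but with the same structure of two functions that hand the work back and forth:

    def is_even(n):
        # A number is even if it is 0, or if its predecessor is odd.
        return True if n == 0 else is_odd(n - 1)

    def is_odd(n):
        # A number is odd if it is not 0 and its predecessor is even.
        return False if n == 0 else is_even(n - 1)

    print(is_even(10), is_odd(7))   # True True

Each function is transparent on its own; together they say exactly what evenness and oddness mean.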

I've seen this phenomenon in other contexts, too, like when students take an elective course called Agile Software Development and go out of their way to do "the wrong thing". Why bother with those unit tests? We don't really need to try pair programming, do we? Refactor -- what's that?

This feature of programmers and learners has made me think harder about how to help them see the value in just trying the techniques they are supposed to learn. I don't succeed as often as I'd like.

Comic of the Day

Hammock dwellers, unite!

2009-06-23 Wizard of Id on professors

If only. If only. When does summer break start?


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

June 24, 2009 8:13 AM

Brains, Patterns, and Persistence

I like to solve the Celebrity Cipher in my daily paper. Each puzzle is a mixed alphabet substitution cipher on a quote by someone -- a "celebrity", loosely considered -- followed by the speaker's name, sometimes prefixed with a title or short description. Lately I've been challenging myself to solve the puzzle in my head, without writing any letters down, even once I'm sure of them. Crazy, I know, but this makes the easier puzzles more challenging now that I have gotten pretty good at solving them with pen in hand.

(Spoiler alert... If you like to do this puzzle, too, and have not yet solved the June 22 cipher, turn away now. I am about to give the answer away!)

Yesterday I was working on a puzzle, and this was the speaker phrase:

IWHNN TOXFZRXNYHO NXKJHSSA YXOYXEBUHO

I had looked at the quote itself for a couple of minutes and so was operating on an initial hypothesis that YWH was the word the. I stared at the speaker for a while... IWHNN would be IheNN. Double letters to end the third word, which is probably the first name. N could be s, or maybe l. s... That would be the first letter of the first name.

And then I saw it, in whole cloth:

Chess grandmaster Savielly Tartakower

Please don't think less of me. I'm not a freak. Really.

a picture of Savielly Tartakower

How very strange. I have no special mental powers. I do have some experience solving these puzzles, of course, but this phrase is unusual both in the prefix phrase and in the obscurity of the speaker. Yes, I once played a lot of chess and did know of Tartakower, a French-Polish player of the early 20th century. But how did I see this answer?

The human brain amazes me almost every day with its ability to find, recognize, and impose patterns on the world. Practice and exposure to lots and lots of data is one of the ways it learns these patterns. That is part of how I am able to solve these ciphers most days -- experience makes patterns appear to me, unbidden by conscious thought. There may be other paths to mastery, but I know of no other reliable substitute for practice.
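
A program can mimic one small piece of this pattern recognition. Words that could decrypt to one another share a "letter pattern": map each letter to the position of its first occurrence, and IWHNN and CHESS collapse to the same signature. A sketch, with a made-up word list standing in for a real dictionary:

    def pattern(word):
        # IWHNN -> (0, 1, 2, 3, 3), and so does CHESS.
        first_seen = {}
        return tuple(first_seen.setdefault(ch, len(first_seen)) for ch in word)

    words = ["CHESS", "QUEEN", "GRAND", "ABBEY"]
    print([w for w in words if pattern(w) == pattern("IWHNN")])   # ['CHESS']

Run against a real dictionary, a filter like this cuts the candidates for each cipher word down to a handful. The brain seems to do something similar, only without being asked.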

What about the rest of the puzzle? From the letter pairs in the speaker phrase, I was able to reconstruct the quote itself with little effort:

Victory goes to the player who makes the next-to-last mistake.

Ah, an old familiar line. If we follow this quote to its logical conclusion, it offers good advice for much of life. You never know which mistake will be the next-to-last, or the last. Keep playing to win. If you learn from your mistakes, you'll start to make fewer, which increases the probability that your opponent will make the last mistake of the game.

Even in non-adversarial situations, or situations in which there is no obvious single adversary, this is a good mindset to have. People who embrace failure persist. They get better, but perhaps more importantly they simply survive. You have to be in the game when your opportunity comes -- or when your opponent makes the ultimate mistake.

Like so many great lines, Tartakower's is not 100% accurate in all cases. As an accomplished chessplayer, he certainly knew that the best players can lose without ever making an obvious mistake. Some of my favorite games of all time are analyzed in My Sixty Memorable Games, by Bobby Fischer himself. It includes games in which the conquered player never made the move that lost. Instead, the loser accreted small disadvantages, or drifted off theme, and suddenly the position was unfavorable. But looking back, Fischer could find no obvious improvement. Growing up, this fascinated me -- the loser had to make a mistake, right? The winner had to make a killer move... Perhaps not.

Even still, the spirit of Tartakower's advice holds. Play in this moment. You never know which mistake will be the next-to-last, or the last. Keep playing.

At this time of year, when I look back over the past twelve months of performing tasks that do not come naturally to me, and looking ahead to next year's vision and duties, this advice gives me comfort.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 13, 2009 7:16 PM

Agile Moments While Reading the Newspaper

The first: Our local paper carries a parenting advice column by John Rosemond, an advocate of traditional parenting. In Wednesday's column, a parent asked how to handle a child who refuses to eat his dinner. Rosemond responded that the parents should calmly, firmly, and persistently expect the child to eat the meal -- even if it meant that the child went hungry that night by refusing.

[Little Johnny] will survive this ordeal -- it may take several weeks from start to finish -- with significantly lower self-esteem and a significantly more liberal palate, meaning that he will be a much happier child.

If you know Rosemond, you'll recognize this advice.

I couldn't help thinking about what happens when we adults learn a new programming style (object-oriented or functional programming), a new programming technique (test-driven development, pair programming), or even a new tool that changes our work flow (say, SVN or JUnit). Calm, firm, persistent self-discipline or coaching is often the path to success. In many ways, Rosemond's advice works more easily with 3- or 5-year-olds than with college students or adults, because the adults have the option of leaving the room. Then again, the coach or teacher has less motivation to ensure the change sticks -- that's up to the learner.

I also couldn't help thinking how often college students and adults behave like 3- and 5-year-olds.

The second: Our paper also carries a medical advice column by a Dr. Gott, an older doctor who harkens back to an older day of doctor-patient relations. (There is a pattern here.) In Wednesday's column, the good doctor said about a particular diagnosis:

There is no laboratory or X-ray test to confirm or rule out the condition.

My first thought was, well, then how do we know it exists at all? This is a natural reaction for a scientist -- or pragmatist -- to have. I think this means that we don't currently have a laboratory or X-ray test for the presence or absence of this condition. Or there may be another kind of test that will tell us whether the condition exists, such as a stress test or an MRI.

Without any test, how can we know that something is? We may find out after it kills the host -- but then we would need a post-mortem test. While the patient lives, there could be a treatment regimen that works reliably in the face of the symptoms. This could provide the evidence we need to say that a particular something was present. But if the treatment fails, can we rule out the condition? Not usually, because there are other reasons a treatment can fail.

We face a similar situation in software with bugs. When we can't reproduce a bug, at least not reliably, we have a hard time fixing it. Whether we know the problem exists depends on which side of the software we live on... If I am the user who encounters the problem, I know it exists. If I'm the developer, then maybe I don't. It's easy for me as developer to assume that there is something wrong with the user, not my lovingly handcrafted code. When the program involves threading or a complex web of interactions among several systems, we are more inclined to recognize that a problem exists -- but which problem? And where? Oh, to have a test... I can only think of two software examples of reliable treatment regimens that may tell us something was wrong: rebooting the machine and reinstalling the program. (Hey to Microsoft.) But those are such heavy-handed treatments that they can't give us much evidence about a specific bug.

There is, of course, the old saying of TDD wags: Code without a test doesn't exist. Scoff at that if you want, but it is a very nice guideline to live by.
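
In that spirit, the smallest move that makes a reported bug exist is a test that reproduces it. A contrived example -- the function and its off-by-one are invented for illustration:

    def pages_needed(items, per_page):
        return items // per_page          # bug: drops the partial last page

    def test_partial_last_page_counts():
        assert pages_needed(11, 10) == 2  # fails until the bug is fixed

Before the test, the bug is a rumor from a user. After the test, it exists: we can watch it fail, fix it -- (items + per_page - 1) // per_page -- and know it stays fixed.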

To close, here are my favorite new phrases from stuff I've been reading:

Expect to see these jewels used in an article sometime soon.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

June 11, 2009 8:24 PM

Revolution Out There -- and Maybe In Here

(Warning: This is longer than my usual entry.)

In recent weeks I have found myself reading with a perverse fascination some of the abundant articles about the future of newspapers and journalism. Clay Shirky's Newspapers and Thinking the Unthinkable has deservedly been mentioned in most of them. His essay reminds us, among other things, that revolutions change the rules that define our world. This means that living through a revolution is uncomfortable for most people -- and dangerous to the people most invested in the old order. The ultimate source of the peril is lack of imagination; we are so defined by the rules that we forget they are not universal laws but human constructs.

I'm not usually the sort of person attracted to train wrecks, but that's how I feel about the quandary facing the newspaper industry. Many people in and out of the industry like to blame the internet and web for the problem, but it is more complicated than that. Yes, the explosion of information technology has played a role in creating difficulties for traditional media, but as much as it causes the problems, I think it exposes problems that were already there. Newspapers battle forces from all sides, not the least of which is the decline -- or death? -- of advertising, which may soon be known as a phenomenon most peculiar to the 20th century. The web has helped expose this problem, with metrics that show just how little web ads affect reader behavior. It has also simply given people alternatives to media that were already fading. Newspapers aren't alone.

This afternoon, I read Xark's The Newspaper Suicide Pact and was finally struck by another perverse thought, a fear because it hits closer to my home. What if universities are next? Are we already in a decline that will become apparent only later to those of us who are on the inside?

Indications of the danger are all around. As in the newspaper industry, money is at the root of many problems. The cost of tuition has been rising much faster than inflation for a quarter of a century. At my university, it has more than doubled in the 2000s. Our costs, many self-imposed, rise at the same time that state funding for its universities falls. For many years, students offset the gap by borrowing the difference. This solution is bumping into a new reality now, with the pool of money available for student loans shrinking and the precipitous decline in housing equity for many eroding borrowing ability. Some may see this as a good thing, as our students have seen a rapid growth in indebtedness at graduation, outpacing salaries in even the best-paying fields. Last week, many people around here were agog at a report that my state's university grads incur more student loan debt than any other state's. (We're #1!)

Like newspapers, universities now operate in a world where plentiful information is available on-line. Sometimes it is free, and other times it is much less expensive than the cost of taking a course on the subject. Literate, disciplined people can create a decent education for themselves on-line. Perhaps universities serve primarily the middle and lower tier of students, who haven't the initiative or discipline to do it on their own?

I have no numbers to support these rash thoughts, though journalists and others in the newspaper industry do have ample evidence for fear. University enrollments depend mostly on the demographics of their main audience: population growth, economics, and culture. Students also come for a social purpose. But I think the main driver for many students to matriculate is industry's de facto use of the college degree as the entry credential to the workplace. In times of alternatives and tight money, universities benefit from industry's having outsourced the credentialing function to them.

The university's situation resembles the newspaper's in other ways, too. We offer a similar defense of why the world needs us: in addition to creating knowledge, we sort it, we package it for presentation, and we validate its authenticity and authority. If students start educating themselves using resources freely or cheaply available outside the university, how will we know that they are learning the right stuff? Don't get most academics started on the topic of for-profits like Kaplan University and the University of Phoenix; they are the university's whipping boy. The news industry has one, too: bloggers.

Newspaper publishers talk a lot these days about requiring readers to pay for content. In a certain sense, that is what students do: pay universities for content. Now, though, the web gives everyone access to on-line lectures, open-source lecture notes, the full text of books, technical articles, and ... the list goes on. Why should they pay?

Too many publishers argue that their content is better, more professional, and so stand behind "the reasonable idea that people should have to pay for the professionally produced content they consume". Shirky calls this a "post-rational demand", one that asks readers to behave in a way "intended to restore media companies to the profitability ordained to them by God Almighty" -- despite living in a world where such behaviors are as foreign as living in log cabins and riding horses for transportation. Is the university's self-justification as irrational? Is it becoming more irrational every year?

Some newspapers decide to charge for content as a way to prop up their traditional revenue stream, print subscriptions. Evidence suggests that this not only doesn't work (people inclined to drop their print subscriptions won't be deterred by pay walls) but that it is counter-productive: the loss of on-line visitors causes a decline in web advertising revenue that is much greater than the on-line reader revenue earned. Again, this is pure speculation, but I suspect that if universities try to charge for their on-line content they will see similar results.

The right reason to charge for on-line content is to create a new revenue stream, one that couldn't exist in the realm of print. This is where creative thinking will help to build an economically viable "new media". This is likely the right path for universities, too. My oldest but often most creative-thinking colleague has been suggesting this as a path for my school to consider for a few years. My department is working on one niche offering now: on-line courses aimed at a specific audience that might well take them elsewhere if we don't offer them, and who then have a smoother transition into full university admission later. We have other possibilities in mind, in particular as part of a graduate program that already attracts a large number of people who work full time in other cities.

But then again, there are schools like Harvard, MIT, and Stanford with open course initiatives, placing material on-line for free. How can a mid-sized, non-research public university compete with that content, in that market? How will such schools even maintain their traditional revenue streams if costs continue to rise and high quality on-line material is readily available?

In the middle of a revolution, no one knows the right answers, and there is great value in trying different ideas. Most any school can start with the obvious: lectures on-line, increased use of collaboration tools such as wikis and chats and blogs -- and Twitter and Facebook, and whatever comes next. These tools help us to connect with students, to make knowledge real, to participate in the learning. Some of the obvious paths may be part of the solution. Perhaps all of them are wrong. But as Shirky and others tell us, we need to try all sorts of experiments until we find the right solution. We are not likely to find it by looking at what we have always done. The rules are changing. The reactions of many in the academy tell a sad story. They are dismissive, or simply uninterested. That sounds a lot like the newspapers, too. Maybe people are simply scared and so hole up in a bunker constructed out of comfortable experience.

Like newspapers, some institutions of higher education are positioned to survive a revolution. Small, focused liberal arts colleges and technical universities cater to specific audiences with specific curricula. Of course, the "unique nationals" (schools such as Harvard, MIT, and Stanford) and public research universities with national brands (schools such as Cal-Berkeley and Michigan) sit well. Other research schools do, too, because their mission goes beyond the teaching of undergraduates. Then again, many of those schools are built on an economic model that some academics think is untenable in the long run. (I wrote about that article last month, in another context.)

The schools most in danger are the middle tier of so-called teaching universities and low-grade research schools. How will they compete with the surviving traditional powers or the wealth of information and knowledge available on-line? This is one reason I embrace our president's goal of going from good to great -- focusing our major efforts on a few things that we do really well, perhaps better than anyone, nurturing those areas with resources and attention, and then building our institution's mission and strategy around this powerful core. There is no guarantee that this approach will succeed, but it is perhaps the only path that offers a reasonable chance to schools like ours. We do have one competitive advantage over many of our competitors: enough research and size to offer students a rich learning environment and a wide range of courses of study, but small enough to offer a personal touch otherwise available only at much smaller schools. This is the same major asset that schools like us have always had. When we find ourselves competing in a new arena and under different conditions, this asset must manifest itself in new forms -- but it must remain the core around which we build.

One of the collateral industries built around universities, textbook publishing, has been facing this problem in much the same way as newspapers for a while now. The web created a marketplace with less friction, which has made it harder for them to make the return on investment to which they had grown accustomed. As textbook prices rise, students look for alternatives. Of course, students always have: using old editions, using library copies, sharing. Those are the old strategies -- I used them in school. But today's students have more options. They can buy from overseas dealers. They can make low-cost copies much more readily. Many of my students have begun to bypass the assigned texts altogether and rely on free sources available on-line. Compassionate faculty look for ways to help students, too. They support old editions. They post lecture notes and course materials on-line. They even write their own textbooks and post them on-line. Here the textbook publishers cross paths with the newspapers. The web reduces entry costs to the point that almost anyone can enter and compete. And publishers shouldn't kid themselves; some of these on-line texts are really good books.

When I think about the case of computer science in particular, I really wonder. I see the wealth of wonderful information available on line. Free textbooks. Whole courses taught or recorded. Yes, blogs. Open-source software communities. User communities built around specific technologies. Academics and practitioners writing marvelous material and giving it away. I wonder, as many do about journalists, whether academics will be able to continue in this way if the university structure on which they build their careers changes or disappears. What experiments will find the successful models of tomorrow's schools?

Were I graduating from high school today, would I need a university education to prepare for a career in the software industry? Sure, most self-educated students would have gaps in their learning, but don't today's university graduates? And are the gaps in the self-educated's preparation as costly as 4+ years paying tuition and taking out loans? What if I worked the same 12, 14, or 16 hours a day (or more) reading, studying, writing, contributing to an open-source project, interacting on-line? Would I be able to marshal the initiative or discipline necessary to do this?

In my time teaching, I have encountered a few students capable of doing this, if they had wanted or needed to. A couple have gone to school and mostly gotten by that way anyway, working on the side, developing careers or their own start-up companies. Their real focus was on their own education, not on the details of any course we set before them.

Don't get me wrong. I believe in the mission of my school and of universities more generally. I believe that there is value in an on-campus experience, an immersion in a community constructed for the primary purpose of exploring ideas, learning and doing together. When else will students have an opportunity to focus full-time on learning across the spectrum of human knowledge, growing as a person and as a future professional? This is probably the best of what we offer: a learning community, focused on ideas broad and deep. We have research labs, teams competing in cyberdefense and programming contests. The whole is greater than the sum of its parts, both in the major and in liberal education.

But for how many students is this the college experience now, even when they live on campus? For many the focus is not on learning but on drinking, social life, video games... That's long been the case to some extent, but the economic model is changing. Is it cost-effective for today's students, who sometimes find themselves working 30 or more hours a week to pay for tuition and lifestyle, trying to take a full load of classes at the same time? How do we make the great value of a university education attractive in a new world? How do we make it a value?

And how long will universities be uniquely positioned to offer this value? Newspapers used to be uniquely positioned to offer a value no one else could. That has changed, and most in the industry didn't see it coming (or did, and averted their eyes rather than face the brutal facts).

I'd like also to say that expertise distinguishes the university from its on-line competition. That has been true in the past and remains true today, for the most part. But in a discipline like computer science, whose large professional component attracts most of its students, where grads will enter software development or networking... there is an awesome amount of expertise out in the world. More and more of those talented people are now sharing what they know on-line.

There is good news. Some people still believe in the value of a university education. Many students, and especially their parents, still believe. During the summer we do freshman orientation twice a week, with an occasional transfer student orientation thrown into the mix. People come to us eagerly, willing to spend out of their want or to take on massive debts to buy what we sell. Some come for jobs, but most still have at least a little of the idealism of education. When I think about their act in light of all that is going on in the world, I am humbled. We owe them something as valuable as what they surrender. We owe them an experience befitting the ideal. That invigorates me, and it scares me, too.

This article is probably more dark fantasy than reality. Still, I wonder how much of what I believe I really should believe, because it's right, and how much is merely a product of my lack of imagination. I am certain that I'm living in the middle of a revolution. I don't know how well I see or understand it. I am also certain of this: I don't want someone to be writing this speech about universities in a few years with me in its clueless intended audience.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 05, 2009 3:25 PM

Paying for Value or Paying for Time

Brian Marick tweeted about his mini-blog post Pay me until you're done, which got me to thinking. The idea is something like this: Many agile consultants work in an agile way, attacking the highest-value issue they can in a given situation. If the value of the issues to work on decreases with time, there will come a point at which the consultant's weekly stipend exceeds the value of the work he is doing. Maybe the client should stop buying services at that point.

My first thought was, "Yes, but." (I am far too prone to that!)

First, the "yes": In the general case of consulting, as opposed to contract work, the consultant's run will end as his marginal effect on the company approaches 0. Marick is being honest about his value. At some point, the value of his marginal contribution will fall below the price he is charging that week. Why not have the client end the arrangement at that point, or at least have the option to? This is a nice twist on our usual thinking.

Now for the "but". As I tweeted back, this feels a bit like Zeno's Paradox. Marick the consultant covers not half the distance from start to finish each week, but the most valuable piece of ground remaining. With each week, he covers increasingly less valuable distance. So our consultant, cast in the role of Achilles, concedes the race and says, okay, so stop paying me.

This sounds noble, but remember: Achilles would win the race. We unwind Zeno's Paradox when we realize that the sum of an infinite series can be a finite number -- and that number may be just small enough for Achilles to catch the tortoise. This works only for infinite series that behave in a particular way.
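The arithmetic is easy to check with a quick sketch in Scheme (the function name and the choice of ratio 1/2 are mine, just for illustration). Summing the first n terms of 1/2 + 1/4 + 1/8 + ... creeps toward 1 without ever passing it -- an infinite series with a finite total:

    ;; Sum the first n terms of the geometric series 1/2 + 1/4 + 1/8 + ...
    ;; Exact rationals show the total approaching, but never exceeding, 1.
    (define (partial-sum n)
      (if (zero? n)
          0
          (+ (expt 1/2 n) (partial-sum (- n 1)))))

    (partial-sum 10)    ; => 1023/1024
    (partial-sum 20)    ; => 1048575/1048576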

Crazy, I know, but this is how the qualification of the "yes" arose in my mind. Maybe the consultant helps to create a change in his client that changes the nature of the series of tasks he is working on. New ideas might create new or qualitatively different tasks to do. The change created may change the value of an existing task, or reorder the priorities of the remaining tasks. If the nature of the series changes, it may cause the value of the series to change, too. If so, then the client may well want to keep the consultant around, but doing something different than the original set of issues would have called for.

Another thought: Assume that the conditions that Marick described do hold. Should the compensation model be revised? He seems to be assuming that the consultant charges the same amount for each week of work, with the value of the tasks performed early being greater than that amount and the value of the tasks performed later being less than that amount. If that is true, then early on the consultant is bringing in substantially more value than he costs. If the client pulls the plug as soon as the value proposition turns in its favor, then the consultant ends up receiving less than the original contract called for yet providing more than average value for the time period. If the consultant thinks that is fair, great. What if not? Perhaps the consultant should charge more in the early weeks, when he is providing more value, than in later weeks? Or maybe the client could pay a fee to "buy out" the rest of the contract? (I'm not a professional consultant, so take that into account when evaluating my ideas about consultant compensation...)
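To make the crossover concrete, here is a toy sketch in Scheme. The fee and task values are invented, and real engagements are surely messier; the point is only to see where the value first dips below a flat weekly fee:

    ;; Given a flat weekly fee and task values sorted from most valuable
    ;; to least, return the first week whose value falls below the fee,
    ;; or #f if it never does.
    (define (break-even-week fee values)
      (let loop ((week 1) (vs values))
        (cond ((null? vs) #f)
              ((< (car vs) fee) week)
              (else (loop (+ week 1) (cdr vs))))))

    (break-even-week 100 '(500 300 150 90 40))    ; => 4

Under Marick's proposal, this client stops paying after week 3 -- by which point the consultant has delivered 950 units of value for 300 in fees, which is why the question of fairness arises at all.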

And another thought: Does this apply to what happens when a professor teaches a class? In a way, I think it does. When I introduce a new area to students, it may well be the case that the biggest return on the time we spend (and the biggest bang for the students' tuition dollars) happens in the first weeks. If the course is successful, then most students will become increasingly self-sufficient in the area as the semester goes on. This is more likely the case for upper-division courses than for freshmen. What would it be like for a student to decide to opt out of the course at the point where she feels like she has stopped receiving fair value for the time being spent? Learning isn't the same as a business transaction, but this does have an appealing feel to it.

The university model for courses doesn't support Marick's opt-out well. The best students in a course often reach a point where they are self-sufficient or nearly so, and they are "stuck". The "but" in our teaching model is that we teach an audience larger than one, and the students can be at quite different levels in background and understanding. Only the best students reach a point where opting out would make sense; the rest need more (and a few need a lot more -- more than one semester can offer!).

The good news is that the unevenness imposed by our course model doesn't hurt most of those best students. They are usually the ones who are able to make value out of their time in the class and with the professor regardless of what is happening in the classroom. They not only survive the latency, but thrive by veering off in their own direction, asking good questions and doing their own programming, reading, thinking outside of class.

This way of thinking about the learning "transaction" of a course may help to explain another class of students. We all know students who are quite bright but end up struggling through academic courses and programs. Perhaps these students, despite their intelligence and aptitude for the discipline, don't have the skills or aptitude to make value out of the latency between the point they stop receiving net value and the end of the course. This inability creates problems for them (among them, boredom and low grades). Some instructors are better able to recognize this situation and address it through one-on-one engagement. Some would like to help but are in a context that limits them. It's hard to find time for a lot of one-on-one instruction when you teach three large sections and are trying to do research and are expected to meet all of the other expectations of a university prof.

Sorry for the digression from Marick's thought experiment, which is intriguing in its own setting. But I have learned a lot from applying agile development ideas to my running. I have found places where the new twist helps me and others where the analogy fails. I can't shake the urge to do the same on occasion with how we teach.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Software Development, Teaching and Learning

May 30, 2009 11:15 PM

How To Be Invincible

Everyone is trying to accomplish something big,
not realizing that life is made up of little things.
-- Frank A. Clark

Instead of trying to be the best, simply do your best.

Trying to be the best can turn into an ego trap: "I am better than you." In fact, the goal of being the best is often driven by ego. If it doesn't work out, this goal can become a source of finding fault and tearing oneself down. "I am not good enough." I should probably say "when", rather than "if". When your goal is to be the best, there always seems to be someone out there who does some task better. The result is like a cruel joke: trying to be the best may make you feel like you are never good enough.

In a more prosaic sense, trying to be the best can provide a convenient excuse for being mediocre. When you realize that you'll never be as good as a particular someone, it's easy to say, "Well, why bother trying to be the best? I can spend my time doing something else." This is a big problem when we decide to compare ourselves to the best of the best -- Lebron James, Haile Gebreselassie, or Ward Cunningham. Who among us can measure up to those masters? But it's also a problem when we compare ourselves to that one person in the office who seems to get and do everything right. Another cruel joke: trying to be the best ultimately gives us an excuse not to try to get better.

Doing your best is something that you can do any time or any place. You can succeed, no matter who else is involved. As time goes by, you are likely going to get better, as you develop your instincts. This means that every time you do your best you'll be in a different state, which adds a freshness to every new task you take on. Even more, I think that there is something about doing our best that causes us to want to get better; we are energized by the moment and realize that what we are doing now isn't the best we could do.

I've never met Lebron James or Haile Gebreselassie, but I've had the good fortune to meet and work with Ward Cunningham. He is a very bright guy, but he seems mostly to be a person who cares about other people and who has a strong drive to do interesting work -- and to get better. It's good to see that the folks we consider the best are... human. I've met enough runners, programmers, computer scientists, and chessplayers who are a lot better than I, and most of them are simply trying to do their best. That's how they got to be so good.

Some of you may say this is a distinction without a difference, but I have found that the subtle change in mindset that occurs when I shift my sights from trying to be the best to trying to do my best can have a huge effect on my attitude and my happiness. That is worth a lot. Again, though, there's more. The change in mindset also affects how I approach my work, and ultimately my effectiveness. Perhaps that's the final lesson, not a cruel joke at all: Doing your best is a better path to being better -- and maybe even the best -- than trying to be the best.

(This entry is a riff on a passage from David Allen's Ready for Anything, from which I take the entry's title. Allen's approach to getting things done really does sync well with agile approaches to software development.)


Posted by Eugene Wallingford | Permalink | Categories: General

May 25, 2009 9:48 PM

Is There a Statute of Limitations for Blogging?

I had a few free minutes tonight with no big project at the front of my mind, so I decided to clean up my blog-ideas folder. Maybe one of the ideas would grab my imagination and I would write. But this is what grabbed my attention instead, a line in my main ideas.txt file:

(leftovers from last year's SIGCSE -- do them!?)

You have heard of code smells. This is a blog smell.

I have two entries still in the hopper from SIGCSE 2008 listed in my conference report table of contents: "Rediscovering the Passion and Beauty", on ways to share our passion and awe with others, and "Recreating the Passion and Beauty", because maybe it's just not that much fun any more. Both come from a panel discussion on the afternoon of Day 2, and both still seem worth writing, even after fourteen months.

The question in the note to myself in the ideas file lets a little reality under the curtain... Will I ever write them? As conference reports, they probably don't offer much, and the second entry has been preempted a bit by Eric Roberts giving a similar talk in other venues, and posting his slides on the web. But timeliness of the conference report isn't the only reason I write; the primary reason is to think about the ideas. The writing both creates the thinking and records it for later consideration. In this regard, they still hold my interest. Not all old ideas do.

When I first started this blog, I never realized how much my blogging would exhibit the phenomenon I call the Stack of Ideas. Sometimes an entry is a planned work, but more often I write what needs to be written based on where I am in my work. Hot ideas will push ideas that recently seemed hot onto the back burner. Going to a conference only makes the problem worse. The sessions follow one after another, and each one tends to stir me up so much as to push even the previous session way back in my mind. I have subfolders for hot ideas and merely recent ideas, and I do pull topics from them -- "hot" serving up ideas more reliably than "recent".

This is one risk of having more ideas than time. Of course, ideas are like most everything else: a lot of them are bunk. I suspect that many of my ideas are bunk and that the Stack of Ideas does me and my readers the Darwinian service of pushing the worst down, down, down out of consciousness. When I look back at most of the ideas that haven't made the cut yet, they feel stale. Are they just old, or were they not good enough? It's hard to say. Like other Darwinian processes, this one probably isn't optimal. Occasionally a good idea may lose out only because it wasn't fit for the particular mental environment in which it found itself. But all in all, the process seems to get things mostly right. I just hope the good ideas come back around sometime later. I think the best ones do.

This is one of the reasons that academics can benefit from keeping a blog. A lot of ideas are bunk. Maybe the ones that don't get written shouldn't be written. For the ideas that make the cut, writing this sort of short essay is a great way to think them through, make them come to life in words that anyone can read, and then let them loose into the world. Blog readers are great reviewers, and they help with the good and bad ideas in equal measure. What a wonderful opportunity blogging offers: an anytime, anyplace community of ideas. Most of us had little access to such a community even ten years ago.

I must say this, though. Blogging is of more value to me than just as a technical device. It can also offer an ego boost. There is nothing quite like having someone I met several years ago at SIGCSE or OOPSLA tell me how much they enjoy reading my blog. Or to have someone I've never met come up to me and say that they stumbled across my blog and find it useful. Or to receive e-mail saying, "I am a regular reader and thought you might enjoy this..." What a joy!

Will those old SIGCSE 2008 entries ever see the light of day? I think so, but the Stack of Ideas will have its say.


Posted by Eugene Wallingford | Permalink | Categories: General

May 08, 2009 6:31 AM

The Annual Book March

There is a scene in the television show "Two and a Half Men" in which neurotic Alan has a major meltdown in a bookstore. He decides to use some newly-found free time to better himself through reading the classics. He grabs some books from one shelf, say, Greek drama, and then from another, and another, picking up speed as he realizes that there aren't enough hours in a day or a lifetime to read all that is available. This pushes him over the edge, he makes a huge scene, and his brother is embarrassed in front of the whole store.

I know that feeling this time of year. When I check books out from the university library, the due date is always next May, at the end of finals week for spring semester. Over the year, I run into books I'd like to read, new and old, in every conceivable place: e-mail, blogs, tweets, newspapers, ... With no particular constraint other than a finite amount of shelf space -- and floor space, and space at home -- I check them out.

Now is the season of returning. I gather up all the books on my shelves, and on my floors, and in my home. For most of my years here, I have renewed them. Surely I will read them this summer, when time is less rare, or next year, on a trip or a break. At the beginning of the last couple of Mays, though, I have been trying to be more honest with myself and return books that have fallen so far down the list as to be unlikely reads. Some are far enough from my main areas of interest or work that they are crowded out by more relevant books. Others are in my area of interest but trumped by something newer or more on-point.

Now, as I walk to the library, arms full, to return one or two or six, I often feel like poor, neurotic Alan. So many books, so little time! How can I do anything but fall farther and farther behind with each passing day? Every book I return is like a little surrender.

I am not quite as neurotic as Alan; at least I've never melted down in front of the book drop for all my students to see. I recognize reality. Still, it is hard to return almost any book unread.

I've had better habits this year, enforcing on myself first a strict policy of returning two books for every new one I checked out, then backsliding to an even one-for-one swap. As a result, I have far fewer books to return or renew. Still, this week I have surrendered Knuth's Selected Papers on Analysis of Algorithms, David Berlinski's The Advent of the Algorithm, and Jerry Weissman's Presenting to Win. Worry not; others will take their place, both old (Northcote Parkinson, Parkinson's Law) and new: The Passionate Programmer and Practical Programming. The last of these promises an intro to programming for the 21st century, and I am eager to see how well they carry off the idea.

So, in the end, even if something changed radically to make the life of a professor less attractive, I agree with Learning Curves on the real reason I will never give up my job: the library.


Posted by Eugene Wallingford | Permalink | Categories: General

April 09, 2009 7:48 PM

Musings on Software, Programming, and Art

My in-flight and bedtime reading for my ChiliPLoP trip was William Stafford's Writing the Australian Crawl, a book on reading and especially writing poetry, and how these relate to Life. On the trip, Stafford's musings kept crashing into my professional work of solving problems and writing programs. The collisions give birth to disjointed thoughts about software, programming, and art. Let's see what putting them into words does to them, and to me.

Intention endangers creation.

An intentional person is too effective to be a good guide in the tentative act of creating.

I often think of programming as art. I've certainly read code that felt poetic to me, such as McCarthy's formulation of Lisp in Lisp (which I discussed way back in an entry on the unity of data and program). But most of the programs we write are intentional: we desire to implement a specific functionality. That isn't the sort of creation that most artists do, or strive to do. If we have a particular artifact in mind, are we really "creating"?

Stafford might think not, and many software people would say "No! We are an engineering discipline, not an artistic one." Thinking as "artists", we are undisciplined; we create bad software: software that breaks, software that doesn't serve its intended purpose, software that is bad internally, software that is hard to maintain and modify.

Yet many people I know who program know... They feel something akin to artistry and creation.

How can we impress both sides of this vision on people, especially students who are just starting out? When we tell only one side of the story, we mislead.

Art is an interaction between object and beholder.

Can programs be art? Can a computer system be art? Yes. Even many people inclined to say 'no' will admit, perhaps grudgingly, that the iPod and the iPhone are objects of art, or at least have elements of artistry in them. I began writing some of these notes on the plane, and all around me I see iPods and iPhones serving people's needs, improving their lives. They have changed us. Who would ever have thought that people would be willing to watch full-length cinematic films on a 2" screen? Our youth, whose experiences are most shaped by the new world of media and technology, take for granted this limitation, as a natural side effect of experiencing music and film and cartoons everywhere.

Yet iPods aren't only about delivering music, and iPhones aren't just ways to talk to our friends. People who own them love the feel of these devices in their hands, and in our lives. They are not just engineered artifacts, created only to meet a purely functional need. They do more, and they are more.

Intention endangers creation.

Art reflects and amplifies experience. We programmers often look for inspirations to write programs by being alert to our personal experience and by recognizing disconnects, things that interrupt our wholeness.

Robert Schumann said, To send light into the darkness of men's hearts -- such is the duty of the artist. Artists deal in truth, though not in the direct, assertional sense we often associate with mathematical or scientific truth. But they must deal in truth if they are to shine light into the darkness of our hearts.

Engineering is sometimes defined as using scientific knowledge and physical resources to create artifacts that achieve a goal or meet a need. Poets use words, not "physical resources", but also shapes and sounds. Their poems meet a need, though perhaps not a narrowly defined one, or even one we realize we had until it was met in the poem. Generously, we might think of poets as playing a role somewhat akin to the engineer.

How about engineers playing a role somewhat akin to the artist? Do engineers and programmers "send light into the darkness of men's hearts"? I've read a lot of Smalltalk code in my life that seemed to fill a dark place in my mind, and my soul, and perhaps even my heart. And some engineered artifacts do, indeed, satisfy a need that we didn't even know we had until we experienced them. And in such cases it is usually experience in the broadest sense, not the mechanics of saving a file or deleting our e-mail. Design, well done, satisfies needs users didn't know they had. This applies as well to the programs we write as to any other artifact that we design with intention.

I have more to write about this, but at this time I feel a strong urge to say "Yes".


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

April 06, 2009 2:05 AM

The Hard Part

This is the idea behind biction:

(The hard part of writing isn't the writing; it's the thinking.)
-- William Zinsser

s/writing/programming/*

This line comes from Zinsser's recent article, Visions and Revisions, in which he describes the writing and rewriting of On Writing Well over the course of thirty years. I read On Writing Well a decade or so ago, in one of its earlier editions. It is my favorite book on the craft of writing.


Posted by Eugene Wallingford | Permalink | Categories: General

March 29, 2009 11:39 AM

Looking Forward to Time Working

In real life there is no such thing as algebra.

-- Fran Lebowitz

At this time next week, I will be on my way to ChiliPLoP for a working session. Readers here know how much I enjoy my annual sojourn to this working conference, but this year I look forward to it with special fervor.

First, my day job the last few months -- the last year, really -- has been heavier than usual with administrative activities: IT task force, program review, budget concerns. These are all important tasks, with large potential effects on my university, my department, and our curriculum and faculty. But they are not computer science, and I need to do some computer science.

Second, I am still in a state of hopeful optimism that my year-long Running Winter is coming to an end. I put in five runs this week and reached 20 miles for the first time since October. The week culminated this morning in a chilly, hilly 8 miles on a fresh dusting of snow and under a crystal clear blue sky. ChiliPLoP is my favorite place to run away from home. I never leave Carefree without being inspired, unless I am sick and unable to run. Even if I manage only two short runs around town, which is what I figure is in store, I think that the location will do a little more magic for me.

Our hot topic group will be working at the intersection of computer science and other disciplines, stepping a bit farther from mainstream CS than it has in recent years. We all see the need to seek something more transformative than incremental, and I'd like to put into practice some of the mindset I've been exploring in my blog the last year or so.

The other group will again be led by Dave West and Dick Gabriel, and they, too, are thinking about how we might re-imagine computer science and software development around Peter Naur's notion of programming as theory building. Ironically, I mentioned that work recently in a context that crosses into my hot topic's focus. This could lead to some interesting dinner conversation.

Both hot topics' work will have implications for how we present programming, software development, and computer science to others, whether CS students or professionals in other disciplines. Michael Berman (who recently launched his new blog) sent a comment on my Sweating the Small Stuff that we need to keep in mind whenever we want people to learn how to do something:

I think that's an essential observation, and one that needs to be designed into the curriculum. Most people don't learn something until they need it. So trying to get students to learn syntax by teaching them syntax and having them solve toy problems doesn't teach them syntax. It's a mistake to think that there's something wrong with the students or the intro class -- the problem is in the curriculum design.

I learned algebra when I took trig, and trig when I took calculus, and I learned calculus in my physics class and later in queueing theory and probability. (I never really learned queueing theory.)

One of the great hopes of teaching computation to physicists, economists, sociologists, and anyone else is that they have real problems to solve and so might learn the tool they need to solve them. Might -- because we need to tell them a story that compels them to want to solve them with computation. Putting programming into the context of building theories in an applied discipline is a first step.

(Then we need to figure out the context and curriculum that helps CS students learn to program without getting angry...)


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

March 24, 2009 3:45 PM

Meta-Blog: Follow-Up to My Adele Goldberg Entry

When I was first asked to consider writing a blog piece for the Ada Lovelace Day challenge, I wasn't sure I wanted to. I don't usually blog with any particular agenda; I just write whatever is in my mind at the time, itching to get out. This was surely a topic I have thought and written about before, and it's one that I have worked on with people at my university and across the state. I think it is in the best interest of computer science to be sure that we are not missing out on great minds who might be self-selecting away from the discipline for the wrong reasons. So I said yes.

Soon afterwards, ACM announced Barbara Liskov as the winner of the Turing Award. I had written about Fran Allen when she won the Turing Award, and here was another female researcher in programming languages whose work I have long admired. I think the Liskov Substitution Principle is one of the great ideas in software development, a crucial feature of object-oriented programming, of any kind of programming, really. I make a variant of the LSP the centerpiece of my undergraduate courses on OOP. But Liskov has done more -- CLU and encapsulation, Thor and object-oriented databases, the idea of Byzantine fault tolerance in distributed computing, ... It was a perfect fit for the challenge.

But my first thought, Adele Goldberg, would not leave me. That thought grew out of my long love affair with Smalltalk, to which she contributed, and out of a memory I have from my second OOPSLA Educators' Symposium, where she gave a talk on learning environments, programming, and language. Goldberg isn't a typical academic Ph.D.; she is versatile, having worked in technical research, applications, and business. She has made technical contributions and contributions to teaching and learning. She helped found companies. In the end, that's the piece I wanted to write.

So, if my entry on Goldberg sounds stilted or awkward, please cut me a little slack. I don't write on assigned topics much any more, at least not in my blog. I should probably have set aside more time to write that entry, but I wrote it much as I might write any other entry. If nothing else, I hope you can find value in the link to her Personal Dynamic Media article, which I was so happy to find on-line.

At this point, one other person has written about Goldberg for the Lovelace Day challenge. That entry has links to a couple of videos, including one of Adele demonstrating a WIMP interface using an early implementation of Smalltalk. A nice piece of history. Mark Guzdial mentions Adele in his Lovelace Day essay, but he wrote about three women closer to home. One of his subjects is Janet Kolodner, who did groundbreaking research on case-based reasoning that was essential to my own graduate work. I'm a fan!


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

March 06, 2009 8:01 PM

Coming Up For Air

I have spent nearly every working minute this week sitting in front of this laptop, preparing a bunch of documents for an "academic program assessment" that is being done campus-wide at my university. Unfortunately, that makes this week Strike Two.

Last October: no OOPSLA for me.

This week: no SIGCSE for me.

The next pitch arrives at the plate in about a month... Will there be no ChiliPLoP for me?

That would be an inglorious Strike Three indeed. It would break my equivalent of DiMaggio's streak: I have never missed a ChiliPLoP. But budget rescissions, out-of-state travel restrictions, and work, work, work are conspiring against me. I intend to make my best effort. Say a little prayer.

I hope that you can survive my missing SIGCSE, as it will mean no reports from the front. Of course, you will notice two missing links on my 2008 report, so I do have some material in the bullpen!

Missing SIGCSE was tougher than usual, because this year I was to have been part of the New Teaching Faculty Roundtable on the day before the conference opened. I was looking forward to sharing what little wisdom I have gained in all my years teaching -- and to stealing as many good ideas as I could from the other panelists. Seeing all of the firepower on the roster of mentors, I have no doubts that the roundtable was a great success for the attendees. I hope SIGCSE offers the roundtable again next year.

Part of my working day today was spent in Cedar Rapids, an hour south of here. Some of you may recall that Cedar Rapids was devastated by flooding last summer, when much of the eastern part of the state was under 500-year flood waters. I was surprised and saddened to see that so much of the downtown area still suffers the ill effects of the water. The public library is still closed while undergoing repair. But I was heartened to see a vibrant city rebuilding itself. A branch library has been opened at a mall on the edge of town, and it was buzzing with activity.

You know, having a library in the mall can be a good thing. It is perhaps more a part of some people's lives than a dedicated building in the city, and it serves as a nice counterpoint to the consumption and noise outside the door. Besides, I had easy access to excellent wireless service out in the mall area even before the library opened, and easy access to all the food I might want whenever I needed to take a break. Alas, I really did spend nearly every working minute this week sitting in front of this laptop, so I worked my way right up to dinner time and a welcome drive home.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

February 21, 2009 7:10 PM

Hope for Troubled Economic Times

And we decided to innovate our way through this downturn, so that we would be further ahead of our competitors when things turn up.

"This downturn" was the dot.com bust. The speaker was Steve Jobs. The innovations were the iPod and iTunes. Seems to have worked out fine.

My agile friends are positioned well to innovate through our current downturn, as are the start-ups that other friends and former students run. It is something of a cliché but true nonetheless. Recessions can be a good time for people and organizations that are able -- and willing -- to adapt. They can be an opportunity as much as a challenge.

I hope that the faculty and staff of my university can approach these troubled budget times with such an attitude. In five years, we could be doing a much better job for our students, our state, and our respective academic disciplines.


Posted by Eugene Wallingford | Permalink | Categories: General

February 17, 2009 3:53 PM

Posts of the Day

Tweet of the Day

Haskell is a human-readable program compression format.
-- Michael Feathers

Maybe we should write a generator that produces Haskell.

Non-Tech Blog of the Day

Earlier in my career I worked hard to attract attention. ... The problem with this approach is that eventually it all burns down to ashes and no one knows a thing more about software development than they did before.
-- Kent Beck

Seek truth. You will learn to focus your life outside your own identity, and it makes finding out you're wrong not only acceptable, but desirable.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

February 05, 2009 4:24 PM

So Little Computer Science...

... so many meetings and budget discussions. A week in which I do no computer science is as bad as a week in which I do not run.

I did play hooky while attending a budget meeting yesterday. I took one of our department laptops so that I could upgrade Ruby, install BlueCloth, and play with a new toy. But that's bit-pushing, not computer science.

Why all the budget talk? Working at a public university offers some shelter from changing economic winds and tends to level changes out over time. But when the entire economy goes down, so do state revenues. My university is looking at a 9% cut in its base budget for next year. That magnitude of change means making some hard choices. Such change creates uneasy times among the faculty, and more work planning for changes and contingencies among department heads.

There is some consolation in being on the front line, knowing that I can shield the faculty from much of the indecision. I also have opportunities to argue against short-sighted responses to the budget cuts and to suggest responses that will help the university in the long term. There is nothing like a sudden lack of revenue to help you focus on what really matters!

Still, I'd rather be writing a program or working on a tough CS problem.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

February 04, 2009 7:44 AM

Remembering the Answer a Few Days Late

I need a better memory.

Last time, I wrote about being surprised by an interview request. But more than a year ago I read about a similar problem and one solution:

As a science journalist, I can tell you the best thing to do, as an academic getting interviewed and wanted to guide the interview somewhat, is to have analogies cocked, locked and loaded.... [R]eporters go nuts for pre-thought-out analogies/explanations because it's quotable material, and could in fact be the center of the article.... So cranking them out before you speak with someone is a great way to maintain some control of what reporters quote you on.

As in so many things, preparation pays off.

Of course, this isn't quite the same problem. Talking about one's own research or teaching is different than talking about department business or someone else's project. But that is one of the responsibilities that comes with chairing the department -- speaking about the wider interests of the department.

The bigger issue here is how to convert what I read into learning. The passage above stuck out enough that I filed it away for eighteen months. But it doesn't do me any good sitting in a text file somewhere.


Posted by Eugene Wallingford | Permalink | Categories: General

January 30, 2009 3:10 PM

Pop Interview!

The phone rings.

"Hi, I'm [local radio personality]. I'd like to interview you about the grant your department received from State Farm."

"Um, sure." Quick -- compose yourself.

"So, what is this grant all about?"

A short game of Twenty Questions ensued. This was a first for me: a cold call from a radio station requesting an interview. Fortunately the interview was conducted off-line; my answers were recorded and will be used to produce a finished piece later.

I have done phone interviews before, some of which I have discussed here. But those were arranged in advance, so I had time to prepare specific comments and to get into the right frame of mind to answer questions in that context. Also, my previous interviews have always been for my own personal work, which I know at a different level than I know my department work. Even though I wrote the grant proposal in question, it was collective work, not mine, and that shows in how well I feel the project.

A quick word about the grant... State Farm Insurance is based a few hours' drive from here and hires many of our best software engineering students into its systems development division. Through its foundation, State Farm supports universities with grants for educational work. A few years ago, one of these grants helped us to build our first computational cluster and begin using it in our bioinformatics program, and to support a number of computational science projects on campus. The fact that an insurance company would fund this kind of work shows that it has a long-term view of education, which we at the university appreciate.

We recently received a new grant to purchase two quad-socket, quad-core servers and integrate their use into our architecture, systems, and programming courses. The world is going multi-core, and we would like to give our students some of the experiences they will need to contribute.

Anyway, I now have a new set of skills to work on: the pop interview. Or I at least need to develop a mind quick enough to say, "Hey, can I call you back in five?"


Posted by Eugene Wallingford | Permalink | Categories: General

January 22, 2009 4:05 PM

A Story-Telling Pattern from the Summit

At the Rebooting Computing Summit, one exercise called for us to interview each other and then report back to the group about the person we interviewed. The reports my partner and I gave, coupled with some self-reported experiences later in the day, reminded me of a pattern I've experienced in other contexts. Here is a rough first draft. Let me know what you think.

Second-Hand Story

When we need to know a person's story, our first tendency is often to ask him to tell us. After all, he knows it best, because he lived it. He has had a chance to reflect on it, to reconsider decisions, and to evaluate what the story "means".

This approach can disappoint us. Sometimes, the person is too close to the experience and attaches accidental emotions and details. Sometimes, even though he has had a chance to reflect on his experience, he hasn't reflected enough -- or perhaps not at all! Telling the story may be the first time he has thought about some of those experiences in a long time. While trying to tell the story and summarize its meaning at the same time, the storyteller may reach for an easily-found answer. The result can be trite, convenient, or self-protective. Maybe the person is simply too close to an experience to see its true meaning.

Therefore, ask the person to tell his story to someone else, focusing on "just the facts". Then, ask the interviewer to tell the story, perhaps in summary form. Let the interviewer and the listeners look for patterns and themes.

The interviewer has distance and an opportunity to listen objectively. She is less likely to impose well-rehearsed personal baggage over the story.

The result can still be trite. If the listener does not listen carefully, or is too eager to stereotype the story, then the resulting story may well be worse than the original, because it is not only sanitized but sanitized by someone without intimate connection to it.

It can be refreshing to hear someone else tell your own story, to draw conclusions, to summarize what is most important. A good listener can pick up on essential details and remove the shroud of humility or disappointment that too often obscures your own view. You can learn something about yourself!

This technique depends on two people's ability to tell a story, not one. The original story-teller must be open, honest, and willing to describe situations more than evaluate them too much. (A little evaluation is unavoidable and also useful. The listener learns something about the story-teller from that part of the story, too.) The interviewer must be a careful listener and have a similar enough background to be able to put the story into context and form reasonable abstractions about it.

Examples. I found the interviewer's reports at the Rebooting Computing summit to be insightful, including the ones that Joe Carthy and I gave on one another. Hearing someone else "tell my story" let me hear it more objectively than if I had told it myself. Occasionally I felt like chiming in to correct or add something, but I'm not sure that anything I said could have done a better job introducing myself to the rest of the group. Something Joe said during his interview of me made me think more about just how my non-CS profs helped lead me into CS, something I had never thought much about before that.

Later that day, we heard several self-reported stories, and those stories -- told by the same people who had reported on others earlier -- sounded flat and trite. I kept thinking, "What's the real story?" Maybe someone else could have told it better!

Related Ideas. I am reminded of several techniques I learned as an architecture major while studying Betty Edwards's Drawing on the Right Side of the Brain:

  • drawing a chair by looking at the negative space around it
  • drawing a picture of Igor Stravinsky upside down, in order to trick the visualization mechanism in our minds that jumps right to stereotypes
  • drawing a paper wad, which matched no stereotyped form already in our memories

This pattern and these exercises are all examples of techniques for indirect learning. This is perhaps the first time I realized just how well indirect learning can work in a social setting.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

January 21, 2009 7:55 AM

Rebooting Computing Summit -- This and That

As always, my report on the Rebooting Computing Summit left out some of the random thoughts and events that made my trip. Here are a few.

•   When I was growing up I learned that the prefixes "Mc" and "O'" indicated "son of" when used in names such as McDonald and O'Herlihy. I always wondered who the original ancestors were -- the Donalds and Herlihys. (Even then, I was concerned with the base case...) One of my favorite grad school profs was named Bill McCarthy, but I had never met a Carthy. Now I have... One of my table mates at the summit was Joe Carthy of Dublin! Joe shared some valuable insights on teaching computing.

•   In my report, I wrote of my vision for the future of computing, in which children will routinely walk to the computer and write a program.... "Walk to the computer" -- that is so 1990s! Today's children carry their technology in their hands.

•   During one of his messages, Peter Denning showed the familiar quote, "Insanity is doing the same thing over again, expecting different results," as a motivation to change. But I think there is more to it than that. I was reminded of a recent Frazz comic, in which the precocious Caulfield pointed out that the world is always changing, so it is also insanity to do the same thing over again, expecting the same results. The world is changing around computing and computing education. There is no particular reason to think that doing the same old things better will get us anywhere useful.

•   At one point, Alan Kay said that part of what is wrong with computing is that too many of us "fool around", rather than working to change the world. This, he said, is a feature of a popular culture, not a serious one. First, we had real guitar. Then came air guitar. And now we have Guitar Hero. He is, of course, right, and I have occasionally written of being shamed at coming up short when measured against his vision.

Later that evening, my roommate Robert Duvall discussed whether Guitar Hero might have some positives, by motivating some of the people who play it to learn to play a real guitar. I don't have a good feel for the culture around Guitar Hero, so I'll have to wait and see. New technologies often interact with younger generations in ways that we old folks can't predict. (My prurient side wants to say that Guitar Hero can't be all bad if it gives us Heidi Klum playing air guitar in her underwear.)

•   A Creative Interlude

On the second day of the summit, each table was asked to communicate to the rest of the groups its vision of the future of CS. The facilitators encouraged us to express our vision creatively, via role play or some other non-bullet list medium. One group did a neat job on this, with one ham performer playing the central role in a number of vignettes showing where the computing of tomorrow will have taken us.

This is the sort of exercise for which I am ill-equipped to excel alone, but I am able to do all right if I am in a group. My table decided to gang-write a song -- doggerel, really. With Christmas still close in our memory, we chose the tune to the familiar carol "Angels We Have Heard On High", in part, I think, for its soaring "Gloria"s. The result was "Everyone Now Loves CS".

Our original plan was for Susan Horwitz to sing our creation, as she does this sort of thing in many of her classes and so is used to the attention. A few of us toyed with the idea of playing air guitar in the background, but I'm glad we opted not to; the juxtaposition of our performance with Alan Kay's remarks later that afternoon would have been unfortunate indeed! About five minutes before the performance Susan informed us that we would be singing as a group. So we did. My students should not expect a reprise.

My conference history now includes singing and acting. I don't imagine that dance is in my future, but you never know.


Posted by Eugene Wallingford | Permalink | Categories: General

January 02, 2009 10:42 PM

Fly on the Wall

A student wrote to tell me that I had been Reddited again, this time for my entry reflecting on this semester's final exam. I had fun reading the comments. Occasionally, I found myself thinking about how a poster had misread my entry. Even a couple of other posters commented as much. But I stopped myself and remembered what I had learned from writers' workshops at PLoP: they had read my words, which must stand on their own. Reading over a thread that discusses something I've written feels a little bit like a writers' workshop. As an author, I never know how others might interpret what I have written until I hear or read their interpretation in their own words. Interposing clarifying words is a tempting but dangerous trap.

Blog comments, whether on the author's site or on a community site such as Reddit, do tend to drift far afield from the original article. That is different from a PLoP workshop, in which the focus should remain on the work being discussed. In the case of my exam entry, the drift was quite interesting, as people discussed accumulator variables (yes, I was commenting on how students tend to overuse them; they are a wonderful technique when used appropriately) and recursion (yes, it is hard for imperative thinkers to learn, but there are techniques to help...). Well worth the read. But I could also see that sometimes a subthread comes to resemble an exchange in the children's game Telephone. Unless every commenter has read the original article -- in this case, mine -- the discussion tends to drift monotonically away from the content of the original as it loses touch with each successive post. Frankly, that's all right, too. I just hope that I am not held accountable for what someone at the end of the chain says I wrote...


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

December 11, 2008 7:37 AM

Movin' Out, Twyla Tharp, and Inspiration

a scene from the Broadway musical Movin' Out

Last month my wife and I had the good fortune to see a Broadway touring company perform the Tony Award-winning Movin' Out, a musical created by Twyla Tharp from the music of Billy Joel. I've already mentioned that I am a big fan of Billy Joel, so the chance to listen to his songs for two hours was an easy sell. Some of you may recall that I also wrote an entry way back called Start with a Box that was inspired by a wonderful chapter from Twyla Tharp's The Creative Habit. So even if I knew nothing else about Tharp, Movin' Out would have piqued my interest.

This post isn't about the show, but my quick review is: Wow. The musicians were very good -- not imitating Joel, but performing his music in a way that felt authentic and alive. (Yes, I sang along, silently to myself. My wife said she saw my lips moving!) Tharp managed somehow to tell a compelling story by stitching together a set of unrelated songs written over the long course of Joel's career. I know all of these songs quite well, and occasionally found myself thinking, "But that's not what this song means...". Yet I didn't mind; I was hearing from within the story. And I loved the dance itself -- it was classical even when modern, not abstract like Merce Cunningham's Patterns in Space and Sound. My wife knows dance well, and she was impressed that the male dancers in this show were actually doing classical ballet. (In many performances, the men are more props than dancers, doing lifts and otherwise giving the female leads a foil for their moves.)

Now I see that Merlin Mann is gushing over Tharp and The Creative Habit. Whatever else I can say, Mann is a great source of links... He points us to a YouTube video of Tharp talking about "failing well", as well as the first chapter of her book available on line. Now you can read a bit to see if you want to bother with the whole book. I echo Mann's caveat: we both liked the first chapter, but we liked the rest of the book more.

Since my post three years ago on The Creative Habit, I've been meaning to return to some of the other cool ideas that Tharp writes about in this book. Seeing Movin' Out caused me to dig out my notes from that summer, and seeing Mann's posts has awakened my desire to write some of the posts I have in mind. The ideas I learned in this book relate well to how I write software, teach, and learn.

Here is a teaser that may connect with agile software developers and comfort students preparing for final exams:

The routine is as much a part of the creative process as the lightning bolt of inspiration, maybe more. And this routine is available to everyone.

Oddly, this quote brings to mind an analogy to sports. Basketball coaches often tell players not to rely on having a great shooting night in order to contribute to the team. Shooting is like inspiration; it comes and it goes, a gift of capricious gods. Defense, on the other hand, is always within the control of the player. It is grunt work, made up of effort, attention, and hustle. Every player can contribute on defense every night of the week.

For me, that's one of the key points in this message from Tharp: control what you can control. Build habits within which you work. Regular routines -- weekly, daily, even hourly -- are the scaffolding that keeps you focused on making something. What's better, everyone can create and follow a routine.

While I sit and wait for the lightning bolt of inspiration to strike, I am not producing code, inspired or otherwise. Works of inspiration happen while I am working. Working as a matter of routine increases the chances that I will be producing something when the gods smile on me with inspiration. And if they don't... I will still be producing something.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

November 30, 2008 10:04 PM

Disconnected Thoughts to End the Month

... and a week of time away from work and worries.

There is still something special about an early morning run on fresh snow. The world seems new.

November has been a bad month for running, with my health at its lowest ebb since June, but even one three-mile jog brings back a good feeling.

I can build a set of books for our home finances based on the data I have at hand. I do not have to limit myself to the way accountants define transactions. Luca Pacioli was a smart guy who did a good thing, but our tools are better today than they were in 1494. Programs change things.

S-expressions really are a dandy data format. They make so many things straightforward. Some programmers may not like the parens, but simple list delimiters are all I need. Besides, Scheme's (read) does all the dirty work parsing my input.
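For the curious, a minimal sketch of what I mean, with a transaction format I invented for illustration:

    ;; A ledger file is just a sequence of s-expressions, such as:
    ;;   (transaction (date 2008 11 30) (amount -42.50) (memo "groceries"))
    ;; Scheme's (read) parses each one into a list -- no parser to write.
    (define (read-transactions port)
      (let ((expr (read port)))
        (if (eof-object? expr)
            '()
            (cons expr (read-transactions port)))))

No lexer, no grammar, no parser generator. The parens do all the work.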

After a week's rest, I imagine something like one of those signs from God:

That "sustainable pace" thing...
I meant that.

-- The Agile Alliance

I'd put Kent or Ward's name in there, but that's a lot of pressure for any man. And they might not appreciate my sense of humor.

The Biblical story of creation in six days (small steps) with feedback ("And he saw that it was good.") and a day of rest convinces me that God is an agile developer.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

November 17, 2008 8:58 PM

Doing It Wrong Fast

Just this weekend I learned about Ashleigh Brilliant, a cartoonist and epigrammist. From the little I've seen in a couple of days, his cartoons remind me of Hugh MacLeod's business-card doodles Gaping Void -- only with a 1930s graphic style and language that is more likely SFW.

This Brilliant cartoon, #3053, made it into my Agile Development Hall of Fame on first sight:

Doing it wrong fast... is at least better than doing it wrong slowly

Doing it wrong fast means that we have a chance to learn sooner, and so have a chance to be less wrong than yesterday.

This lesson was fresh in my mind over the weekend from a small dose of programming. I was working on the financial software I've decided to write for myself, which has me exploring a few corners of PLT Scheme that I don't use daily. As a result, I've been failing more frequently than usual. The best failures come rat-a-tat-tat, because they are often followed closely by an a-ha! Sunday, just before I learned of Brilliant's work, I felt one of those great releases when, for the first time, my code gave me an answer I did not already know and had not wanted to compute by hand: our family net worth. At that moment I was excited to verify the result manually (I'll need that test later!) and enjoy all the wrong work that had brought me to this point. Brilliant has something.
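That hand-verified answer really is a test waiting to be written down. A minimal sketch, with hypothetical names (net-worth, family-books) and a stand-in figure:

    ;; Pin down the hand-checked answer so that later changes to the
    ;; code cannot silently alter it. Names and figure are illustrative.
    (define (check-net-worth books expected)
      (let ((actual (net-worth books)))
        (unless (= actual expected)
          (error 'check-net-worth "expected ~a, got ~a" expected actual))))

    (check-net-worth family-books 12345.67)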


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

November 12, 2008 6:57 AM

That's a Wrap

I have posted all of my reports from the 2008 SECANT workshop. In sum, SECANT is a worthwhile community-building effort. It brings together such a mix: academia and industry, different disciplines, and different kinds of schools, from large Big Ten and other R-I universities down to small liberal arts colleges. This is one of the reasons why I love OOPSLA, but this venue provides a smaller, more intimate setting. (Of course, while SECANT lies at the intersection of computing -- and especially programming -- and other disciplines, OOPSLA's domain is really Everything Programming, which is even better.)

The workshop was again a great source of ideas and inspiration for me. This seems like a good use of a relatively small amount of money by NSF. The onus is now on us participants... Will we do the work to grow the community? To develop courses and materials? To transform our institutions and disciplines? A tall order.

As for being done with my reports, I feel a small measure of pride. Sure, last year, I posted my final workshop report only five days after the workshop ended, and this year I'm at eleven days. But my report on SIGCSE -- from March -- is still incomplete, with two entries on top: a general description of a panel on bringing the joy and wonder back to CS, and a more detailed report on one of the presentations from that panel, by Eric Roberts.

Is there a statute of limitations on blog entries? Has my coupon to post on that panel session expired? If I were Kent Beck, I'd probably call this long delay a "blog smell" and write a pattern!

For me, blogging suffers from a stack-of-ideas phenomenon. I have ideas, and they get pushed onto the to-blog list. Sometimes, I have more ideas than time to write, and some ideas get pushed deep in the stack before I get a chance to write them up. Time passes... And then I look back at the list of ideas, and most feel stale, or at least no longer have their original hold on my mind. I currently have three levels of "blog ideas" folders, and each one contains a bunch of ideas that I remember wanting to write now -- but which now I feel no desire to write. Sounds like it's time for a little rm -r *.*

Going to a conference only makes the stack-of-ideas problem worse. The sessions follow one upon another, and each one tends to stir me up so much that I push even the previous session way back in my mind. That's one advantage of a 1.5-day workshop over a several-day conference like SIGCSE or OOPSLA: the scale does not overflow my small brain.

Do readers care about any of this? Is SIGCSE stale for them? Perhaps, and I figure anyone who was wondering what went on in Portland has likely found the information elsewhere, and in any case moved on. But the topic of the unwritten entry may not be stale yet, so hope remains.

To return to the beginning of this entry, on the end of my SECANT reports: I hope you get as much from reading them as I did from writing them.


Posted by Eugene Wallingford | Permalink | Categories: General

October 31, 2008 10:52 AM

SECANT This and That

[A transcript of the SECANT 2008 workshop: Table of Contents]

As always, at this workshop I have heard lots of one-liners and one-off comments that will stick with me after the event ends. This entry is a place for me to think about them by writing, and to share them in case they click with you, too.

The buzzword of this year's workshop: infiltration. Frontal curricular assaults often fail, so people here are looking for ways to sneak new ideas into courses and programs. An incremental approach creates problems of its own, but agile software proponents understand its value.

College profs like to roll their own, but high-school teachers are great adapters. (And adopters.)

Chris Hoffman, while describing his background: "When you reach my age, the question becomes, 'What haven't you done?' Or maybe, 'What have you done well?'"

Lenny Pitt: "With Python, we couldn't get very far. Well, we could get as far as we wanted, but students couldn't get very far." Beautiful. Imagine how far students will get with Java or Ada or C++.

Rubin Landau: "multidisciplinary != interdisciplinary". Yes! Ideas that transform a space do more than bring several disciplines into the same room. The discipline is new.

It's important to keep in mind the relationship between modeling and computing. We can do modeling without computing. But analytical models aren't feasible for all problems, and increasingly the problems we are interested in fall into this set.

Finally, let me re-run an old rant by linking to the original episode. People, when you are second or third or sixth, you look really foolish.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

October 29, 2008 9:11 PM

Information, Dystopia, and a Hook

On my drive to Purdue today, I listened to the first 3/4 of Caleb Carr's novel, "Killing Time". This is not a genre I read or listen to often, so it's hard for me to gauge the book's quality. If you are inclined, you can read reviews on-line. At this point, I would say that it is not a very good book, but it delivered fine escapism for a car ride on a day when I needed a break more than deep thought. But it did get me to thinking about... computer science. The vignette that sets up the novel's plot is based on a typical use case for Photoshop, or a homework assignment in a media computation CS1 course.

Carr describes a world controlled by "information barons", a term intended to raise the specter of the 19th century's rail barons and their control of wealth and commerce. The central feature of his world in 2023 is deception -- the manipulation of information, whether digital or physical, to control what people think and feel. The novel's opening involves the role a doctored video plays in a presidential assassination, and later episodes include doctored photos, characters manufactured via the data planted on the internet, the encryption of data on disk, and real-time surveillance of encrypted communication.

If students are at all interested in this kind of story, whether for the science fiction, the intrigue, or the social implications of digital media and their malleability, then we have a great way to engage them in computing that matters. It's CSI for the computer age.

Carr seems to have an agenda on the social issues, and as is often the case, such an agenda interferes with the development of the story. His characters are largely cut-outs in service of the message. Carr paints a dystopian view striking for its unremitting focus on the negatives of digital media and the sciences' increasing understanding of the world at a molecular level. The book seems unaware that biology and chemistry are helping us to understand diseases, create new drugs, and design new therapies, or that computation and digital information create new possibilities in every discipline and part of life. Perhaps it is more accurate to say that Carr starts with these promises as his backdrop and chooses to paint a world in which everything that could go wrong has. That makes for an interesting story but ultimately an unsatisfying thought experiment. For escapism, that may be okay.

After my previous entry, I couldn't help but wonder whether I would have the patience to read this book. I have to think not. How many pages? 274 pages -- almost slender compared to Perec's book. Still, I'm glad I'm listening and not reading.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

October 29, 2008 9:16 AM

Clearing the Mind for a Trip

I leave today to attend the second SECANT workshop at Purdue. This is the sort of trip I like: close enough that I can drive, which bypasses all the headaches and inconveniences of flight, but far enough away that it is a break from home. My conference load has been light since April, and I can use a little break from the office. Besides, the intersection of computer science and the other sciences is an area of deep interest, and the workshop group is a diverse one. It's a bit odd to look forward to six hours on the road, but driving, listening to a book or to music, and thinking are welcome pursuits.

As I was checking out of the office, I felt compelled to make two public confessions. Here they are.

First, I recently ran across another recommendation for Georges Perec's novel, Life: A User's Manual. This was the third reputable recommendation I'd seen, and as is my general rule, after the third I usually add it to my shelf of books to read. As I was leaving campus, I stopped by the library to pick it up for the trip. I found it in the stacks and stopped. It's a big book -- 500 pages. It's also known for its depth and complexity. I returned the book to its place on the shelf and left empty-handed. I've written before of my preference for shorter books and especially like wonderful little books that are full of wisdom. But these days time and energy are precious enough resources that I have to look at a complex, 500-page book with a wary eye. It will make good reading some other day. I'm not proud to admit it, but my attention span isn't up to the task right now.

Second, on an even more frivolous note, there is at the time of this writing no Diet Mountain Dew in my office. I drank the last one yesterday afternoon while giving a quiz and taking care of pre-trip odds and ends. This is noteworthy in my mind only because of its rarity. I do not remember the last time the cupboard was bare. I'm not a caffeine hound like some programmers, but I don't drink coffee and admit some weakness for a tasty diet beverage while working.

I'll close with a less frivolous comment, something of a pattern I've been noticing in my life. Many months ago, I wrote a post on moving our household financial books from paper ledgers and journals into the twentieth century. I fiddled with Quicken for a while but found it too limiting; my system is a cross between naive home user and professional bookkeeping. Then I toyed with the idea of using a spreadsheet tool like Numbers to create a cascaded set of journals and ledgers. Yet at every turn I was thinking that I'd want to implement this or that behavior, which would strain the limits of typical spreadsheets. Then I came to my computer scientist's senses: When in doubt, write a program. I'd rather spend my time that way anyway, and the result is just what I want it to be. No settling. This pattern is, of course, no news at all to most of you, who roll your own blogging software and homework submission systems, even content management systems and book publishing systems, to scratch your own itches. It's not news to me, either, though sometimes my mind comes back to this power slowly. The financial software will grow slowly, but that's how I like it.

As a friend and former student recently wrote, "If only there were more time..."

Off to Purdue.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

October 24, 2008 12:10 PM

I've Been Reddited

I don't know if "reddited" is a word like "slashdotted" yet, but I can say that yesterday's post, No One Programs Any More, has reached #1 on Reddit's programming channel. This topic seems to strike a chord with a lot of people, both in the software business and in other technology pursuits. Here are my favorite comments so far:

I can't think of a single skill I've learned that has had more of an impact on my life than a semi-advanced grasp of programming.

This person received some grief for ranking learning how to program ahead of, say, learning how to eat, but I know just what the commenter means. Learning to program changes one's mind in the same way that learning to read and write does. Another commenter agreed:

It's amazing how after a year of programming at university, I began to perceive the world around me differently. The way I saw things and thought about them changed significantly. It was almost as if I could feel the change.

Even at my advanced age, I love when that feeling strikes. Beginning to understand call/cc felt like that (though I don't think I fully grok it yet).

My favorite comment is a bit of advice I recommend for you all:

I will not argue with a man named Eugene.

Reddit readers really are smart!


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

October 16, 2008 11:56 PM

Odds and Ends from Recent Reading and Writing

I'm on the road to a recruiting event in Des Moines. The event is for girls who are interested in math and science. For me, the real treat is a chance to meet Mae Jemison, the first woman of color to travel in space, on the space shuttle Endeavour in 1992. She's surely going to do a better job selling math and science to these students than I could! (Note after the talk: She did. Perhaps the best way to summarize her message is, "We have choices to make.")

A few short items have been asking me to write them:

  • At the risk of living too public a life where my students can see, I will say that the personality of my current class of students is not one that gives me a lot of energy. They are either still wary or simply uninterested. This happens every once in a while, and I'll try to find a way to create more energy in the room. In any case, it's nice at least to have a student or two who are not like this.

• Kevin Rutherford has been working on a little software tool called reek, a smell detector for Ruby code.

That is what I would like to be doing right now, with either Ruby or Scheme being fine as a source language. Every time I teach programming languages I get the itch to dive deeply back into the refactoring pool. This is the primary drawback of administrative work and the primary indicator that I am probably not suited for a career in administration.

Short of working on such a cool project, blogging about interesting ideas is the next best thing.

• But consider this advice on writing:

If you have so many ideas, prove it to the world and start blogging. There is nothing like a blog to help you realize you have nothing new to say.

That post is really about why not to write a book. For many people, writing a book is a way to gain or demonstrate authority. Several of my friends and family have asked when I plan to write a book, and for at least a few their desire for me to write one is grounded in the great respect they have for the value of a book. But I think that the author of the post is correct that writing a book is an outdated way to gain authority.

The world still needs great books such as, well, Refactoring, and one day I may sit down to write one. But first I have to have something to say that is best said in a book.

Perhaps we should take this author's advice with caution. She wrote a book and markets it with a blog!

• That piece also contains the following passage:

... self-respect comes from having some sort of vision for one's life and heading in that direction. And there is no one who can give you that vision -- you have to give it to yourself, and before you can feel like you have direction, you have to feel lost -- and lost is okay.

Long-time readers of this blog know that getting lost is not only okay but also demonstrates and exercises the imagination. Sometimes we get lost inside code just so that we can learn a new code base intimately.

• Finally, Seth Godin offers an unusual way to get things done:

Assemble your team (it might be just you) ... and focus like your hair is on fire. ... Do nothing except finish the project.

I need a Friday or a Monday at the office to try this out on a couple of major department projects. I was already planning a big sprint this weekend on a particularly persistent home project, and now I have a more provocative way to rev my engine.


Posted by Eugene Wallingford | Permalink | Categories: General

October 15, 2008 7:52 AM

Social Networks and the Changing Relationship Between Students and Faculty

One of my most senior colleagues has recently become enamored of Facebook. One of his college buddies started using it to share pictures, so my colleague created an account. Within minutes, he had a friend request -- from a student in one of his classes. And they kept coming... He now has dozens of friends, mostly undergrads at our school but also a few former students and current colleagues.

Earlier this week, he stopped me in the hall to report that during his class the previous hour, a student in the class had posted a message on his own Facebook page saying something to the effect, "I can't keep my eyes open. I have to go to sleep!" How does the prof know? Because they are Facebook friends, of course.

Did the student think twice about posting such a message during class? I doubt it. Was he so blinded by fatigue or boredom that he forgot the prof is his friend and so would see the message? I doubt it. Is he at all concerned in retrospect, or even just a little sheepish? I doubt it. This is standard operating procedure for a college set that opens the blinds on its life, day by day and moment by moment.

We live in a new world. Our students live much more public lives than most of us did, and today's network technology knocks down the wall that separates Them from Us.

This can be a good thing. My colleague keeps his Facebook page open in the evenings, where his students can engage him in chat about course material and assignments. He figures that his office hours are now limited only by the time he spends in front of a monitor. Immediate interaction can make a huge difference to a student who is struggling with a database problem or a C syntax error. The prof does not mind this as an encroachment on his time or freedom; he can close the browser window and draw the blinds on office hours anytime he wants, and besides, he's hacking or reading on-line most of the time anyway!

I'm uncertain what the potential downsides of this new openness might be. There's always a risk that students can become too close to their professors, so a prof needs to take care to maintain some semblance of a professional connection. But the demystification of professors is probably a good thing, done right, because it enables connections and creates an environment more conducive to learning. I suppose one downside might be that students develop a sense of entitlement to Anytime, Anywhere access, and professors who can't or don't provide it could be viewed negatively. This could poison the learning environment on both sides of the window. But it's also not a new potential problem. Just ask students about the instructors who are never in their offices for face-to-face meetings or who never answer e-mail.

I've not had experience with this transformation due to Facebook. I do have a page, created originally for much the same reason as my colleague's. I do have a small number of friends, including undergrads, former students, current colleagues, a grade-school buddy, and even my 60+ aunt. But I use Facebook sparingly, usually for a specific task, and rarely have my page open. I don't track the comments on my "wall", and I don't generally post on others'. It has been useful in one particular case, though, reconnecting me with a former student whose work I have mentioned here. That has been a real pleasure. (FYI, the link to his old site seems to be broken now.)

However, I do have limited experience with the newly transparent wall between me and my students, through blogs. It started when a few students -- not many -- found my blog and began to read it. Then I found the blogs of a few recent students and, increasingly, current students. I don't have a lot of time to read any blogs these days, but when I do read, I read some of theirs. Blogs are not quite as immediate as the Twitter-like chatter to be found in Facebook, but they are a surprisingly candid look into my students' lives and minds. Struggles they have with a particular class or instructor; personal trials at home; illness and financial woes -- all are common topics in the student blogs I read. So, too, are their joys and excitement and breakthroughs. Their posts enlighten me and humble me. Sometimes I feel as if I am privy to far too much, but mostly I think that the personal connection enriches my relationship both with individual students and with the collective student body. What I read certainly can keep me on a better path as I play the role of instructor or guide.

And, yes, I realize that there is a chance that the system can be gamed. Am I being played by a devious student? It's possible, but honestly, I don't think it's a big issue. The same students who will post in full view of their instructor that they want to sleep through class without shame or compunction are the ones who are blogging. There is a cultural ethic at play, a code by which these students live. I feel confident in assuming that their posts are authentic, absent evidence to the contrary for any given blogger.

(That said, I appreciate when students write entries that praise a course or a professor. Most current students are circumspect enough not to name names, but there is always the possibility that they refer to my course. That hope can psyche me up some days.)

To be fair, we have to admit that the same possibility for gaming the system arises when professors blog. I suppose that I can say anything here in an effort to manipulate my students' perceptions or feelings. I might also post something like this, which reflects my take on a group of students, and risk affecting my relationship with those students. One of my close friends sent me e-mail soon after that post to raise just that concern.

For the same reasons I give the benefit of the doubt to student bloggers, I give myself the benefit of the doubt, and the same to the students who read this blog. To be honest, writing even the few entries I manage to write these days takes a lot of time and psychic energy. I have too little of either resource to spend either disingenuously. There is a certain ethic to blogging, and most of us who write do so for more important purposes than trying to manipulate a few students' perceptions. Likewise, I trust the students who read this blog to approach it with a mindset of understanding something about computer science and just maybe to get a little sense of what makes their Dear Old Professor tick.

I know that is the main reason I write -- to figure out how I tick, and maybe learn a few useful nuggets of wisdom along the way. Knowing that I do so in a world much more transparent than the one I inhabited as a CS student years ago is part of the attraction.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

October 07, 2008 5:49 AM

Databases and the Box

Last time I mentioned a Supreme Court justice's thoughts on how universal access to legal case data changes the research task associated with the practice of the law. Justice Roberts's comments brought to mind two thoughts, one related to the law and one not.

As a graduate student, I worked on the representation and manipulation of legal arguments. This required me to spend some time reading legal journals for two different purposes. First, I needed to review the literature on applying computers to legal tasks, and in particular how to represent knowledge of statutes and cases. Second, I needed to find, read, and code cases for the knowledge base of my program. I'm not that old, but I'm old enough that my research preceded the Internet Age's access to legal cases. I went to the campus library to check out thick volumes of the Harvard Law Review and other legal collections and journals. These books became my companions for several months, as I lay on the floor of my study and pored over them.

When I could not find a resource I needed on campus, I rode my bike to the Michigan State Law Library in downtown Lansing to use law reviews in its collection. I was not allowed to take these home, so I worked through them one at a time in carrels there. I was quite an anomalous sight there, in T-shirt and shorts with a bike helmet at my side!

I loved that time, reading and learning. I never considered studying the law as a profession, but this work was a wonderful education in a fascinating domain where computing can be applied. My enjoyment of the reading almost certainly extended my research time in grad school by a couple of months.

The second thought was of the changes in chess brought about by the application of simple database technology. I've written about chess before, but not about computing applications to it. Of course, the remarkable advances in chess-playing computers that came to a head in Hitech and Deep Thought have now reached the desktop in the form of cheap and amazingly strong programs. This has affected chess in so many ways, from eliminating the possibility of adjournments in most tournaments to providing super-strong partners for every player who wants to play, day or night. The Internet does the same, though now we are never sure if we are playing against a person or a person sitting next to a PC running Fritz.

But my thoughts turned to the same effect Justice Roberts talked about: the changes that opening databases have created in how players learn, study, and stay abreast of opening theory. If you have never played tournament chess, you may not be aware of how much knowledge of chess openings has been recorded. Go to a big-box bookstore like Amazon or Barnes and Noble or Borders and browse the library of chess titles. (You can do that on-line now, of course!) You will see encyclopedias of openings like, well, the Encyclopedia of Chess Openings; books on classes of openings, such as systems for defending against king pawn openings; and books upon books about individual openings, from the most popular Ruy Lopez and Sicilian Defense to niche openings like my favorites, Petroff's Defense and the Center Game.

In the olden days of the 1980s, players bought books on their objects of study and pored over them with the same vigor as legal theorists studying law review articles. We hunted down games featuring our openings so that we could play through them to see if there was a novelty worth learning or if someone had finally solved an open problem in a popular variation. I still have a binder full of games with Petroff's Defense, cataloged using my own system, variation by variation with notes by famous players and my own humble notes from unusual games. My goal was to know this opening so well that I could always get a comfortable game out of the opening, against even stronger players, and to occasionally get a winning position early against a player not as well versed in the Petroff as I.

Talk about a misspent youth.

Chessplayers these days have the same dream, but they rarely spend hours with their heads buried inside opening books. These days, it is possible to subscribe to a database service that puts at our fingertips, via a computer keyboard, every game played with any opening -- anywhere in the recorded chess world, as recently as the latest update a week ago. What is the study of chess openings like now? I don't know, having grown up in the older era and not having kept up with chess study in many years. Perhaps Justice Roberts feels a little like this these days. Clerks do a lot of his research, and when he needs to do his own sleuthing, those old law reviews feel warm and inviting.

I do know this. Opening databases have so changed chess practice, from grandmasters down to patzers like me, that the latest issue of Chess Life, the magazine of U.S. Chess, includes a review of the most recent revision of Modern Chess Openings -- the opening bible on which most players in the West once relied as the foundation of broad study -- whose primary premise is this: What role does MCO play in a world where the computer database is king? What is the use of this venerable text?

From our gamerooms to our courtrooms, applications of even the most straightforward computing technology have changed the world. And we haven't even begun to talk about programs.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

September 01, 2008 2:31 PM

B.B. King, CS Wannabe

The Parade magazine insert to my Sunday paper contained an interview with legendary bluesman B.B. King that included this snippet:

There's never a day goes by that I don't miss having graduated and gone to college. If I went now, I would major in computers and minor in music. I have a laptop with me all the time, so it's my tutor and my buddy.

CS and music are, of course, a great combination. Over the years, I've had a number of strong and interesting students whose backgrounds included heavy doses of music, from alt rock to marching band to classical. But B.B. King is from a different universe. Maybe I can get this quote plastered on the walls of all the local haunts for musicians.

I wonder what B.B. calls his laptop?


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

August 22, 2008 4:00 PM

Lawyers Read My Blog

Not really. But they do protect their marks.

A couple of years ago, I received a polite request from the lawyers of a well-known retired cartoonist, asking that I not use one of his cartoons. Today, I received a polite request from the lawyers of a well-known business author and speaker, asking:

Please make sure to include [a statement acknowledging our registered trademark] on each page that our trademarked term appears. Additionally, we respectfully request that every time you use our mark in the body of your work of commentary that you capitalize the first letter of each word in the mark and directly follow the mark with the ® symbol so that it reads as "... ®"

Google does change the landscape for many, many things. This is a good thing; it reduces friction in the market and the law.

One result for me is that I now know that the ® symbol is denoted in HTML by entity number 174 (&#174;) or entity name reg (&reg;). I've used © occasionally, but rarely ®.

That said, I'm not too keen on having to capitalize two common words every time I use them in an article. I think I either need to write those articles without using the code phrase, or simply stop quoting books that are likely to trademark simple phrases. The latter rules out most thin business trade books, especially on management and marketing. That's not much of a loss, I suppose.


Posted by Eugene Wallingford | Permalink | Categories: General

August 15, 2008 2:35 PM

Less, Sooner

Fall semester is just around the corner. Students will begin to arrive on campus next week, and classes start a week from Monday. I haven't been able to spend much time on my class yet and am looking forward to next week, when I can.

What I have been doing is clearing a backlog of to-dos from the summer and handling standing tasks that come with the start of a new semester and especially a new academic year. This means managing several different to-do lists, crossing priorities, and generally trying to get things done.

As I look at this mound of things to do, I can't help being reminded of something Jeff Patton blogged a month or so ago -- two secrets of success in software development, courtesy of agile methods pioneer Jim Highsmith: start sooner, and do less.

Time ain't some magical quantity that I can conjure out of the air. It is finite, fixed, and flowing relentlessly by. If I can't seem to get done on time, I need to start sooner. If I can't seem to get it all done, I need to do less. Nifty procedures and good tools can help only so much.

I need to keep this in mind every day of the year.

Oh, and to you students out there: You may not be able to do less work in my class, but you can start sooner. You may have said so yourself at the end of last semester. Heck, you may even want to do more, like read the book...


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 28, 2008 3:44 PM

Revolution, Then Evolution

I recently started reading The Art of Possibility, by Roz and Ben Zander, and it brought to mind a pattern I have seen many times in literature and in life. Early on, the Zanders explain that this book is "not about making incremental changes that lead to new ways of doing things based on old beliefs". It is "geared toward causing a total shift of posture [and] perceptions"; it is "about transforming your entire world".

That's big talk, but the Zanders are not alone in this message. When talking to companies about creating new products, reaching customers, and running a business, Guy Kawasaki uses the mantra Revolution, Then Evolution. Don't try to get better at what you are doing now, because you aren't always doing the right things. But also don't worry about trying to be perfect at doing something new, because you probably won't be. Transform your company or your product first, then work to get better.

This pattern works in part because people need to be inspired. The novelty of a transformation may be just what your customers or teammates need to rally their energies, when "just" trying to get better will make them weary.

It also works despite running contrary to our fixation these days with "evolving". Sometimes, you can't get there from here. You need a mutation, a change, a transformation. After the transformation, you may not be as good as you would like for a while, because you are learning how to see the world differently and how to react to new stimuli. That is when evolution becomes useful again, only now moving you toward a higher peak than was available in the old place.

I have seen examples of this pattern in the software world. Writing software patterns was a revolution for many companies and many practitioners. The act of making explicit knowledge that had been known only implicitly, or the act of sharing internal knowledge with others and growing a richer set of patterns, requires a new mindset for most of us. Then we find out we are not very good, so we work to get better, and soon we are operating in a world that we may not have been able even to imagine before.

Adopting agile development, especially a practice-laden approach such as XP, is for many developers a Revolution, Then Evolution experience. So are major lifestyle changes such as running.

Many of you will recognize an old computational problem that is related to this idea: hill climbing. Programs that do local search sometimes get stuck at a local maximum. A better solution exists somewhere else in the search space, but the search algorithm makes it impossible for the program to get out of the neighborhood of the local max. One heuristic for breaking out of this circumstance is occasionally to make a random jump somewhere else in the search space, and see where hill climbing leads. If it leads to a better max, stay there, else jump back to the starting point.
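
For concreteness, here is a minimal sketch of that heuristic in Scheme. Everything in it is an assumption for illustration: neighbors and score are caller-supplied procedures that list the states adjacent to a state and rate a state numerically, and random-state produces the random jump:

    ;; Pick the higher-scoring of two states; ties keep the first.
    (define (better-of score a b)
      (if (> (score b) (score a)) b a))

    (define (best-of score items)
      (foldl (lambda (item best) (better-of score best item))
             (car items) (cdr items)))

    ;; Plain hill climbing: move to the best neighbor until none improves.
    (define (hill-climb state neighbors score)
      (let ((best (best-of score (cons state (neighbors state)))))
        (if (equal? best state)
            state                        ; stuck on a local maximum
            (hill-climb best neighbors score))))

    ;; The escape heuristic: climb once from the start, then make `tries`
    ;; random jumps, climbing from each, and keep the highest peak found.
    (define (climb-with-restarts start random-state neighbors score tries)
      (let loop ((n tries)
                 (best (hill-climb start neighbors score)))
        (if (zero? n)
            best
            (loop (- n 1)
                  (better-of score best
                             (hill-climb (random-state) neighbors score))))))

    ;; A toy use: maximize -(x-3)^2 over the integers, starting from 10.
    ;;   (hill-climb 10
    ;;               (lambda (x) (list (- x 1) (+ x 1)))
    ;;               (lambda (x) (- (* (- x 3) (- x 3)))))  ; => 3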

In AI and computer science more generally, it is usually easier to peek somewhere else, try for a while, and pop back if it doesn't work out. Most individuals are reluctant to make a major life change that may need to be undone later. We are, for the most part, beings living in serial time. But it can be done. (I sometimes envy the freer spirits in this world who seem built for this sort of experimentation.) It's even more difficult to cause a tentative radical transformation within an organization or team. Such a change disorients the people involved and strains their bonds, which means that you had better mean it when you decide to transform the team they belong to. This is a major obstacle to Revolution, Then Evolution, and one reason that within organizations it almost always requires a strong leader who has earned everyone's trust, or at least their respect.

As a writer of patterns, I struggle with how to express the context and problem for this pattern. The context seems to be "life", though there are certainly some assumptions lurking underneath. Perhaps this idea matters only when we are seeking a goal or have some metric for the quality of life. The problem seems to be that we are not getting better, despite an effort to get better. Sometimes, we are just bored and need a change.

Right now, the best I can say from my own experience is that Revolution, Then Evolution applies when it has been a while since I made long-term progress, when I keep finding myself revisiting the same terrain again and again without getting better. This is a sign that I have plateaued or found a local maximum. That is when it is time to look for a higher local max elsewhere -- to transform myself in some way, and then begin again the task of getting better by taking small steps.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

July 09, 2008 1:57 PM

Interlude

[Update: Added a link to my interview at Confessions of a Science Librarian.]

Some months, I go through stretches when I write a lot. I started this month with a few substantive posts and a few light posts in the span of a week. Back in November 2007, I wrote twice as many posts as in a typical month and more than in any month since my first few months blogging. That month, I posted entries eleven days in a row, driven by a burst of thoughts from time spent at a workshop on science and computer science. This month, I had the good fortune to read some good articles and the chance to skip real work, think, and write. Sometimes, the mood to write takes hold.

I have had an idea for a long time to write an entry that was motivated by reading George Orwell's essay Why I Write, but never seem to get to it. I'm not getting to it today, either. But it came to mind again for two reasons. First, I spent the morning giving a written interview to John Dupuis, who blogs at Confessions of a Science Librarian. John is a reader of my blog and asked me to share some of my ideas with his readers. I was honored to be asked, and so spent some time this morning reflecting on my blog, what and why I write. Second, today is the fourth anniversary of my first blog post.

Responding to John's questions is more writing than I do on most days. I don't have enough energy left to write a substantive post yet today, but I'm still in a reflective frame of mind about why I write.

Do I really need to blog? Someone has already said what I want to say. In that stretch of posts last November, I cited Norvig, Minsky, and Laurel, among others, talking about the same topics I was writing about. Some reasons I can think of are:

  • My experiences are different, so maybe I have something to add, however small that might be.
  • My posts link to all that great work, and someone who reads my blog may find an article they didn't know about and read it. Or they may remember it from the past, feel guilty at not having read it before, and finally go off to read it.
  • If nothing else, I write to learn, and to make myself happy. Some days, that's more than enough reason for me.

There are certainly other self-interested reasons to write. There is noble self-interest:

Share your knowledge. It's a way to achieve immortality.
-- the 14th Dalai Lama

And there is the short-term self-interest. I get to make connections in my own mind. Sometimes I am privileged to see my influence on former students, when they respond to something I've written. And then there is the lazy blog, where some reader knows or has something I don't and shares. At times, these two advantages come together, as when former student Ryan Dixon brought me a surprise gift last winter.

Year Five begins today, even if still without comments.


Posted by Eugene Wallingford | Permalink | Categories: General

July 07, 2008 1:58 PM

Patterns in My Writing

While reading this morning I came across a link to this essay. College students should read it, because it points out many of the common anti-patterns in the essays that we professors see -- even in papers written for computer science courses.

Of course, if you read this blog, you know that my writing is a poster child for linguistic diffidence, and pat expressions are part of my stock in trade. It's sad to know that these anti-patterns make up so much of my word count.

This web page also introduced me to Roberts's book Patterns in English. With that title, I must check it out. I needed a better reason to stop by the library than merely to return books I have finished. Now I have one.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

July 05, 2008 9:57 PM

Wedding Season

On my way into a store this afternoon to buy some milk, I ran into an old friend. He moved to town a decade or so ago and taught art at the university for five years before moving on to private practice. As we reminisced about his time on the faculty, we talked about how much we both like working with students. He mentioned that he recently attended his 34th wedding of a former student.

Thirty-four weddings from five years of teaching. I've been teaching for sixteen years and have been invited to only a handful of weddings -- three or four.

Either art students are a different lot from CS students, or I am doing something wrong...


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

July 04, 2008 8:37 AM

Science, Education, and Independent Thought

I wrote about a recent CS curricular discussion, which started with a blog posting by Mark Guzdial. Reading the comments to Guzdial's post is worth the time, as you'll find a couple of lengthy remarks by Alan Kay. As always, Kay challenges even computer science faculty to think beyond the boundaries of our discipline to the role that what our students learn from us plays in a democratic world.

One of Kay's comments caught my attention for connections to a couple of things I've written about in recent years. First, consider this:

I posit that this is still the main issue in America. "Skilled children" is too low a threshold for our system of government: we need "educated adults". ... I think the principle is clear and simple: there are thresholds that have to be achieved before one can enter various conversations and processes. "Air guitar and attitude" won't do.

Science is a pretty good model (and it was used by the framers of the US). It is a two level system. The first level has to admit any and all ideas for consideration (to avoid dogma and becoming just another belief system). But the dues for "free and open" are that science has built the strongest system of critical thinking in human history to make the next level threshold for "worthy ideas" as high as possible. This really works.

This echoes the split mind of a scientist: willing to experiment with the widest set of ideas we can imagine, then setting the highest standard we can imagine for accepting the idea as true. As Kay goes on to say, this approach is embedded in the fabric of the American mentality for free society and government. This is yet another good reason for all students to learn and appreciate modern science; it's not just about science.

Next, consider this passage that follows soon after:

"Air guitar" is a metaphor for choosing too tiny a subset of a process and fooling oneself that it is the whole thing. ... You say "needs" and I agree, but you are using it to mean the same as "wants", and it is simply not the case that education should necessarily adapt to the "wants" of students. This is where the confusion of education and marketing enters. The marketeers are trying to understand "wants" (and even inject more) and cater to them for a price; real educators are interested in "needs" and are trying to fulfill these needs. Marketeers are not trying to change but to achieve fit; educators are trying to change those they work with. Introducing marketing ideas into educational processes is a disaster in the making.

I've written occasionally about ideas from marketing, from the value of telling the right story to the creation of new programs. I believe those things and think that we in academia can learn a lot from marketers with the right ideas. Further, I don't think that any of this is in conflict with what Kay says here. He and I agree that we should not change our curriculum to cater solely to the perceptions and base desires of our clientele, whether students, industry, or even government. My appeal to marketing for inspiration lies in finding better ways to communicate what we do and offer and in making sure that what we do and offer are in alignment with the long-term viability of the culture. The best companies are in business for the long haul and must stay aligned with the changing needs of the world.

Further, as I am certain Kay will agree based on many of the things he has said about Apple of the 1980s, the very best companies create and sell products that their customers didn't even know they wanted. We in academia might learn something from the Apples of our world about how to provide the liberal and professional education that our students need but don't realize they need. The same goes for convincing state legislatures and industry when they view too short a horizon for what we do.

Like Kay, I want to give my students "real computing" and "real education".

I think it is fitting and proper to talk about these issues on Independence Day in the United States. We depend on education to preserve the democratic system in which we live and the values by which we live. But there's more. Education -- including, perhaps especially, science -- creates freedom in the student. The mind becomes free to think greater thoughts and accomplish greater deeds when it has been empowered with our best ideas. Science is one.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 30, 2008 12:26 PM

Not Reading in the Wild

My recent entry on student evaluations brought to mind a few other items on not reading that I've encountered recently.

  • Michael Mitzenmacher writes about the accessibility of scientific references, whether on-line or in obscure journals. How much effort should an author have to make to track down related work to cite? Presumably, Mitzenmacher holds authors responsible for reading works about which they know, but it seems a short step from asking whether authors must make extra effort to find related work to asking, as Bayard did, whether authors must make extra effort even to read related work. (And, if you have ever seen academic papers in computer science, you know that many of them do require extra effort to read!)
  • Steve Yegge never learned to read sheet music and has survived by virtue of a prodigious memory. But he tells us that this is a bad thing:

    Having a good memory is a serious impediment to understanding. It lets you cheat your way through life.

    So, Montaigne and I need not worry. Had we better memories, we might be skating through life as easily as Yegge.

  • Comedian Pat Dixon has a schtick in which he gives unbiased movie reviews. How does he maintain his objectivity in a sea of personal opinion? He doesn't watch the movies! Wouldn't watching affect his reviews? Brilliant, and often quite funny.

I hope it's clear that at least this last example is not serious at all.


Posted by Eugene Wallingford | Permalink | Categories: General

June 18, 2008 3:51 PM

William James and Focus

I've long been a fan of William James, and once wrote briefly about the connection between James's pragmatism and my doctoral work on knowledge-based systems. I was delighted yesterday to run across this quote from James's The Principles of Psychology, courtesy of 43 Folders and Linda Stone:

[Attention] is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. ... It implies withdrawal from some things in order to deal effectively with others....

Prone as I am to agile moments, this message from James struck me in an interesting way. First of all, I occasionally battle the issue that Stone writes about, the to-do list that grows no matter how productive I seem to be on a given day. (And on lazy summer June days, well, all bets are off.) James tells me that part of my problem isn't a shortage of time, but a lack of will to focus. I need to make better, more conscious choices about what tasks to add to the list. Kent Beck is fond of saying something to the effect that you may have too many things to do and too little time, but you ultimately have control over only one side of the equation. James would tell us the same thing.

My mind also made a connection from this quote to the agile software and test-driven development practice of working on small stories, on taking small steps. If I pick up a card with a single, atomic, well-defined feature to be added to my program, I am able to focus. What is the shortest step I can take and make this feature part of my code? No distractions, no Zerstreutheit -- the scatterbrained state that James calls by its German name. Though I have an idea in mind toward where my program is evolving, for this moment I attend to one small feature and make it work. Focus. James would be proud.

I think it's ironic in a way that one of the more effective ways to reach the state of flow is to decompose a task into the smallest of tasks and focus on them one at a time. The mind gets into a rhythm of red bar-green bar: select task, write code, refactor, and soon it is deep in its own world. I would like to be more effective at doing this in my non-programming duties. Perhaps if I keep James and his quote in mind, I can be.

This idea applies for me in other areas, in particular in running and training for particular events. Focusing each day on a particular goal -- intervals, Long Slow Distance, hill strength, and so on -- helps the mind to push aside its concerns with other parts of the game and attend to a particular kind of improvement. There is a great sense of relaxation in running super-hard repeats when the problem I've been having is, say, picking up pace late in a run. (I'd love to run super-hard repeats again some day soon, but I'm not there yet.)


Posted by Eugene Wallingford | Permalink | Categories: General, Running, Software Development

June 16, 2008 1:38 PM

A Picture of My Blog

I saw a link to Wordle on Exploration Through Example and decided to try it out on Knowing and Doing. Wordle generates a tag cloud for any text you paste in. I pasted in the content of my blog since the beginning of 2008, and it produced this lovely image:

[image: a tag cloud for Knowing and Doing posts since 01/2008]

That looks like a pretty good capsule of what I've been writing about lately. I was a bit surprised at the size of "students", but I probably shouldn't have been. "Programming", "work", "time", "ideas", "read", and "computer"/"CS"/"computing" hit the mark.

Besides, I had fun writing the script to pull plain text for my posts from their source form. It was a nice break from a day dealing with lab reservations and end-of-year budget issues.
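
The script itself was a few throwaway lines, so here is only a sketch of the idea in PLT Scheme, under the assumption that each post's source is an HTML-ish file; the file name is hypothetical:

    ;; Read a post's source file into one string.
    (define (file->text filename)
      (with-input-from-file filename
        (lambda ()
          (let loop ((line (read-line)) (lines '()))
            (if (eof-object? line)
                (apply string-append (reverse lines))
                (loop (read-line) (cons (string-append line " ") lines)))))))

    ;; Strip <...> tags, then entities such as &amp;, leaving plain text
    ;; suitable for pasting into Wordle.
    (define (strip-markup text)
      (regexp-replace* #rx"&[a-zA-Z#0-9]+;"
                       (regexp-replace* #rx"<[^>]*>" text " ")
                       " "))

    ;; Usage, with a made-up file name:
    ;;   (display (strip-markup (file->text "posts/2008-06-16.html")))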


Posted by Eugene Wallingford | Permalink | Categories: General

June 12, 2008 9:53 PM

The Subject of My Writing

In recent days, I have written about not reading books and the relationship of these ideas to writing, from my enjoyment of Pierre Bayard's How to Talk About Books You Haven't Read. A couple of readers have responded with comments about how important reading is. Don't worry -- much of what Bayard and I are saying here is a joke. But it is also true, when looked at with one's head tilted just so, and that's part of what made the book interesting to me. For you software guys, think about Extreme Programming -- an idea taken to its limits, to see what the limits can teach us. You can be sure that I am not telling you not to read every line of every novel and short story by Kurt Vonnegut! (I certainly have -- some of them many, many times -- and I enjoyed every minute.) Neither is Bayard, though it may seem so sometimes.

In my entries inspired by the book, it seems as if I am talking about myself an awful lot. Or consider my latest article, on parsing in CS courses. I read an article by Martin Fowler and ended up writing about my course and my opinions of CS courses. My guess is that most folks out there are more interested in Fowler's ideas than mine, yet I write.

This is another source of occasional guilt... Shouldn't this blog be about great ideas? When I write about, say, Bayard's book, shouldn't the entry be about Bayard's book? Or at least about Bayard?

Bayard helps me to answer these questions. Let's switch from Montaigne, the focus of my last entry on this topic, to Wilde. The lead quote of Bayard's Chapter 12 was the first passage of the book to seize my attention as I thumbed through it:

Speaking About Yourself

(in which we conclude, along with Oscar Wilde,
that the appropriate time span for reading a book
is ten minutes, after which you risk forgetting
that the encounter is primarily a pretext
for writing your autobiography)

My experience writing this blog biases me toward shouting out, "Amen, Brother Bayard!" But, if it is true that all of my writing is a pretext for writing my autobiography, then it is all the more remarkable that I have any readers at all. Certainly you all have figured this out by now.

Bayard claims -- and Wilde agrees -- that it cannot be any other way. You may find more interesting people writing about themselves and read what they write, but you'll still be reading about the writer. (This is cold consolation for someone like me, who knows himself to be not particularly interesting!)

Bayard explores Wilde's writing on this very subject, in particular his The Critic as Artist (HB++). Bayard begins his discussion with the surface connection of Wilde offering strident support for the idea of not reading. Wilde says that, in addition to making lists of books to read and lists of books worth re-reading, we should also make lists of books not to read. Indeed, a teacher or critic would do an essential service for the world by dissuading people from wasting their time reading the wrong books. Not reading of this sort is a "power acquired by specialists, a particular ability to grasp what is essential".

Bayard then moves on to a deeper connection. Wilde asserts in his typical fashion that the distinction between creating a work of art and critiquing a work of art is artificial. First, the artist, when creating, necessarily exercises her critical faculty in the "spirit of choice" and the "subtle tact of omission"; without this faculty no one can create art, at least not art worth considering. This is an idea that most people are willing to accept, especially those creative people who have some awareness of how they create.

But what of the critic? Many people consider critics to be parasites who at best report what we can experience ourselves and at worst detract from our experience with their self-indulgent contributions.

Not Wilde:

Criticism is itself an art. And just as artistic creation implies the working of the critical faculty, and, indeed, without it cannot be said to exist at all, so Criticism is really creative in the highest sense of the word. Criticism is, in fact, both creative and independent.

This means that a blogger who primarily comments on the work of others can herself be making art, creating new value. By choosing carefully ideas to discuss, subtly omitting what does not matter, the critic creates a new work potentially worthy of consideration in its own right. (Suddenly, the idea of a mashup comes to mind.)

The idea of critic as an independent creator is key. Wilde says:

The critic occupies the same relation to the work of art he criticises as the artist does to the visible world of form and colour, or the unseen world of passion and thought. He does not even require for the perfection of his art the finest materials. Anything will serve his purpose.

...

To an artist so creative as the critic, what does subject-matter signify? No more and no less than it does to the novelist and the painter. Like them, he can find his motives everywhere. Treatment is the test. There is nothing that has not in it suggestion or challenge.

Bayard summarizes other comments from Wilde in this way:

The work being critiqued can be totally lacking in interest, then, without impairing the critical exercise, since the work is there only as a pretext.

But how can this be?? Because ultimately, the writer writes about himself. Freed from the idea that writing about something else is about that something, the writer is able to use the something as a trigger, a cue to write about the ideas that lie in his own mind. (Please read the first paragraph of the linked entry, if nothing else. Talk about not reading!)

As Wilde says,

That is what the highest criticism really is, the record of one's own soul.

Again, Bayard summarizes neatly:

Reflection on the self ... is the primary justification for critical activity, and this alone can elevate criticism to the level of art.

As I read this chapter, I felt as if Bayard and Wilde were speaking directly to me and my own doubts as a blogger who likes to write about works I read, performances I see, and experiences I have. It is a blogger's manifesto! Knowing and Doing feels personal to me because it is. Those works, performances, and experiences stimulate me to write, and that's okay. It is the nature of creativity to be sparked by something Other and to use that spark to express something that lies within the Self. Reading about Montaigne and his fear of forgetting what he had written was a trigger for me to write something I'd long been thinking. So I did.

I can take some consolation: This blog may not be worth reading, but not because I choose to connect what I read, see, hear, and feel to myself. It can be unworthy only to the extent that what is inside me is uninteresting.

By the way, I have just talked quite a bit about "The Critic as Artist", though I have never read it. I have only read the passages quoted by Bayard, and Bayard's commentary on it. I intend to read the original -- and begin forgetting it -- soon.

~~~~~

These three entries on Bayard's delightful little text cover a lot of ground in the neighborhood of guilt. We often feel shame at not having read something, or at not having grown from it. When we write for others, it is easy to become too concerned with getting things right, with being perfect, with putting on appearances. But consider this final quote from Bayard:

Truth destined for others is less important than truthfulness to ourselves, something attainable only by those who free themselves from the obligation to seem cultivated, which tyrannizes us from within and prevents us from being ourselves.

Long ago, near the beginning of this blog, I quoted Epictetus's The Enchiridion, via the movie Serendipity, of all places. That quote has a lot in common with what Bayard says here. Freeing ourselves from the obligation to seem cultivated -- being content to be thought foolish and stupid -- allows us to grow and to create. Epictetus even refers to keeping our "faculty of choice in a state conformable to nature", just as Wilde stresses the role of the critical faculty in creating a work of art when we write.

Helping readers see this truth, and releasing them from the obligation to appear knowing, is the ultimate source of the value of How to Talk About Books You Haven't Read. Perhaps Bayard will be proud that I mark it FB++.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

June 05, 2008 3:49 PM

Not Reading, and Writing

In my last entry, I talked about Pierre Bayard's How to Talk About Books You Haven't Read, which I have, indeed, read. Bayard started with the notion that no one should feel shame about not having read a book, even when we are called upon to talk about it. He eventually reached a much more important idea, that by freeing ourselves from this and other fears we have about books and learning we open ourselves to create art of our own. This entry looks at the bigger idea.

The issues that Bayard discusses throughout the book touch me in several personal and professional ways. I am a university professor, and as a teacher I am occasionally asked by students to talk about books and papers. I've read many of these, but not all; when I have read a work, though, I may well have forgotten a lot of it. In either case, I can find myself expected to talk intelligently about a work I don't know well. Not surprisingly, students show considerable deference to their teachers, in part because they expect a level of authority. That's pressure. Personally, I sometimes hang with an interesting, literate, well-read crowd. They've all read a lot of cool stuff; why haven't I? They don't ask me that, of course, but I ask myself.

Bayard assures us "don't worry!", explains why not, and tells us how to handle several common situations in which we will find ourselves. That's the idea -- partly humorous, partly serious -- behind the book.

But there is more to the book, both humor and truth, that connected with me. Consider:

Reading is not just acquainting ourselves with a text or acquiring knowledge; it is also, from its first moments, an inevitable process of forgetting.

Until I started writing this blog, I did not have a good sense of how bad my memory is for what I have read. I've never had a high level of reading comprehension. Those standardized tests we all took in grade school showed me to be a slow reader with only moderate comprehension, especially when compared to my performance in school. One of the best outcomes for me of writing this blog has been to preserve some of what I read, especially the part that seems noteworthy at the time, before I start to forget it.

The chapter that contains the sentence quoted above begins with this subtitle:

(in which, along with Montaigne, we raise
the question of whether a book you have
read and completely forgotten, and which
you have even forgotten you have read,
is still a book you have read)

Montaigne writes with fear about his forgetfulness, the loss of any memory of having read a book. Does that still count? In one sense, yes. I've held Ringworld in my hands and taken in the words on each page. But in most ways I am today indistinguishable from a person who has never read the book, because I don't remember much more than the picture on the cover. Bayard explores this and other ways in which the idea of "to read" is ambiguous and bases his advice on the results.

How about any of the many, many technical computer science books I've read? The same fate. There is one solace to be had when we consider books that teach us how to do something. The knowledge we gain from reading technical material can become part of our active skill base, so that even after we have forgotten "knowledge that" the content of a compiler text is true, we can still have "knowledge how" to do something.

But reading is not the end of forgetting. Montaigne was an essayist. What about writing? Montaigne expects his loss to extend to his own work:

It is no great wonder if my book follows the fate of other books, and if my memory lets go of what I write as of what I read, and of what I give as of what I receive.

Forgetting what I have written is a sensation I share with Montaigne. Occasionally, I go back and re-read an old entry in this blog, or a month of entries, and am amazed. Sometimes, I am amazed that I wrote such drivel. Other times, I am amazed that I had such a good idea and managed to express it well. And, yes, I am often amazed to be reminded I have read something I've forgotten all about. In the best of these cases, the entry includes a quotation or, even better, a link to the work. This allows me to read it again, if I desire. I usually do.

That is good news. We can hold at bay the forgetting of what we read by re-reading. But there is another risk in forgetting: writing the same thing again. Bayard reports Montaigne's fear:

Incapable of remembering what he has written, Montaigne finds himself confronted with the fear of all those losing their memory: repeating yourself without realizing it....

Loss of memory creates an ambiguity in the writer's mind. It's common for me when writing a blog entry to have a sense of deja vu that I've written the same thing before. Sometimes my mind is playing tricks on me, due to the rich set of links in my brain, but sometimes I have indeed written it before and forgotten. The fear in forgetting what we have written is heightened by the fear that what we write is unremarkable. We may remember the idea that stands out, but how are we to remember the nondescript? I often feel as Montaigne did:

Now I am bringing in here nothing newly learned. These are common ideas; having perhaps thought of them a hundred times, I am afraid I have already set them down.

I feel this almost no matter what I write. Surely my thoughts are common; what value is there in writing them down for others to read? That's why it was good for me to realize at the very beginning that I had to write for myself. Only then would I find the courage to write at all, and maybe benefit someone else. When confronted by a sense that I am writing the same ideas again, I just have to be careful. And when I do repeat myself, I must hope that I do it better the second time, or at least differently enough to add something that makes the new piece worth a reader's time.

The danger in forgetting what I have written is not only in writing again. What about when a reader asks me about something I have written? Montaigne faced this fear, too, as Bayard writes:

But fear of repeating himself is not the only embarrassing consequence of forgetting his own books. Another is that Montaigne does not even recognize his own texts when they are quoted in his presence, leaving him to speak about texts he hasn't read even though he has written them.

That is at least two layers of not reading more than most of us expect to encounter in any situation! But the circumstance is quite real. When someone sends me e-mail asking about something I've forgotten I wrote, I have the luxury of time to re-read (there it is again!) before I respond. My correspondent is likely none the wiser. But what if they ask me in person? I am right where Bayard says I will be, left to respond in many of the ways he describes.

When I write about what I read and think about, there is a great risk that people will expect me to be changed by the experience! I did not do myself any favors when I chose a name for my blog, because I create an expectation about both knowing and doing. I certainly hope that I am changed by my experience reading and writing, but I know that often I have not changed, at least not sufficiently. I still give lame assignments. I'm not that much better as a teacher at helping students learn more effectively. My shortcoming is all the more obvious when students and former students read my blog and are able to compare their experiences in my classes with my aspirations.

This is actually more a source of guilt for me than being thought not to have read a book. I know I am not as good as all the things I've read might lead one to believe, or even as good as what I've written (which sets a much lower bar!). If I am not growing, what is the point?

Of course, I probably am changing, in small increments beneath the scale of my perception. At least I hope so. Bayard doesn't say this in so many words, but I think it is implicit in how he approaches reading and not reading. For him, there is no distinction between the two:

We do not retain in memory complete books identical to the books remembered by everyone else, but rather fragments surviving from partial readings, frequently fused together and recast by our private fantasies.

This is a central theme for Bayard, and for me as well. He talks at length about the different ways our inner conception of books and libraries affects the act of reading any book. I often wonder how much of what I report about a book or paper I read is a product of me imposing my own view on what the writer has said -- seeing not what is there, not what the author intended, but something that distorts the writer's meaning. How much of what I am writing about Bayard's book reflects accurately the book he wrote?

Bayard would be unconcerned. On his view, I could no more avoid imposing my inner book on the outer one than I could be Bayard himself. No one can avoid distortion (in an objectivist sense) or imposition of self (in a subjectivist sense). What distinguishes the master is how forcefully and creatively one does so. Private fantasy, indeed!

To conceive of reading as loss ... rather than as gain is a psychological resource essential to anyone seeking effective strategies for surviving awkward literary confrontations.

Can I admit that I have forgotten something I've read or written? Certainly; I do it frequently. The key is to talk about the work as who I am in the present. I don't even need to explicitly acknowledge the loss, because the loss is a given. But I must talk without embarrassment and without any pretension that I remember exactly what I said or thought then. The human mind works in a certain way, and I must accept that state of affairs and get down to the business of learning -- and creating -- today.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

June 04, 2008 2:07 PM

Not Reading Books

I have another million to my credit, and it was a marvelous little surprise.

Popular culture is full of all sorts of literary references with which you and I are supposed to be familiar. Every year brings another one or two. The Paradox of Choice. The Tipping Point. The Wisdom of Crowds. Well-read people are expected, well, to have read the books, too. How else can we expect to keep up with our friends when they discuss these books, or to use the central wisdoms they contain in knowing ways?

I have a confession. I have read only two or three chapters of The Wisdom of Crowds. I have read only an excerpt from The Tipping Point that appeared in the New Yorker or some other literary magazine. And while I've seen a Google talk by Barry Schwartz on-line, I may not have read anything more than a short precis of the work. Of course, I have learned a lot about them from my friends, and by reading about them in various other contexts. But, strictly speaking, I have not read any of them.

To be honest, I feel no shame about this state of affairs. There are so, so many books to read, and these just have not seemed important enough to displace others from my list. And in the case of The Wisdom of Crowds, I found that one or two chapters told me pretty much all I needed to understand the Big Idea it contained. Much as Seth Godin has said about many popular business books, many books in the popular canon can be boiled down to much shorter works in their essence, with the rest being there for elaboration or academic gravitas.

cover of How to Talk About Books You Haven't Read

For airplane reading on my trip to the workshop at Google, I took Pierre Bayard's How to Talk About Books You Haven't Read. Bayard's thesis is that neither I nor anyone else should feel shame about not having read any given book, even if we feel a strong compulsion to comment, speak, or write about it. In not reading and talking anyway, we are part of a grand intellectual tradition and are, in fact, acting out of necessity. There are simply too many books to read.

This problem arises even in the most narrow technical situation. When I wrote my doctoral dissertation, I surely cited works with which I was familiar but which I had not "read", or, having read them, had only skimmed them for specific details. I recall feeling a little bit uneasy; what if some part of the book or dissertation that I had not studied deeply said something surprising or wrong? But I knew a lot about these works in context: from other people's analyses, from other works by the same author, and even from having discussed the work with the author him- or herself. But in an important way, I was talking about a work I "had not read".

How I could cite the work anyway and still feel I was being intellectually honest gets to one of the central themes of Bayard's book: the relationships between ideas are often more important than the ideas themselves. To understand a work in the context of the universal library means more than just to know the details of the work, and the details themselves are often so affected by conditions outside of the text that they are less reliable than the bigger picture anyway.

First, let me assure you. Bayard wrote this book with a wink in his eye. At times, he speaks with a cavalier sarcasm. He also repeats himself in places; occasional paragraphs sound as if they have been lifted verbatim from previous chapters.

Second, this book fits Seth Godin's analysis of popular business books pretty well. Two or three chapters were sufficient to express the basic idea of this book. But such a slim product would have missed something important. How to Talk About Books You Haven't Read started as a joke, perhaps over small talk at a cocktail party, but as Bayard expanded on the idea he ended up with an irreverent take on reading, thinking, and understanding that carries a lot more truth than I might first have imagined. Readers of this blog who are software patterns aficionados might think of Big Ball of Mud in order to understand just what I mean: antipattern as pattern, when looked at from a different angle.

This book covers a variety of books that deal in some way with not reading books but talking about them. Along the way, Bayard explores an even wider variety of ideas. Many of these sound silly, even wrong, at first, and he uses this to weave a lit-crit tale that is perfect parody. But as I read, I kept saying, "Yeah, but... in a way, this really is true."

For example, Bayard posits that reading too much can cause someone to lose perspective in the world of ideas and to lose one's originality. In a certain way, the reader subordinates himself to the writer, and so reading too much means always subordinating to another rather than creating ideas oneself. We could read this as encouragement not to read (much), which would miss his joke. But there is another level at which he is dead-on right. I knew quite a few graduate students who learned this firsthand when they got into a master's program and found that they preferred immersing themselves in the research of others to doing creative research of their own. And there are many blogs which do a fine job reporting on other people's work but which never seem to offer much new. (I struggle with that danger each time I write in this blog.)

Not reading does not mean that we cannot have an opinion. My friends and I are examples of this. Students are notorious for this, and Bayard, a lit professor, discusses the case of students in class at some length. But I was most taken by his discussion of Laura Bohannan's experience telling the story of Hamlet to the Tiv people of West Africa. As she told the story, the Tiv elders interpreted the story for her, correcting her -- and Western culture, and Shakespeare -- along the way. One of the interpretations was a heterodoxy that has a small but significant following among Shakespeare scholars. The chief even ventured to predict how the story ended, and did a pretty good job. Bayard used this as evidence that not reading a book may actually leave our eyes open to new possibilities. Bohannan's story is available on-line, and you really should read it -- it is delightful.

Bayard talks about so many different angles on our relationship with books and stories about them, including

  • how we as listeners tend to defer to a speaker, and thus allow a non-reader to successfully talk about a book to us, and
  • how some readers are masters at imposing their own views on even books they have read, which should give us all license to impose ourselves on the books we haven't read.

One chapter focuses on our encounters with writers, and the ticklish situations they create for the non-reader and for the writer. In another, Bayard deals with the relationship among professors, students, and books. It made me think about how students interpret the things I say in class, whether about our readings or the technical material we are learning. Both of these chapters figure in a second entry I'm writing about this book, as well as chapters on the works of Montaigne and Wilde.

In one chapter, Bayard uses as his evidence the campus novels of David Lodge, of whom I am a big fan. I've never blogged about them, but I did use the cover of one of his books to illustrate a blog entry. Yet another chapter draws on Bill Murray's classic Groundhog Day, an odd twist in which actually reading books enters into Bayard's story and supports his thesis. I have recommended this film before and gladly do so again.

As in so many situations, our fear of appearing weak or unknowledgable is what prevents us from talking freely about a book we haven't read, or even to admit that we have not read it. But this same fear is also responsible for discouraging us from talking about books we have read and about ideas we have considered. This is ultimately the culprit that Bayard hopes to undermine:

But our anxiety in the face of the Other's knowledge is an obstacle to all genuine creativity about books. The idea that the Other has read everything, and thus is better informed than us, reduces creativity to a mere stopgap that non-readers might resort to in a pinch. In truth, readers and non-readers alike are caught up in an endless process of inventing books, whether we like it or not, and the real question is not how to escape that process, but how to increase its dynamism and its range.

Bayard's book is worth reading just for his excerpts of other books and for his pointer to the story about the Tiv. I should probably feel guilt at not having read this book yet when so many others have, but I'm just happy to have read it now.

While reading on the plane coming home, I glanced across the aisle and noticed another passenger reading Larry Niven's Ringworld. I smiled and thought "FB++", in Bayard's rating system. I could talk about it nonetheless.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

May 31, 2008 1:21 AM

Google Impressions

I have already mentioned a couple of my first impressions of being a guest on the Google campus:

  • good, plentiful, and diverse food and drink for employees and guests alike, and
  • good, plentiful, and diverse power cables built right into the meeting tables.

Here are a few other things I noticed.

Calling it the "Google campus" is just right. It looks and feels like a college campus. Dining service, gym facilities, a small goodies store, laundry, sand volleyball courts... and lots of employees who look college-aged because they recently were.

Everywhere we walked outdoors, we saw numerous blue bicycles. They are free for the use of employees, presumably to move between buildings. But there appeared to be bike trails across the road where the bikes could be used for recreation, too.

The quad area between Buildings 40 and 43 had a dinosaur skeleton with pink flamingos in its mouth. Either someone forgot to tell the dinosaur "don't be evil", or the dinosaur has volunteered to serve as aviary for kitsch.

The same area included a neat little vegetable garden. How's that for eating local? (Maybe the dinosaur just wanted to fit in.)

As we entered Building 43 for breakfast, we were greeted with a rolling display of search terms that Google was processing, presumably in real time. I wondered if we were seeing a filtered list, but we did see a "paris hilton" in there somewhere.

The dining rooms served Google-branded ice cream sandwiches, IT's IT, "a San Francisco tradition since 1928". In typical Google fashion, the tasty treat (I verified its tastiness empirically with a trial of size N=2) has been improved, into "a natural, locally sourced, trans-fat-free rendition of their excellent treat". So there.

I don't usually comment on my experience in the restroom, but... The men's rooms at Google do more than simply provide relief; they also provide opportunities for professional development. Testing on the Toilet consists of flyers over the urinal with stories and questions about software testing. (But what's a "C.L.", as in "one conceptual change per C.L."?) I cannot confirm that female engineers at Google have the same opportunities to learn while taking the requisite breaks from their work.

I earlier commented that we visitors had to stay within sight of a Google employee. After a few more hours on campus, it became clear that security is a major industry at Google. Security guards were everywhere. My fellow guests and I couldn't decide whether they were guarding against intellectual property theft by brazen Microsoft or Yahoo! employees or souvenir theft by Google groupies. But I did decide that the Google security force far outnumbers the police force in my metro area.

All in all, an interesting and enjoyable experience.


Posted by Eugene Wallingford | Permalink | Categories: General

May 31, 2008 12:33 AM

K-12 Road Show Summit, Day Two

The second half of the workshop opened with one of the best sessions of the event, the presentation "What Research Tells Us About Best Practices for Recruiting Girls into Computing" by Lecia Barker, a senior research scientist at the National Center for Women and IT. This was great stuff, empirical data on what girls and boys think and prefer. I'll be spending some time looking into Barker's summary and citations later. Some of the items she suggested confirm common sense, such as not implying that you need to be a genius to succeed in computing; you only need to be capable, like anything else. I wonder if we realize how often our actions and examples implicitly say "CS is difficult" to interested young people. We can also use implicit cues to connect with the interests of our audience, such as applications that involve animals or the health sciences, or images of women performing in positions of leadership.

Other suggestions were newer to me. For example, evidence shows that Latina girls differ more from white and African-American girls than white and African-American girls differ from each other. This is good to know for my school, which is in the Iowa metro area with the highest percentage of African-Americans and a burgeoning Latina population. She also suggested that middle-school girls and high-school girls have different interests and preferences, so outreach activities should be tailored to the audience. We need to appeal to girls now, not to who they will be in three years. We want them to be making choices now that lead to a career path.

A second Five-Minute Madness session had less new information for me. I thought most about funding for outreach activities, such as ongoing support for an undergraduate outreach assistant whom we have hired for next year using a one-time grant from the university's co-op office. I had never considered applying for a special projects grant from the ACM for outreach, and the idea of applying to Avon was even more shocking!

The last two sessions were aimed at helping people get a start on designing an outreach project. First, the whole group brainstormed ideas for target audiences and goals, and then the newbies in the room designed a few slides for an outreach presentation with guidance from the more experienced people. Second, the two groups split, with the newbies working more on design and the experienced folks discussing the biggest challenges they face and ways to overcome them.

These sessions again made clear that I need to "think bigger". One, outreach need not aim only at schools; we can engage kids through libraries, 4-H (which has broadened its mission to include technology teams), the FFA, Boys and Girls Clubs, and the YMCA and YWCA. Some schools report interesting results from working with minority girls through mother/daughter groups at community centers. Sometimes, the daughters end up encouraging the moms to think bigger themselves and seek education for more challenging and interesting careers. Two, we have a lot more support from upper administration and from CS faculty at my school than most outreach groups have at their schools. This means that we could be more aggressive in our efforts. I think we will next year.

The workshop ended with a presentation by Gabe Cohen, the project manager for Google Apps. This was the only sales pitch we received from Google in the time we were here (other than being treated and fed well), and it lasted only fifteen minutes. Cohen showed a couple of new-ish features of the free Apps suite, including spreadsheets with built-in support for web-based form input. He closed hurriedly with a spin through the new AppEngine, which debuted to the public on Wednesday. It looks cool, but do I have time?

The workshop was well-done and worth the trip. The main point I take away is to be more aggressive on several fronts, especially in seeking funding opportunities. Several companies we work with have funded outreach activities at other schools, and our state legislative and executive branches have begun to take this issue seriously from the standpoint of economic development. I also need to find ways to leverage faculty interest in doing outreach and interest from our administration in both STEM education initiatives and community service and outreach.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

May 30, 2008 7:23 PM

K-12 Road Show Summit, Day One

The workshop has ended. Google was a great host, from beginning to end. They began offering food and drinks almost immediately, and we never hungered or thirsted for long. That part of the trip made Google feel like the young person's haven it is. Wherever we went, the meeting tables included recessed power and ethernet cables for every kind of laptop imaginable, including my new Mac. (Macbook Pros were everywhere we went at Google.) But we also learned right away that visitors also must stay within bounds. No wandering around was allowed; we had to remain within sight of a Googler. And we were told not to take any photos on the grounds or in the buildings.

The workshop was presented live from within Google Docs, which allowed the leaders and presenters to display from a common tool and to add content as we went along. The participants didn't have access to the doc, but we were given it as a PDF file -- on the smallest flash drive I've ever owned. It's a 1GB stick with the dimensions of the delete key on my laptop (including height).

The introduction to the workshop consisted of a linked-list game in which each person introduced the person to his left, followed by remarks from Maggie Johnson, the Learning and Development Director at Google Engineering, and Chris Stephenson, the executive director of ACM's Computer Science Teachers Association. The game ran a bit long, but it let everyone see how many different kinds of people were in the room, including a lot of non-CS faculty who lead outreach activities for some of the bigger CS departments. Chris expressed happiness that K-12, community colleges, and universities were beginning to work together on the CS pipeline. Outreach is necessary, but it can also be joyful. (This brought to mind her panel statement at SIGCSE, in a session I still haven't written up...)
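
Being the programmer I am, I can't resist sketching that icebreaker as the data structure it is named for. This is a purely hypothetical toy in Python -- nothing we wrote at the workshop, with names borrowed from the day's speakers just for fun:

    # The icebreaker as a circular linked list: each participant knows
    # only the neighbor on the left, and the "game" is one full walk
    # around the circle.

    class Participant:
        def __init__(self, name):
            self.name = name
            self.left = None            # the neighbor this person introduces

    def seat_in_circle(names):
        """Link each participant to the person on his or her left."""
        people = [Participant(n) for n in names]
        for i, p in enumerate(people):
            p.left = people[(i + 1) % len(people)]   # the circle wraps around
        return people[0]

    def play(first):
        """Go once around the circle, each person introducing a neighbor."""
        p = first
        while True:
            print(p.name, "introduces", p.left.name)
            p = p.left
            if p is first:
                break

    play(seat_in_circle(["Maggie", "Chris", "Liz"]))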

Next up was Liz Adams reporting on her survey of people and places who are doing road shows or thinking about it. She has amassed a lot of raw data, which is probably most useful as a source of ideas. During her talk, someone asked, does anyone know if what they are doing is working? This led to a good discussion of assessment and just what you can learn. The goals of these road shows are many. When we meet with students, are we recruiting for our own school? Or are we trying to recruit for the discipline, getting more kids to consider CS as a possible major? Are we working to reach more girls and underrepresented groups, or do we seek a rising tide? Perhaps we are doing service for the economy of our community, region, or state? The general answer is 'yes' to all of these things, which makes measuring success all the more difficult. While it's comforting to shoot wide, this may not be the most effective strategy for achieving any goal at all!

One idea I took away from this session was to ask students to complete a short post-event evaluation. I view most of our outreach activities these days as efforts to broaden interest in computer science generally, and to broaden students' views of the usefulness and attractiveness of computing even more generally. So I'd like to ask students about their perceptions of computing after we work with them. Comparing these answers to ones gathered before the activity would be even better. My department already asks students declaring CS majors to complete a short survey, and I plan to ensure it includes a question that will allow us to see whether our outreach activities have had any effect on the new students we see.

Then came a session called Five-Minute Madness, in which three people from existing outreach programs answered several questions in round-robin fashion, spending five minutes altogether on each. I heard a few useful nuggets here:

  • Simply asking a young student "What will you be when you grow up?" and then talking about what we do can be a powerful motivator for some kids.

  • Guidance counselors in the high schools are seriously misinformed about computing. No surprise there. But they often don't have access to the right information, or the time to look for it. The outreach program at UNC-Charlotte has produced a packet of information specifically for school counselors, and they visit with the counselors on any school visit they can.

  • Reaching the right teacher in a high school can be a challenge. It is hard to find "CS teachers" because so few states certify that specialty. Don't send letters addressed to the "computing teacher"; it will probably end up in the trash can!

  • We have to be creative in talking to people in the Department of Education, as well as making sure we time our mailings and offerings carefully. Know the state's rules about curriculum, testing, and the like.

  • It's a relationship. Treat initial contacts with a teacher like a first date. Take the time to connect with the teacher and cultivate something that can last. One panelist said, "We HS teachers need a little romance." If we do things right, these teachers can become our biggest advocates and do a lot of "recruiting" for us through their everyday, long-term relationship with the students.

Dinner in one of the Google cafeterias was just like dinner in one of my university's residence halls, only with more diverse fare. A remarkable number of employees were there. Ah, to be young again.

Our first day closed with people from five existing programs telling us about their road shows. My main thought throughout this session was that these people spend a lot of time talking to -- at -- the kids. I wonder how effective this is with high school students and imagine that as the audience gets younger, this approach becomes even less effective. That said, I saw a lot of good slides with information that we can use to do some things. The presenters have developed a lot of good material.

Off to bed. Traveling west makes for long, productive days, but it also makes me ready to sleep!


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

May 23, 2008 3:17 PM

The Split Mind of a Scientist

Someone pointed me toward a video of a talk given at Google by John Medina on his new book Brain Rules. I enjoyed the talk and will have to track down a copy of the book. Early on, he explains that the way we have designed our schools and workplaces produces the worst possible environments for us to learn and work in. But my favorite passage came near the end, in response to the question, "Do you believe in magic?"

Hopefully I'm a nice guy, but I'm a really grumpy scientist, and in the end, I'm a reductionist. So if you can show me, [I'll believe it]. As a scientist, I have to be grumpy about everything and be able to be willing to believe anything. ... If you care what you believe, you should never be in the investigative fields -- ever. You can't care what you believe; you just have to care what's out there. And when you do that, your bandwidth is as wide as that sounds, and the rigor ... has to be as narrow as the biggest bigot you've ever seen. Both are resident in a scientist's mind at the same time.

Yes. Unfortunately, public discourse seems to include an unusually high number of scientists who are very good at the "being grumpy about everything" part and not so good at the "being able to be willing to believe anything" part. Notice that Medina said "be able to be willing to believe", not "be willing to believe". I think that some people are less able to be willing to believe something they don't already believe, which makes them not especially good candidates to be scientists.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 20, 2008 12:47 PM

Cognitive Surplus and the Future of Programming

the sitcom All in the Family

I grew up on the sitcoms of the 1970s and 1980s. As kids, we watched almost everything we saw in reruns, whether from the '60s or the '70s, and I enjoyed so many of them. By the time I got to college, I had well-thought-out ideas on why The Dick Van Dyke Show remains one of the best sitcoms ever, why WKRP in Cincinnati was underrated for its quality, and why All in the Family was _the_ best sitcom ever. I still hold all these biases in my heart. Of course, I didn't limit myself to sitcoms; I also loved light-action dramas, especially The Rockford Files.

Little did I know then that my TV viewing was soaking up a cognitive surplus in a time of social transition, or that it had anything in common with gin pushcarts in the streets of London at the onset of the Industrial Revolution.

Clay Shirky has published a wonderful little essay, Gin, Television, and Social Surplus that taught me these things and put much of what we see happening on the web into the context of a changing social, cultural, and economic order. Shirky contends that, as our economy and technology evolve, a "cognitive surplus" is created. Energy that used to be spent on activities required in the old way is now freed for other purposes. But society doesn't know what to do with this surplus immediately, and so there is a transition period where the surplus is dissipated in (we hope) harmless ways.

My generation, and perhaps my parents', was part of this transition. We consumed media content produced by others. Some denigrate that era as one of mindless consumption, but I think we should not be so harsh. Shows like All in the Family and, yes, WKRP in Cincinnati often tackled issues on the fault lines of our culture and gave people a different way to be exposed to new ideas. Even more frivolous shows such as The Dick Van Dyke Show and The Rockford Files helped people relax and enjoy, and this was especially useful for those who were unprepared for the expectations of a new world.

We are now seeing the advent of the new order in which people are not relegated to consuming from the media channels of others but are empowered to create and share their own content. Much attention is given by Shirky and many, many others to the traditional media such as audio and video, and these are surely where the new generation has had its first great opportunities to shape its world. As Shirky says:

Here's something four-year-olds know: A screen that ships without a mouse ships broken. Here's something four-year-olds know: Media that's targeted at you but doesn't include you may not be worth sitting still for.

But as I've been writing about here, let's not forget the next step: the power to create and shape the media themselves via programming. When people can write programs, they are not relegated even to using the media they have been given but are empowered to create new media, and thus to express and share ideas that may otherwise have been limited to the abstraction of words. Flickr and YouTube didn't drop from the sky; people with ideas created new channels of dissemination. The same is true of tools like Photoshop and technologies such as wikis: they are ideas turned into reality through code.

Do read Shirky's article, if you haven't already. It has me thinking about the challenge we academics face in reaching this new generation and engaging them in the power that is now available to them. Until we understand this world better, I think that we will do well to offer young people lots of options -- different ways to connect, and different paths to follow into futures that they are creating.

One thing we can learn from the democratized landscape of the web, I think, is that we are not offering one audience many choices; we are offering many audiences the one or two choices each that they need to get on board. We can do this through programming courses aimed at different audiences and through interdisciplinary major and minor programs that embed the power of computing in the context of problems and issues that matter to our students.

Let's keep around the good old CS majors as well, for those students who want to go deep creating the technology that others are using to create media and content -- just as we can use the new technologies and media channels to keep great old sitcoms available for geezers like me.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

May 16, 2008 2:55 PM

The Seductiveness of Job Security

A former student recently mentioned a tough choice he faces. He has a great job at a Big Company here in the Midwest. The company loves him and wants him to stay for the long term. He likes the job, the company, and the community in which he lives. But this isn't the sort of job he originally had hoped for upon graduation.

Now a position of just the sort he was originally looking for is available to him in a sunny paradise. He says, "I have quite a decision to make.... it's hard to convince myself to leave the secure confines of [Big Company]. Now I see why their turnover rate is so low."

I had a hard time offering any advice. When I was growing up, my dad worked for Ford Motor Company in an assembly plant, and he faced insecurity about the continuance of his job several times. I don't know how much this experience affected my outlook on jobs, but in any case my personality is one that tends to value security over big risk/big gain opportunities.

Now I hold a job with greater job security than anyone who works for a big corporation. An older colleague is fond of saying, "Real men don't accept tenure." I first heard him say that when I was in grad school, and I remember not getting it at all. What's not to like about tenure?

After a decade with tenure, I understand better now what he means. I always thought that the security provided by having tenure would promote taking risks, even if only of the intellectual sort. But too much security is just as likely to stunt growth and inhibit taking risks. I sometimes have to make a conscious effort to push myself out of my comfort zone. Intellectually, I feel free to try new things, but pushing myself out of a comfortable nest here into a new environment -- well, that's another matter. What are the opportunity costs in that?

I love what Paul Graham says about young CS students and grads having the ability to take entrepreneurial risk, and how taking those risks may well be the safer choice in the long run. It's kind of like investing in stocks instead of bonds, I think. I encourage all of my students to give entrepreneurship a thought, and I encourage even more the ones who I think have a significant chance to do something big. There is probably a bit of wistfulness in my encouragement, not having done that myself, but I don't think I'm simply projecting my own feelings. I really do believe that taking some employment risk, especially while young, is good for many CS grads.

But when faced with a concrete case -- a particular student having to make a particular decision -- I don't feel quite so cocksure in saying "go for it with abandon". This is not abstract theory; his job and home and fiancee are all in play. He will have to make this decision on his own, and I'd hate to push him toward something that isn't right for him from my cushy, secure seat in the tower. I feel a need to stay abstract in my advice and leave him to sort things out. Fortunately, he is a bright, level-headed guy, and I'm sure he'll do fine whichever way he chooses. I wish him luck.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

May 15, 2008 4:30 PM

Being Part of a Group

Surround yourself with smart, competent people, and you will find ideas in the air. One of the compelling thoughts in that article -- Malcolm Gladwell's "In the Air" -- is this:

A scientific genius is not a person who does what no one else can do; he or she is someone who does what it takes many others to do.

For those of us who are not geniuses, the lesson is that we can still accomplish great things -- if we take part in the right sort of collaboration and are curious, inquisitive, and open to big ideas. I think this applies not only to inventions but also to ideas for start-ups and insight into class projects.

(So go to class. You'll find people there.)

But being in a group is not a path to easy accomplishment, as people who have tried to write a book in a group know. Mitzenmacher puts it this way:

Talking about a "group-book" is a lot of fun. Actually putting one together, maybe less fun.

The ongoing ChiliPLoP working group of which I am a member is another data point for Mitzenmacher's claim. Doing more than brainstorming ideas in a group takes all the same effort, coordination, and individual and collective responsibility as any other sort of work.

(As an aside, I love Stigler's Law as quoted in the Gladwell article linked above! Self-reference can be a joy, especially with the twist engendered by this one.)


Posted by Eugene Wallingford | Permalink | Categories: General

May 13, 2008 9:15 AM

Solid and Relevant

I notice a common rhetorical device in many academic arguments. It goes like this. One person makes a claim and offers some evidence. Often, the claim involves doing something new or seeing something in a new way. The next person rebuts the argument with a claim that the old way of doing or seeing things is more "fundamental" -- it is the foundation on which other ways of doing and seeing are built. Oftentimes, the rebuttal comes with no particular supporting evidence, with the claimant relying on many in the discussion to accept the claim prima facie. We might call this The Fundamental Imperative.

This device is standard issue in the CS curriculum discussions about object-oriented programming and structured programming in first-year courses. I recently noticed its use on the SIGCSE mailing list, in a discussion of what mathematics courses should be required as part of a CS major. After several folks observed that calculus was being de-emphasized in some CS majors, in favor of more discrete mathematics, one frequent poster declared:

(In a word, computer science is no longer to be considered a hard science.)

If we know [the applicants'] school well we may decide to treat them as having solid and relevant math backgrounds, but we will no longer automatically make that assumption.

Often, the conversation ends there; folks don't want to argue against what is accepted as basic, fundamental, good, and true. But someone in this thread had the courage to call out the emperor:

If you want good physicists, then hire people who have calculus. If you want good computer scientists, then hire people who have discrete structures, theory of computation, and program verification.

I don't believe that people who are doing computer science are not doing "hard science" just because it is not physics. The world is bigger than that.

...

You say "solid and relevant" when you really should be saying "relevant". The math that CS majors take is solid. It may not be immediately relevant to problems [at your company]. That doesn't mean it is not "solid" or "hard science".

I sent this poster a private "thank you". For some reason, people who drop The Fundamental Imperative into an argument seem to think that it is true absolutely, regardless of context. Sure, there may be students who would benefit from learning to program using a "back to the basics" approach, and there may be CS students for whom calculus will be an essential skill in their professional toolkits. But that's probably not true of all students, and it may well be that the world has changed enough that most students would benefit from different preparation.

"The Fundamental Imperative" is a nice formal name for this technique, but I tend to think of it as "if it was good enough for me...", because so often it comes down to old fogies like me projecting our experience onto the future. Both parties in such discussions would do well not to fall victim to their own storytelling.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

May 12, 2008 12:24 PM

Narrative Fallacy on My Mind

In his recent bestseller The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb uses the term narrative fallacy to describe man's penchant for creating a story after the fact, perhaps subconsciously, in order to explain why something happened -- to impute a cause for an event we did not expect. This fallacy derives from our habit of imposing patterns on data. Many view this as a weakness, but I think it is a strength as well. It is good when we use it to communicate ideas and to push us into backing up our stories with empirical investigation. It is bad when we let our stories become unexamined truth and when we use the stories to take actions that are not warranted or well-founded.

Of late, I've been thinking of the narrative fallacy in its broadest sense, telling ourselves stories that justify what we see or want to see. My entry on a response to the Onward! submission by my ChiliPLoP group was one trigger. Those of us who believe strongly that we could and perhaps should be doing something different in computer science education construct stories about what is wrong and what could be better; we're like anyone else. That one OOPSLA reviewer shed a critical light on our story, questioning its foundation. That is good! It forces us to re-examine our story, to consider to what extent it is narrative fallacy and to what extent it matches reality. In the best case, we now know more about how to tell the story better and what evidence might be useful in persuading others. In the worst, we may learn that our story is a crock. But that's a pretty good worst case, because it gets us back on the path to truth, if indeed we have fallen off.

A second trigger was finding a reference in Mark Guzdial's blog to a short piece on universal programming literacy at Ken Perlin's blog. "Universal programming literacy" is Perlin's term for something I've discussed here occasionally over the last year, the idea that all people might want or need to write computer programs. Perlin agrees but uses this article to consider whether it's a good idea to pursue the possibility that all children learn to program. It's wise to consider the soundness of your own ideas every once in a while. While Perlin may not be able to construct as challenging a counterargument as our OOPSLA reviewer did, he at least is able to begin exploring the truth of his axioms and the soundness of his own arguments. And the beauty of blogging is that readers can comment, which opens the door to other thinkers who might not be entirely sympathetic to the arguments. (I know...)

It is essential to expose our ideas to the light of scrutiny. It is perhaps even more important to expose the stories we construct subconsciously to explain the world around us, because they are most prone to being self-serving or simply convenient screens to protect our psyches. Once we have exposed the story, we must adopt a stance of skepticism and really listen to what we hear. This is the mindset of the scientist, but it can be hard to take on when our cherished beliefs are on the line.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns

April 24, 2008 6:56 AM

On the Small Doses Pattern

The Small Doses pattern I wrote up in my previous entry was triggered almost exclusively by the story I heard from Carl Page. The trigger lives on in the text that runs from "Often times, the value of Small Doses..." to the end, and in the paragraph beginning "There is value in distributing...". The story was light and humorous, just the sort of story that will stick with a person for twenty or more years.

As I finally wrote the pattern, it grew. That happens all the time when I write. It grew both in size and in seriousness. At first I resisted getting too serious, but increasingly I realized that the more serious kernel of truth needed telling. So I gave it a shot.

The result of this change in tone and scope means that the pattern you read is not yet ready for prime time. Rather than wait until it was ready, though, I decided to let the pattern be a self-illustration. I have put it out now, in its rough form. It is rough both in completeness and in quality. Perhaps my readers will help me improve it. Perhaps I will have time and inspiration soon to tackle the next version.

In my fantasies, I have time to write more patterns in a Graduate Student pattern language (code name: Chrysalis), even a complete language, and cross-reference it with other pattern languages such as XP. Fantasies are what they are.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

April 16, 2008 4:07 PM

Right On Time

Last night, I attended a Billy Joel concert. I last saw him perform live a decade or so ago. Billy was a decade older, and I was a decade older. He looked it, and I'm sure I do, too.

But when he started to play the piano, it could have been 1998 in the arena. Or 1988. Or 1978. The music flowing from his hands and his dancing feet filled me. Throughout the night I was 19 again, then 14, 10, and 25. I was lying on my parents' living room floor; sitting in the hand-me-down recliner that filled my college dorm room; dancing in Market Square Arena with an old girlfriend. I was rebellious teen, wistful adult, and mesmerized child.

There are moments when time seems more illusion than reality. Last night I felt like Billy Pilgrim, living two-plus hours unstuck in time.

Oh, and the music. There are not many artists who can, in the course of an evening, give you so many different kinds of music. From the pounding rock of "You May Be Right" to the gentle, plaintive "She's Always A Woman", and everything between. The Latin rhythms of "Don't Ask Me Why" extended with an intro of Beethoven's "Ode to Joy", and a "Root Beer Rag" worthy of Joplin.

Last night, my daughters, aged 15 and 11, attended the concert with me. Music lives on, and time folds back on itself yet again.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

March 13, 2008 8:06 PM

SIGCSE Day 1 -- This and That

[A transcript of the SIGCSE 2008 conference: Table of Contents]

This sort of entry usually comes after I write up the various conference sessions and have leftovers that didn't quite fit in an article. That may still happen, but I already have some sense of what will go where and have these items as miscellaneous observations.

First of all, I tried an experiment today. I did not blog in real-time. I used -- gasp! -- the antiquated technology of pen and paper to take notes during the sessions. On one or two occasions, I whipped open the laptop to do a quick Google search for a PhD dissertation or a book, but I steadfastly held back from the urge to type. I took notes on paper, but I couldn't fall into "writing" -- crafting sentences, then forming paragraphs, editing, ... All I could do was jot, and because I write slowly I had to be pickier about what I recorded. One result is that I paid more attention to the speakers, and less to a train of thought in my head. Another is that I'll have to write up the blog posts off-line, and that will take time!

As I looked through the conference program last night, I found myself putting on my department head hat, looking for sessions that would serve my department in the roles I now find myself in more often: CS1 for scientists, educational policy in CS, and the like. But when I got to the site and found myself having to choose between Door A and Door B... I found myself drifting into the room where Stuart Reges was talking about a cool question that seems to pick out good CS students, and into the Nifty Assignments session. Whatever my job title may be, I am a programmer and CS teacher. (More on both of those sessions in coming entries...)

Now, for a couple of non-CS, non-teaching observations.

  • I am amazed at how many healthy adults will walk out of their way, past a perfectly good set of stairs, to ride up an escalator. Sigh.
  • Staying at the discount motel over three miles away and without a car, I am relying on public buses. I have quickly learned that bus schedules are suggestions, not contracts. Deal with it. And in Portland, that often means: deal with it in the rain.
  • Schoolchildren here take standard mass transit buses to school. I never knew such a place existed.

There is so much for me to learn.


Posted by Eugene Wallingford | Permalink | Categories: General, Personal

March 12, 2008 4:32 PM

On the Roads Back in Portland

SIGCSE 2008 logo

With the exception of my annual visit to Carefree for ChiliPLoP, I don't often get a chance to return to a city for another conference. This year brings a pleasant return to Portland for SIGCSE 2008. OOPSLA'06 was in Portland, and I wrote up a little bit about running in Portland as part of my first visit to town. Because I was on the conference planning committee that year, I made three trips to the city, stayed in the same hotel three times, and ran several of the same routes three times. The convention center is right in town, which makes it hard to get to any nice parks to run, but Portland has a 3-mile loop alongside the Willamette River that provides a decent run.

This time, I am on my own dime and trying to save a little money by staying at a budget motel about 3.5 miles from the convention center. That meant figuring out bus routes and bus stops for the ride between the two -- no small feat for a guy who has never lived in a place where public transportation is common! It also meant planning some new runs, including a route back to the waterfront.

I arrived in town early enough yesterday to figure out the buses (I think) and still have time for an exploratory run. I ran toward the river, and then toward the convention center, until I knew the lay of the land well enough. The result was 4.5 miles of urban running in neighborhoods I'd never seen. This morning, I used what I learned to get to the river, where I ran my first lap through the Governor Tom McCall Waterfront Park and the Eastbank Esplanade since October 2006. I ended up with about 8 miles under my belt, and a strong desire to return Saturday evening for three laps and a 14-miler -- what would be my longest run since the Marine Corps Marathon. Let's see how I feel in a couple of days...

The rest of this week I am at SIGCSE, and I'm looking forward to seeing old friends and colleagues and to talking CS for a few days. Then on Sunday, four of us fly to Phoenix for ChiliPLoP and some intense work. This is a long time to be away from home and to miss my family, but the ideas should keep me busy.


Posted by Eugene Wallingford | Permalink | Categories: General, Running

February 28, 2008 7:10 PM

The Complement of Schadenfreude

Does it have a name?

Of course, Schadenfreude itself doesn't really have a name in English. It is a German word that means roughly delight in another person's misfortune. (However, I see that Wikipedia offers one, the 300+-year-old, apparently abandoned "epicaricacy".)

Last semester, a colleague described what struck me as the complement of Schadenfreude. He reported that one of our close friends, a retired professor here, expressed a strong unhappiness or distaste for faculty who succeed in publishing academic papers. This matters to my colleague because he is one of those publishing folks. His friend came to the university in a different era, when we were a teacher's college without any pretension to being a comprehensive university. The new faculty who publish and talk about their research, she said, are "just showing off". Their success causes her pain, even when they don't brag about it.

This is not the opposite of Schadenfreude. That is happiness in another's good fortune, which Wikipedia tells us matches the Buddhist concept of mudita. What our friend feels inverts both the emotion and the trigger.

I don't think that her condition corresponds to envy. When someone is envious, they want what someone else has. Our friend doesn't want what the others have; she is saddened, almost angered, that others have it. No one should.

The closest concept I can think of is "sour grapes", a metaphor from one of Aesop's beloved fables. But in this story, the fox does want the grapes, and professes to despise them only when he can't reach them. I believe that our friend really doesn't want the success of research; she earnestly believes that our mission is to teach, not publish, and that energy spent doing research is energy misspent. And that makes her feel bad.

When my colleague told me his story, I joked that the name for this condition should be freudenschade. I proposed this even though I know a little German and know how nonsensical it is. But it seemed fun. Sadly, I wasn't the first person to coin the word... Google tells me that at least one other person has. You may be tempted to say that I feel freudenschade that someone else coined the term "freudenschade" first, but I don't. What I feel is envy!

The particular story that led to my discussion is almost beside the point. I'm on a mission that has moved beyond it. I am not aware of a German word for the complement of Schadenfreude. Nor am I aware of an English word for it. Is there a word for it anywhere, in English, German, or some other language?

I'm curious... Perhaps the Lazyweb can help me.


Posted by Eugene Wallingford | Permalink | Categories: General

February 24, 2008 12:48 PM

Getting Lost

While catching up on some work at the office yesterday -- a rare Saturday indeed -- I listened to Peter Turchi's OOPSLA 2007 keynote address, available from the conference podcast page. Turchi is a writer with whom conference chair Richard Gabriel studied while pursuing his MFA at Warren Wilson College. I would not put this talk in the same class as Robert Hass's OOPSLA 2005 keynote, but perhaps that has more to do with my listening to an audio recording of it and not being there in the moment. Still, I found it worth listening to, as Turchi encouraged us to "get lost" when we want to create. We usually think of getting lost as something that happens to us when we are trying to get somewhere else. That makes getting lost something we wish wouldn't happen at all. But when we get lost in a new land inside our minds, we discover something new that we could not have seen before, at least not in the same way.

As I listened, I heard three ideas that captured much of the essence of Turchi's keynote. First was that we should strive to avoid preconception. This can be tough to do, because ultimately it means that we must work without knowing what is good or bad! The notions of good and bad are themselves preconceptions. They are valuable to scientists and engineers as they polish up a solution, but they often are impediments to discovering or creating a solution in the first place.

Second was the warning that a failure to get lost is a failure of imagination. When we work deeply in an area for a while, we sometimes feel as if we can't see anything new and creative because we know and understand the landscape so well. We have become "experts", which isn't always as dandy a status as it may seem. It limits what we see. In such times, we need to step off the easy path and exercise our imaginations in a new way. What must I do in order to see something new?

This leads to the third theme I pulled from Turchi's talk: getting lost takes work and preparation. When we get stuck, we have to work to imagine our way out of the rut. For the creative person, though, it's about more than getting out of a rut. The creative person needs to get lost in a new place all the time, in order to see something new. For many of us, getting lost may seem like something that just happens, but the person who wants to be lost has to prepare for it.

Turchi mentioned Robert Louis Stevenson as someone with a particular appreciation for "the happy accident that planning can produce". But artists are not the only folks who benefit from these happy accidents or who should work to produce the conditions in which they can occur. Scientific research operates on a similar plane. I am reminded again of Robert Root-Bernstein's ideas for actively engaging the unexpected. Writers can't leave getting lost to chance, and neither can scientists.

Turchi comes from the world of writing, not the world of science. Do his ideas apply to the computer scientist's form of writing, programming? I think so. A couple of years ago, I described a structured form of getting lost called air-drop programming, which adventurous programmers use to learn a legacy code base. One can use the same idea to learn a new framework or API, or even to learn a new programming language. Cut all ties to the familiar, jump right in, and see what you learn!

What about teaching? Yes. A colleague stopped by my office late last week to describe a great day of class in which he had covered almost none of what he had planned. A student had asked a question whose answer led to another, and then another, and pretty soon the class was deep in a discussion at least as valuable as the planned activities. My colleague couldn't have planned this unexpectedly good discussion, but his and the class's work put them in a position where it could happen. Of course, unexpected exploration takes time... When will they cover all the material of the course? I suspect the students will be just fine as they make adjustments downstream this semester.

What about running? Well, of course. The topic of air-drop programming came up during a conversation about a general tourist pattern for learning a new town. Running in a new town is a great way to learn the lay of the land. Sometimes I have to work not to remember landmarks along the way, so that I can see new things on my way back to the hotel. As I wrote after a glorious morning run at ChiliPLoP three years ago, sometimes you run to get from Point A to Point B; sometimes, you should just run. That applies to your hometown, too. I once read about an elite women's runner who recommended being dropped off far from your usual running routes and working your way back home through unfamiliar streets and terrain. I've done something like this myself, though not often enough, and it is a great way to revitalize my running whenever the trails start to look like the same old same old.

It seems that getting lost is a universal pattern, which made it a perfect topic for an OOPSLA keynote talk.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns, Running, Software Development, Teaching and Learning

February 20, 2008 2:55 PM

You Know You're Doing Important Work...

... when Charlie Eppes invokes your research area on Numb3rs. In the episode I saw last Friday, the team used a recommender system, among other snazzy techie glitz, to track down a Robin Hood who was robbing from the dishonestly rich and giving to the poor through a collection of charities. A colleague of mine does work in recommender systems and collaborative filtering, so I thought of him immediately. His kind of work has entered the vernacular now.

I don't recall the Numb3rs crew ever referring to knowledge-based systems or task-specific architectures, which was my area in the old days. Nor do I remember any references to design patterns or to programming language topics, which is where I have spent my time in the last decade or so. Should I feel left out?

But Charlie and Amita did use the idea of steganography in an episode two years ago, to find a pornographic image hidden inside an ordinary image. I have given talks on steganography on campus occasionally in the last couple of years. The first time was at a conference on camouflage, and most recently I spoke to a graphic design class, earlier this month. (My next engagement is at UNI's Saturday Science Showcase, a public outreach lecture series my college runs in the spring.) So I feel like at least some of my intellectual work has been validated.

Coincidentally, I usually bill my talks on this topic as "Numb3rs Meets The Da Vinci Code: Information Masquerading as Art", and one of the demonstrations I do is to hide an image of the Numb3rs guys in a digitized version of the Mona Lisa. The talk is a lot of fun for me, but I wonder if college kids these days pay much attention to network television, let alone da Vinci's art.
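
For the curious: demos like this usually rest on least-significant-bit substitution. The low-order bits of an image's pixels are visual noise, so we can overwrite them with the high-order bits of another image without the eye noticing. Here is a minimal sketch in Python over flat lists of grayscale values 0-255; the function names and the two-bit depth are my choices for illustration, not necessarily what my demo code does.

    def hide(cover, secret, bits=2):
        # Clear the lowest `bits` bits of each cover pixel and replace
        # them with the highest `bits` bits of the secret pixel.
        mask = (1 << bits) - 1
        return [(c & ~mask) | (s >> (8 - bits))
                for c, s in zip(cover, secret)]

    def reveal(stego, bits=2):
        # Shift the hidden bits back to the top, recovering a coarse
        # approximation of the secret image.
        mask = (1 << bits) - 1
        return [(p & mask) << (8 - bits) for p in stego]

With two bits, each cover pixel changes by at most three intensity levels -- invisible in the Mona Lisa -- while a four-level version of the hidden picture rides along in plain sight.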

Lest you think that only we nth-tier researchers care to have our areas trumpeted in the pop world, even the great ones can draw such pleasure. Last spring, Grady Booch gave a keynote address at SIGCSE. As a part of his opening, he played for us a clip from a TV show that had brightened his day, because it mentioned, among other snazzy techie glitz, the Unified Modeling Language he had helped to create. Oh, and that video clip came from... Numb3rs!


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

January 24, 2008 4:18 PM

The Stars Trust Me

My horoscope says so:

Thursday, January 24

Scorpio (October 24-November 22) -- You are smart enough to realize meeting force with force will only result in non-productive developments. To your credit, you will turn volatile matters around with wisdom, consideration, and gentleness.

Now, I may not really be smart enough, or wise enough, or even gentle enough. But on days like today it is good to hear such advice. Managing a team, a faculty, or a class involves a lot of relationships and a lot of personalities. Using wisdom, consideration, and gentleness is usually a more effective way to deal with unexpected conflicts than responding in kind or with brute force.

Some days, my horoscope fits my situation perfectly. Today is one. But I promise not to turn to the zodiac for future blogging inspiration, unless it delivers a similarly general piece of advice.


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading

January 23, 2008 11:15 AM

MetaBlog: Good News, No News

One piece of good news from the past week: My permalinks should work now! Our college web server is once again behaving as it should, which means that http://www.cs.uni.edu/~wallingf/blog/ will not redirect to a http://cns2.uni.edu/ URL. This means that my permalinks, which are in the www.cs.uni.edu domain, will once again work. This makes me happy, and I hope that it makes it easier for folks to link directly to articles that they discuss in their own blogs. There may still be a problem with the category pages, but the sysadmins should have that fixed soon.

Now for that Bloglines issue... I haven't had much luck getting help from the Bloglines team, but I'll keep trying.


Posted by Eugene Wallingford | Permalink | Categories: General

December 18, 2007 4:40 PM

Notes on a SECANT Workshop: Table of Contents

[Nothing new here for regular readers... This post implements an idea that I saw on Brian Marick's blog and liked: a table of contents for a set of conference posts coupled with cross-links as notes at the top of each entry. I have done a table of contents before, for OOPSLA 2005 -- though, sadly, not for 2004 or 2006 -- but I like the addition of the link back from entries to the index. This may help readers follow my entries, especially when they are out of order, and it may help me when I occasionally want to link back to the workshop as a unit.]

This set of entries records my experiences at the SECANT 2007 workshop November 17-18, hosted by the Purdue Department of Computer Science.

Primary entries:

  • Workshop Intro: Teaching Science and Computing
    -- on building a community
  • Workshop 1: Creating a Dialogue Between Science and CS
    -- How can we help scientists and CS folks work together?
  • Workshop 2: Exception Gnomes, Garbage Collection Fairies, and Problems
    -- on a hodgepodge of sessions around the intersection of science ed and computing
  • Workshop 3: The Next Generation
    -- what scientists are doing out in the world and how computer scientists are helping them
  • Workshop 4: Programming Scientists
    -- Should scientists learn to program? And, if so, how?
  • Workshop 5: Wrap-Up
    -- on how to cause change and disseminate results

Ancillary entries:

The next few items on the newsfeed will be these entries, updated with the "transcript" cross-link. [Done]


Posted by Eugene Wallingford | Permalink | Categories: General

December 18, 2007 2:12 PM

Post-Semester This and That

Now that things have wound down for the semester, I hope to do some mental clean-up and some CS. As much as I enjoyed the SECANT workshop last month (blogged in detail ending here), travel that late in a semester compresses the rest of the term into an uncomfortably small box. That said, going to conferences and workshops is essential:

Wherever you work, most of the smart people are somewhere else.

I saw that quote attributed to Bill Joy in an article by Tim Bray. Joy was speaking of decentralization and the web, but it applies to the pre-web network that makes up any scholarly discipline. Even with the web, it's good to get out of the tower every so often and participate in an old-fashioned conversation.

One part of the semester clean-up will be assessing the state of my department head backlog. Most days, I add more things to my to-do list than I am able to cross off. Some of them are must-dos and details; others are ideas and dreams. By the end of the semester, I have to be honest that many of the latter won't be done soon, if ever. I don't do a hard delete of most of these items; I just push them onto a "possibilities" list that can grow as large as it likes without affecting my mental hygiene.

I recently told my dean that, after two and a half years as head, I had almost come to peace with what I have taken to calling "time management by burying". He just smiled and said that his favorite part of the semester is doing just that: looking at his backlog and saying to himself, "Well, guess I'm not going to do that", as he deletes an item from the list for good. Maybe I should be more ruthless myself. Or maybe that works better if you are a dean...

I've been following the story of the University of Michigan hiring West Virginia University's head football coach. Whatever one thinks of the situation -- and I think it brings shame to both Michigan and its new coach -- there was a very pragmatic piece of advice to be learned about managing people from one Pittsburgh Post-Gazette sports article about it. Says Bob Reynolds, former chief operating officer of Fidelity Investments:

I've been the COO of a 45,000-person company. When somebody's producing, you ask, 'What can I do for you to make your life better?' Not 'What can I do to make your life more miserable?'

That's a good thought for an academic department head to start each day with. And maybe a CS instructor, too.


Posted by Eugene Wallingford | Permalink | Categories: General

December 17, 2007 5:02 PM

An Unexpected Opportunity

I had to drive to Des Moines for a luncheon today. Four hours driving, round-trip, for a 1.25-hour lunch -- the things I do for my employer! The purpose of the trip was university outreach: I was asked to represent the university at a lunch meeting of the Greater Des Moines Committee, in place of our president and dean.

The luncheon was valuable for making connections to the movers and shakers in the capital city, and for talking to business leaders about computer science enrollments, math and science in the K-12 schools, and IT policy for the state. The lunch speaker, Ted Crosbie, the chief technology officer of Iowa, gave a good talk on economic development and the future of the state's technology efforts.

But was it all worth four hours on the road? Probably so, but I will give a firm Yes, for an unexpected reason.

A couple of minutes after I took my seat for lunch, former Iowa Governor Terry Branstad (1983-1999) sat down at our table. He struck up a nice conversation. Then, a couple of minutes later, former Iowa Governor Robert Ray (1969-1983) joined us. Very cool. I was impressed at how involved and informed these retired public officials remain in the affairs of the state, especially in economic development. The latter is, of course, something of great importance to my department and its students, as well as the university as a whole.

Then on the drive home, I saw a bald eagle soar majestically over a small riverbed. A thing of beauty.


Posted by Eugene Wallingford | Permalink | Categories: General

November 22, 2007 6:16 PM

For the Fruits of This Creation

On this and every day:

For the harvests of the Spirit, Thanks be to God;
For the good we all inherit, Thanks be to God;
For the wonders that astound us,
For the truths that will confound us,
Most of all that love has found us, Thanks be to God.

(Lyric by Fred Pratt Green, copyright 1970. Sung to a traditional Welsh melody.)

Among so many things, I'm thankful for the chance to write here and to have people read what I write.

Happy Thanksgiving.


Posted by Eugene Wallingford | Permalink | Categories: General

November 20, 2007 4:30 PM

Workshop 5: Wrap-Up

[A transcript of the SECANT 2007 workshop: Table of Contents]

The last bit of the SECANT workshop focused on how to build a community at this intersection of CS and science. The group had a wide-ranging discussion which I won't try to report here. Most of it was pretty routine and would not be of interest to someone who didn't attend. But there were a couple of points that I'll comment on.

On how to cause change.     At one point the discussion turned philosophical, as folks considered more generally how one can create change in a larger community. Should the group try to convince other faculty of the value of these ideas first, and then involve them in the change? Should the group create great materials and courses first and then use them to convince other faculty? In my experience, these don't work all that well. You can attract a few people who are already predisposed to the idea, or who are open to change because they do not have their own ideas to drive into the future. But folks who are predisposed against the idea will remain so, and resist, and folks who are indifferent will be hard to move simply because of inertia. If it ain't broke, don't fix it.

Others expressed similar misgivings. Ruth Chabay suggested that perhaps the best way to move the science community toward computational science is by producing students who can use computation effectively. Those students will use computation to solve problems. They will learn more deeply. This will catch the eye of other instructors. As a result, these folks will see an opportunity to change how they teach, say, physics. We wouldn't have to push them to change; they would pull change in. Her analogy was to the use of desktop calculators in math, chemistry, and physics classes in the 1970s and 1980s. Such a guerilla approach to change might work, if one could create a computational science course good enough to change students and attractive enough to draw students to take it. This is no small order, but it is probably easier than trying to move a stodgy academic establishment with brute force.

On technology for dissemination.     Man, does the world change fast. Folks talked about Facebook and Twitter as the primary avenues for reaching students. Blogs and wikis were almost an afterthought. Among our students, e-mail is nearly dead, only 20 years or so after it began to enter the undergraduate mainstream. I get older faster than the calendar says because the world is changing faster than the days are passing.

Miscellaneous.     Purdue has a beautiful new computer science building, the sort of building that only a large, research school can have. What we might do with a building at an appropriate scale for our department! An extra treat for me was a chance to visit a student lounge in the building that is named for the parents of a net acquaintance of mine, after he and his extended family made a donation to the building fund. Very cool.

I might trade my department's physical space for Purdue CS's building, but I would not trade my campus for theirs. It's mostly buildings and pavement, with huge amounts of auto traffic in addition to the foot traffic. Our campus is smaller, greener, and prettier. Being large has its ups and its downs.

Thanks to a recommendation from the workshop's local organizer, I was able to enjoy some time running on campus. Within a few minutes I found my way to some trails that head out into more serene places. A nice way to close the two days.

All in all, the workshop was well worth my time. I'll share some of the ideas among my science colleagues at UNI and see what else we can do in our own department.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

November 15, 2007 9:13 PM

Making Time to Do What You Love

Earlier this week, I read The Geomblog's A day in the life..., in which Suresh listed what he did on Monday. Research did not appear on the list.

I felt immediate and intense empathy. On Monday, I had spent all morning on our college's Preview Day, on which high school students who are considering studying CS at my university visit campus with their parents. It is a major recruiting effort in our college. I spent the early morning preparing my discussion with them and the rest of the morning visiting with them. The afternoon was full of administrative details, computer labs and registration and prospective grad students. On Tuesday, when I read the blog entry, I had taught compilers -- an oasis of CS in the midst of my weeks -- and done more administration: graduate assistantships, advising, equipment purchases, and a backlog of correspondence. Precious little CS in two days, and no research or other scholarly activity.

Alas, that is all too typical. Attending an NSF workshop this week is a wonderful chance to think about computer science, its application in the sciences, and how to teach it. Not research, but computer science. I only wish I had a week or five after it ends to carry to fruition some of the ideas swirling around my mind! I will have an opportunity to work more on some of these ideas when I return to the office, as a part of my department's curricular efforts, but that work will be spread over many weeks and months.

That is not the sort of intense, concentrated work that I and many other academics prefer to do. Academics are bred for their ability to focus on a single problem and work intensely on it for long periods of time. Then come academic positions that can spread us quite thin. An administrative position takes that to another level.

Today at the workshop, I felt a desire to bow down before an academic who understands all this and is ready to take matters into his own hands. Some folks were discussing the shortcomings of the current Mac OS X version of VPython, the installation of which requires X11, Xcode, and Fink. Bruce Sherwood is one of the folks in charge of VPython. He apologized for the state of the Mac port and explained that the team needs a Mac guru to build a native port. They are looking for one, but such folks are scarce. Never fear, though... If they can't find someone soon, Sherwood said,

I'm retiring so that I can work on this.

Now that is commitment to a project. We should all have as much moxie! What do you say, Suresh?


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

November 12, 2007 7:27 AM

Notes to My Bloglines Readers

My apologies to the 130-odd of you who read this blog via Bloglines. A couple of you have alerted me to a technical issue with links disappearing from my posts when you read Knowing and Doing through the Bloglines interface. The problem is intermittent, which makes it frustrating for you all and harder for me to track down.

I've validated my RSS feed at http://feedvalidator.org and looked for some clues in the HTML source. No luck. At this point, I have asked the folks at Bloglines to see if they can find something in my feed that interacts badly with their software. I'll keep you posted.


Posted by Eugene Wallingford | Permalink | Categories: General

November 07, 2007 7:45 AM

Magic Books

Last Saturday morning, I opened a book at random, just to fill some time, and ended up writing a blog entry on electronic communities. It was as if the book were magic... I opened to a page, read a couple of sentences, and was launched on what seemed like the perfect path for that morning. That experience echoed one of the things Vonnegut himself has often said: there is something special about books.

This is one reason that I don't worry about getting dumber by reading books, because for me books have always served up magic.

I remember reading just that back in high school, in Richard Bach's Illusions:

I noticed something strange about the book. "The pages don't have numbers on them, Don."

"No," he said. "You just open it and whatever you need most is there."

"A magic book!"

These days, I often have just this experience on the web, as I read blogs and follow links off to unexpected places. An academic book or conference proceedings can do the same. Bach would have said, "But of course."

"No you can do it with any book. You can do it with an old newspaper, if you read carefully enough. Haven't you done that, hold some problem in your mind, then open any book handy and see what it tells you?"

I do that sometimes, but I'm just as likely to catch a little magic when my mind is fallow, and I grab a paper off one of my many stacks for a lunch jaunt. Holding a particular problem in my mind sometimes puts too much pressure on whatever might happen.

Indeed, this comes back to the theme of the article I wrote on Saturday morning. On one hand there are traditional media and traditional communities, and on the other are newfangled electronic media and electronic communities. The traditional experiences often seem to hold some special magic for us. But the magic is not in any particular technology; it is in the intersection between ideas out there and our inner lives.

When I feel something special in the asynchronicity of a book's magic, and think that the predetermination of an RSS feed makes it less spontaneous, that just reflects my experience, maybe my lack of imagination. If I look back honestly, I know that I have stumbled across old papers and old blog posts and old web pages that served up magic to me in much the same way that books have done. And, like electronic communities, the digital world of information creates new possibilities for us. A book can be magic for me only if I have a copy handy. On the web, every article is just a click away. That's a powerful new sort of magic.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

November 06, 2007 6:53 AM

Lack of Confidence and Teamwork

Over on one of the mailing lists I browse -- maverick software development -- there has been a lot of talk about how a lack of trust is one of the primary dysfunctions of teams. The discussion started as a discussion of Patrick Lencioni's The Five Dysfunctions of a Team but has taken on its own life based on the experiences of the members of the list.

One writer there made the bold claim that all team dysfunctions are rooted in a lack of trust. Others, such as fear of conflict and lack of commitment to shared goals, grow solely from a lack of trust among team members and leaders. This is, in fact, what Lencioni claims in his book, that a lack of trust creates an environment in which people fear conflict, which ensures a lack of commitment and ultimately an avoidance of accountability, ending in an inattention to the results produced by the team.

The writer who made this claim asked list members for specific counterexamples. I don't know if I can offer one, but I will say that it's amazing what a lack of confidence can do to an individual's outlook and performance, and ultimately to his or her ability to contribute positively as a team member.

When a person lacks confidence in his ability, he will be inclined to interpret every contingent signal in a different way than it was intended. This interpretation is often extreme, and very often wrong. This creates an impediment to performance and to interaction.

I see it in students all the time. A lack of confidence makes it hard to learn! If I don't trust what I know or can do, then every new idea looks scary. How can I understand this if I don't understand the more fundamental material? I don't want to ask this question, because the teacher, or my classmates, will see how little I know. There's no sense in trying this; I'll just fail.

This is, I think, a problem in CS classes between female and male students. Male students seem more likely than females to bluff their way through a course, pretending they understand something more deeply than they do. This gives everyone a distorted image of the overall understanding of the class, and leaves many female students thinking that they are alone in not "getting it". One of the best benefits of teaching a CS class via discussion rather than lecture is that the bluffers are eventually outed by the facts. I still recall one of our female students telling me in the middle of one of my courses taught in this way that she finally saw that no one else had any better grasp on the material than she did and that, all things considered, she was doing pretty well. Yes!

I see the effects of lack of confidence in my faculty colleagues, too. This usually shows up in a narrow context, where the person doesn't know a particular area of computing very well, or lacks experience in a certain forum, and as a result shies away from interacting in venues that rely on this topic. I also see this spill over into other interactions, where a lack of confidence in one area sets the tone for fear of conflict (which might expose an ignorance) and disengagement from the team.

I see it in myself, as instructor on some occasions and as a faculty member on others. Whenever possible I use a lack of confidence in my understanding of a topic as a spur to learn more and get better. But in the rush of days this ideal outlook often falls victim to rapidly receding minutes.

A personal lack of confidence has been most salient to me in my role as a department head. This was a position for which I had received no direct training, and grousing about the performance of other heads offers only the flimsiest foundation for doing a better job. I've been sensitized to nearly every interaction I have. Was that a slight, or standard operating procedure? Should I worry that my colleague is displeased with something I've done, or was that just healthy feedback? Am I doing a good enough job, or are the faculty simply tolerating me? As in so many other contexts, these thoughts snowball until they are large enough to blot everything else out of one's sight.

The claimant on the mailing list might say that trust is the real issue here. If the student trusts his teacher, or the faculty member trusts his teammates, or the department head trusts his faculty, either they would not lack confidence or would not let it affect their reactions. But that is precisely the point: they are reactions, from deep within. I think we feel our lack of confidence prior to processing the emotion and acting on trust. Lack of confidence is probably not more fundamental than lack of trust, but I think they are often orthogonal to one another.

How does one get over a lack of confidence? The simplest way is to learn what we need to know, to improve our skills. In the meantime, a positive attitude -- perhaps enabled by a sense of trust in our teammates and situation -- can do wonders. Institutionally, we can have, or work to get, support from above. A faculty member who trusts that she has room to grow in the presence of her colleagues and head, or a new department head who trusts that he has room to grow in the presence of his dean, will be able to survive a lack of confidence while in the process of learning. I've seen new deans and heads cultivate that sort of trust by acting cautiously at the outset of their tenure, so as not to lose trust before the relationship is on firm ground.

In the context of software development, the set of tasks for which someone is responsible is often more crisply delineated than the set of tasks for a student or manager. In one way, that is good news. If your lack of confidence stems from not knowing how Spring or continuation passing style works, you can learn it! But it's not too simple, as there are time constraints and team relationships to navigate along the way.

Ultimately, a mindset of detachment is perhaps the best tool a person who lacks confidence can have. Unfortunately, I do not think that detachment and lack of confidence are as common a package as we might hope. Fortunately, one can cultivate a sense of detachment, which makes recurring doubts about one's capabilities easier to manage over time.

If only it were as easy to do these things as it is to say them!


Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Software Development

November 03, 2007 4:47 PM

Electronic Communities and Dancing Animals

I volunteered to help with a local 5K/10K race this morning. When I arrived at my spot along the course, I had half an hour to fill before the race began, and 45 minutes or so before the first runners would reach me. At first I considered taking a short nap but feared I'd sleep too long. Not much help to the runners in that! So I picked up Kurt Vonnegut's A Man Without a Country, which was in my front seat on its way back to the library. (I wrote a recent article motivated by something else I read in this last book of Vonnegut's.)

I opened the book to Page 61, and my eyes fell immediately to:

Electronic communities build nothing. You end up with nothing. We are dancing animals.

This passage follows a wonderful story about how Kurt mails his manuscripts, daily coming into contact with international voices and a flamboyant postal employee on whom he has a crush. I've heard this sentiment before, in many different contexts and from many different people, but fundamentally I disagree with the claim. Let me tell you about two stories of this sort that stick in my mind, and my reactions at the time.

A decade or so ago, the famed philosopher and AI critic Hubert Dreyfus came to our campus to deliver a lecture as part of an endowed lecture series in the humanities. Had I been blogging at that time, I surely would have written a long review of this talk! Instead, all I have is a notebook on my bookshelf full of pages and pages of notes. (Perhaps one of these days...) Dreyfus claimed that the Internet was leading to a disintegration of society by creating barriers to people connecting in the real world. Electronic communication was supplanting face-to-face communication but giving us only an illusion of a real connection; in fact, we were isolating ourselves from one another.

In the question-and-answer session that followed, I offered a counterargument. Back in the mid-1980s I became quite involved in several Usenet newsgroups, both for research and entertainment. In the basketball and football newsgroups, I found intelligent, informed, well-rounded people with whom to discuss sports at a deeper level than I could with anyone in my local physical world. These groups became an important part of my day. But as the number of people with Internet access exploded, especially on college campuses, the signal-to-noise ratio in the newsgroups fell precipitously. Eventually, a core group of the old posters moved much of the discussion off-group to a private mailing list, and ultimately I was invited to join them.

This mailing list continues to this day, taking on and losing members as lives change and opportunities arise. We still discuss sports and politics, pop culture and world affairs. It is a community as real to me as most others, and I consider some of the folks there to be good friends whom I'm lucky to have come to know. Members of the basketball group get together in person annually for the first two rounds of the NCAA tournament, and wherever we travel for business or pleasure we are likely to be in the neighborhood of a friend we can join for a meal and a little face-to-face communication. Like any real community, there are folks in the group whom I like a lot and others with whom I've made little or no personal connection. On-line we have good moments and disagreements and occasional hurt feelings, like any other community of people.

The second story I remember most is from Vonnegut himself, when he, too, visited my campus back when. At one of the sessions I attended, someone asked him about the fate of books in the Digital Age. Vonnegut was certain that books would continue on in much their current form, because there was something special about the feel of a book in one's hands, the touch of paper on the skin, the smell of the print and binding. Even then I recall disagreeing with this -- not because I don't also feel that something special in the feel of a book in my hands or the touch of the paper on my skin. A book is an artifact of history, an invention of technology. Technology changes, and no matter how personally we experience a particular technology's outward appearance, it is more likely to be different in a few years than to be the same.

My Usenet newsgroup story seems to contradict Dreyfus's thesis, but he held that, because we took it upon ourselves to meet in person, my story actually supported it. To me that seemed a too convenient way for him to dismiss the key point: our sports list is essentially an electronic community, one whose primary existence is virtual. Were the Internet to disappear tomorrow, some of the personal connections we've made would live on, but the community would die.

And keep in mind that I am an old guy... Today's youth grow up in a very different world of technology than we did. One of the specific sessions I regret missing by missing OOPSLA was the keynote by Jim Purbrick and Mark Lentczner on Second Life, a new sort of virtual world that may well revolutionize the idea of electronic community not only for personal interaction but for professional, corporate, and economic interaction as well. As an example, OOPSLA itself had an island in Second Life as a way to promote interaction among attendees before and during the conference.

The trend in the world these days is toward more electronic interaction, not less, and new kinds that support wider channels of communication and richer texture in the interchange. There are risks in this trend, to be sure. Who among us hasn't heard the already classic joke about the guy who needs a first life before he can have a Second Life? But I think that this trend is just another step in the evolution of human community. We'll find ways to minimize the risks while maximizing the benefits. The next generation will be better prepared for this task than old fogies like me.

All that said, I am sympathetic to the sentiment that Vonnegut expressed in the passage quoted above, because I think underlying the sentiment is the core of a truth about being human. He expresses his take on that truth in the book, too, for as I turned the page of the book I read:

We are dancing animals. How beautiful it is to get up and go out and do something. We are here on Earth to fart around. Don't let anybody tell you any different.

I know this beauty, and I'm sure you do. We are physical beings. The ability and desire to make and share ideas distinguish us from the rest of the world, but still we are dancing animals. There seems in us an innate need to do, not just think, to move and see and touch and smell and hear. Perhaps this innate trait is why I love to run.

But I am also aware that some folks can't run, or for whatever reason cannot sense our physical world in the same way. Yet many who can't still try to go out and do. At my marathon last weekend, I saw men who had lost use of their legs -- or lost their legs altogether -- making their way over 26.2 tough miles in wheelchairs. The long uphill stretches at the beginning of the course made their success seem impossible, because every time they released their wheels to grab for the next pull forward they lost a little ground. Yet they persevered. These runners' desire to achieve in the face of challenge made my own difficulties seem small.

I suspect that these runners' desire to complete the marathon had as much to do with a sense of loss as with their innate nature as physical beings. And I think that this accounts for Vonnegut's and others' sentiment about the insufficiency of electronic communities: a sense of loss as they watch the world around them evolve quickly into something very different from the world in which they grew up.

Living in the physical world is clearly an important part of being human. But it seems to be neither necessary nor sufficient as a condition.

Like Vonnegut, I grew up in a world of books. To me, there is still something special about the feel of a book in my hands, the touch of paper on my skin, the smell of the print and binding of a new book the first time I open it. But these are not necessary parts of the world; they are artifacts of history. The sensual feel of a book will change, and humanity will survive, perhaps none the worse for it.

I can't say that face-to-face communities are merely an artifact of history, soon to pass, but I see no reason to believe that the electronic communities we build now -- we do build them, and they do seem to last, at least on the short time scale we have for judging them -- cannot augment our face-to-face communities in valuable ways. I think that they will allow us to create forms of community that were not available to us before, and thus enrich human experience, not diminish it. While we are indeed dancing animals, as Vonnegut describes us, we are also playing animals and creative animals and thinking animals. And, at our core, we are connection-making animals, between ideas and between people. Anything that helps us to make more, different, and better connections has a good chance of surviving in some form as we move into the future. Whether dinosaurs like Vonnegut or I can survive there, I don't know!


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

October 19, 2007 4:42 PM

Working Hard, Losing Ground

Caution: Sisyphus at Work

Some days I read a paper or two and feel like I've lost ground. I just have more to read, think, and do. Of course, this phenomenon is universal... As Design Observer tells us, reading makes us more ignorant:

According to the Mexican critic Gabriel Zaid, writing in "So Many Books: Reading and Publishing in an Age of Abundance", ... "If a person read a book a day, he would be neglecting to read 4,000 others... and his ignorance would grow 4,000 times faster than his knowledge."

Don't read a book today! Now there is a slogan modern man can get behind. It seems that a few college students have already signed on.

My hope for most days is just the opposite. Here is a nice graphic slogan for this hope, courtesy of Brian Marick:

to be less wrong than yesterday

But it's hard to feel that way some days. The universe of knowing and doing is large. The best antidote to Sisyphean despair is to set a few measurable goals that one can reach with a reasonable short-term effort. Each step can give a bit of satisfaction, and -- if you take enough such steps -- you can end up someplace new. A lot like writing code.


Posted by Eugene Wallingford | Permalink | Categories: General

October 06, 2007 8:16 PM

Today I Wrote a Program

Today I wrote a program, just for fun. I wrote a solution to the classic WordLadder game, which is a common nifty assignment used in the introductory Data Structures course. I had never assigned it in one of my courses and had never had any other reason to solve it. But my daughter came home yesterday with a math assignment that included a few of these problems, such as converting "heart" to "spade", and in the course of talking with her I ended up doing a few of the WordLadder problems on my own. I'm a hopeless puzzle junkie.

Some days, an almost irrational desire to write a program comes over me, and last night's fun made me think, "I wonder how I might do this in code?" So I used a few spare minutes throughout today to implement one of my ideas from last night -- a simple breadth-first search that finds all of the shortest solutions in a particular dictionary.
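
For readers who want to see the shape of the idea, here is a minimal sketch of that search in Python. It is not the program I wrote that day -- the names are stand-ins, and where my program collected all of the shortest ladders, this sketch settles for returning the first one it finds:

    from collections import deque

    def word_ladder(start, goal, words):
        # Breadth-first search over the implicit graph whose vertices are
        # dictionary words and whose edges connect words differing in
        # exactly one letter. BFS explores ladders in order of length,
        # so the first ladder to reach the goal is a shortest one.
        frontier = deque([[start]])
        seen = {start}
        while frontier:
            ladder = frontier.popleft()
            word = ladder[-1]
            if word == goal:
                return ladder
            for i in range(len(word)):
                for ch in 'abcdefghijklmnopqrstuvwxyz':
                    neighbor = word[:i] + ch + word[i+1:]
                    if neighbor in words and neighbor not in seen:
                        seen.add(neighbor)
                        frontier.append(ladder + [neighbor])
        return None    # no ladder connects the two words

Given a set of five-letter words, word_ladder('heart', 'spade', words) returns a chain of one-letter changes from "heart" to "spade", or None if the dictionary doesn't connect them.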

A few of those spare minutes came at the public library, while the same daughter was participating in a writers' workshop for youth. As I listened to their discussion of a couple of poems written by kids in the workshop in the background, I thought to myself, "I'm writing here, too." But then it occurred to me that the kids in the workshop wouldn't call what I was doing "writing". Nor would their workshop leader or most people that we call "writers". Nor would most computer scientists, not without the rest of the phrase: "writing a program".

Granted, I wasn't writing a poem. But I was exploring an idea that had come into my mind, one that drove me forward. I wasn't sure what sort of program I would end up with, and I arrived at the answer only after going down a couple of expected paths and finding them wanting. My stanzas, er, subprocedures, developed over time. One grew and shrank, changed name, and ultimately became much simpler and clearer than what I had in mind when I started.

I was telling a story as much as I was solving a problem. When I finished, I had a program that communicates to my daughter an idea I described only sketchily last night. The names of my variables and procedures tell the story, even without looking at too much of their detail. I was writing as a way to think, to find out what I really thought last night.

Today I wrote a program, and it was fun.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

October 03, 2007 5:24 PM

Walk the Wall, Seeger

Foley break Mayo

There is a great scene toward the end of one of my favorite movies, An Officer and a Gentleman. The self-centered and childlike protagonist, Zach Mayo, has been broken down by Drill Instructor Foley. He is now maturing under Foley's tough hand. The basic training cohort is running the obstacle course for its last time. Mayo is en route to a course record, and his classmates are urging him on. But as he passes one of his classmates on the course, he suddenly stops. Casey Seeger has been struggling with the wall for the whole movie, and it looks like she still isn't going to make it. But if she doesn't, she won't graduate. Mayo sets aside his record and stands with Seeger, cheering her and coaching her over the wall. Ultimately, she makes it over -- barely -- and the whole class gathers to cheer as Mayo and Seeger finish the run together. This is one of the triumphant scenes of the film.

I thought of this scene while running mile repeats on the track this morning. Three young women in the ROTC program were on the track, with two helping the third run sprints. The two ran alongside their friend, coaxing her and helping her continue when she clearly wanted to stop. If I recall correctly from my sister's time in ROTC, morning PT (physical training) is a big challenge for many student candidates and, as in An Officer and a Gentleman, they must meet certain fitness thresholds in order to proceed with the program -- even if they are in non-combat roles, such as nurses.

It was refreshing to see that sort of teamwork, and friendship, among students on the track.

It is great when this happens in one of our classes. But when it does, it is generally an informal process that grows among students who were already friends when they came to class. It is not a part of our school culture, especially in computer science.

Some places, it is part of the culture. A professor here recently related a story from his time teaching in Taiwan. In his courses there, the students in the class identified a leader, and then they worked together to make sure that everyone in the class succeeded. This was something that students expected of themselves, not something the faculty required.

I have seen this sort of collectivism imposed from above by CS professors, particularly in project courses that require teamwork. In my experience, it rarely works well when foisted on students. The better students resent having their grade tied to a weaker student's, or a lazier one's. (Hey, it's all about the grade, right?) The weaker students resent being made someone else's burden. Maybe this is a symptom of the Rugged Individualism that defines the West, but working collectively is generally just not part of our culture.

And I understand how the students feel. When I found myself in situations like this as a student, I played along, because I did what my instructors asked me to do. And I could be helpful. But I don't think it ever felt natural to me; it was an external requirement.

Recently I found myself discussing pair programming in CS1 with a former student who now teaches for us. He is considering pairing students in the lab portion of his non-majors course. Even after a decade, he remembers (fondly, I think) working with a different student each week in my CS1 lab. But the lab constituted only a quarter of the course grade, and the lab exercises did not require long-term commitment to helping the weakest members of the class succeed. Even still, I had students express dissatisfaction at "wasting their time".

This is one of the things that I like about the agile software methods: they promote a culture of unity and of teamwork. Pair programming is one practice that supports this culture, but so are collective ownership, continuous integration, and coding standards. Some students and programmers, including some of the best, balk at being forced into "team". Whatever the psychological, social, and political issues, and whatever my personal preferences as a programmer, there seems something attractive about a team working together to get better, both as a team and as individuals.

I wish the young women I saw this morning well. I hope they succeed, as a team and as individuals. They can make it over the wall.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

October 02, 2007 6:58 AM

The Right (Kind of) Stuff

As you seek the three great virtues of a programmer, you seek to cultivate...

... the kind of laziness that makes you want to minimize future effort by investing effort today, to maximize your productivity and performance over the long haul, not the kind that leads you to avoid essential work or makes you want to cut corners.

... the kind of impatience that encourages you to work harder, not the kind of impatience that steals your spirit when you hit a wall or makes you want to cut corners.

... the kind of hubris that makes you think that you can do it, to trust yourself, not the kind of hubris that makes you think you don't have to listen to the problem, your code, or other people -- or the kind that makes you want to cut corners.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

September 30, 2007 11:16 AM

Unexpected Fun Cleaning out My Closet

The last week or so I've been trying to steal a few minutes each day to clean up the closet in my home work area. One of the big jobs has been to get rid of several years of journals and proceedings that built up from 1998 to 2002, when it seems I had time only to skim my incoming periodicals.

I seem genetically unable to simply throw these into a recycling bin; instead, I sit on the floor and thumb through each, looking at least at the table of contents to see if there is anything I still want to read. Most of the day-to-day concerns in 2000 are of no particular interest now. But I do like to look at the letters to the editor in Communications of the ACM, IEEE Computer, and IEEE Spectrum, and some of the standing columns in SIGPLAN Notices, especially on Forth and on parsing. Out of every ten periodicals or so, I would guess I have saved a single paper or article for later reading.

One of the unexpected joys has been stumbling upon all of the IEEE Spectrum issues. It's one of the few general engineering journals I've ever received, and besides, it has the bimonthly Reflections column by Robert Lucky, which I rediscovered accidentally earlier this month. I had forgotten that in the off-months of Reflections, Spectrum runs a column called Technically Speaking, which I also enjoy quite a bit. According to its by-line, this column is "a commentary on technical culture and the use and misuse of technical language". I love words and learning about their origin and evolution, and this column used to feed my habit.

Most months, Technically Speaking includes a sidebar called "Worth repeating", which presents a quote of interest. Here are a couple that struck me as I've gone through my old stash.

From April 2000:

Engineering, like poetry, is an attempt to approach perfection. And engineers, like poets, are seldom completely satisfied with their creations.... However, while poets can go back to a particular poem hundreds of times between its first publication and its final version in their collected works, engineers can seldom make major revision in a completed structure. But an engineer can certainly learn from his mistakes.

This is from Henry Petroski, in To Engineer is Human. The process of discovery in which an engineer creates a new something is similar to the poet's process of discovery. Both lead to a first version by way of tinkering and revision. As Petroski notes, though, when engineers who build bridges and other singular structures publish their first version, it is their last version. But I think that smaller products which are mass produced often can be improved over time, in new versions. And software is different... Not only can we grow a product through a conscious process of refactoring, revision, and rewriting from scratch, but after we publish Version 1.0 we can continue to evolve the product behind its interface -- even while it is alive, servicing users. Software is a new sort of medium, whose malleability makes cleaving too closely to the engineering mindset misleading. (Of course, software developers should still learn from their mistakes!)

From June 2000:

You cannot have good science without having good science fans. Today science fans are people who are only interested in the results of science. They are not interested in a good play in science as a football fan is interested in a good play in football. We are not going to be able to have an excellent scientific effort unless the man in the street appreciates science.

This is reminiscent of an ongoing theme in this blog and in the larger computer science community. It continues to be a theme in all of science as well. How do we reform -- re-form -- our education system so that most kids at least appreciate what science is and means? Setting our goal as high as creating fans as into science as they are into football or NASCAR would be ambitious indeed!

Oh, and don't think that this ongoing theme in the computer science and general scientific world is a new one. The quote above is from Edward Teller, taken off the dust jacket of a book named Rays: Visible and Invisible, published in 1958. The more things change, the more they stay the same. Perhaps it should comfort us that the problem we face is at least half a century old. We shouldn't feel guilty that we cannot solve it overnight.

And finally, from August 2000:

To the outsider, science often seems to be a frightful jumble of facts with very little that looks human and inspiring about it. To the working scientist, it is so full of interest and so fascinating that he can only pity the layman.

I think the key here is to make more people insiders. This is what Alan Kay urges us to do -- he's been saying it for thirty years. The best way to share the thrill is to help people do what we do, not (just) tell them stories.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General