July 25, 2012 4:09 PM

I Gotta Get More Famous...

Duncan Davidson tweeted a check of Google search suggestions for his name. Now, Duncan's a well known guy in a certain corner of the technical world, but he didn't even crack the top six:

top ten Google search suggestions for 'duncan'

A spell-corrected "dunkin donuts" did, though. Ha.

Hey, why not try "Eugene"? Until 1970, it was a much more common first name for boys than "Duncan":

a NameSurfer cap of the popularity of Eugene versus Duncan

I'll bet you didn't know that Eugene had ever been so popular! Alas, since 1930, it has been in steady decline. Duncan, on the other hand, has been on the rise since 1970 and passed Eugene in 2000. I haven't grabbed 2010 data to update my Name Surfer app, but I'm sure the recent trajectories have continued.

So:

top ten Google search suggestions for 'eugene'

Eugene, Oregon, is my Duncan Hines.

I don't make my first appearance in the top ten until we get to "eugene wal":

top ten Google search suggestions for 'eugene wal'

One more letter vaults me near the top of the list:

top ten Google search suggestions for 'eugene wall'

Finally, we add an 'i' and push the pesky Eugene Wallace off his throne. Indeed, I begin to dominate:

top ten Google search suggestions for 'eugene walli'

Even so, my blog barely nudges past Mr. Wallace. I am so unpopular that Google would rather believe users have misspelled his name than believe they are looking for my Twitter page.

I think I'd better start working on my legacy.


Posted by Eugene Wallingford | Permalink | Categories: Personal

July 23, 2012 3:14 PM

Letting Go of Old Strengths

Ward Cunningham commented on what it's like to be "an old guy who's still a programmer" in his recent Dr. Dobb's interview:

A lot of people think that you can't be old and be good, and that's not true. You just have to be willing to let go of the strengths that you had a year ago and get some new strengths this year. Because it does change fast, and if you're not willing to do that, then you're not really able to be a programmer.

That made me think of the last comment I made in my posts on JRubyConf:

There is a lot of stuff I don't know. I won't run out of things to read and learn and do for a long, long, time.

This is an ongoing theme in the life of a programmer, in the life of a teacher, and in the life of an academic: the choice we make each day between keeping up and settling down. Keeping up is a lot more fun, but it's work. If you aren't comfortable giving up what you were awesome at yesterday, it's even more painful. I've mostly been lucky to enjoy learning new stuff more than knowing the old stuff. May you be so lucky.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 20, 2012 3:39 PM

A Philosopher of Imitation

Ian Bogost, in The Great Pretender: Turing as a Philosopher of Imitation, writes:

Intelligence -- whatever it is, the thing that goes on inside a human or a machine -- is less interesting and productive a topic of conversation than the effects of such a process, the experience it creates in observers and interlocutors.

This is a very nice one-sentence summary of Turing's thesis in Computing Machinery and Intelligence. I wrote a bit about Turing's ideas on machine intelligence a few months back, but the key idea in Bogost's essay relates more closely to my discussion of Turing's ideas on representation and universal machines.

In this centennial year of his birth, we can hardly go wrong in considering again and again the depth of Turing's contributions. Bogost uses a lovely turn of phrase in his title: a philosopher of imitation. What may sound like a slight or a trifle is, in fact, the highest of compliments. Turing made that thinkable.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

July 18, 2012 2:31 PM

Names, Values, and The Battle of Bull Run

the cover of 'Encyclopedia Brown Finds the Clues'

Author Donald Sobol died Monday. I know him best from his long-running series, Encyclopedia Brown. Like many kids of my day, I loved these stories. I couldn't get enough. Each book consisted of ten or so short mysteries solved by Encyclopedia or Sally Kimball, his de facto partner in the Brown Detective Agency. I wanted to be Encyclopedia.

The stories were brain teasers. Solving them required knowledge and, more important, careful observation and logical deduction. I learned to pay close attention while reading Encyclopedia Brown, otherwise I had no hope of solving the crime before Encyclopedia revealed the solution. In many ways, these stories prepared me for a career in math and science. They certainly were a lot of fun.

One of the stories I remember best after all these years is "The Case of the Civil War Sword", from the very first Encyclopedia Brown book. I'm not the only person who found it memorable; Rob Bricken ranks it #9 among the ten most difficult Encyclopedia Brown mysteries. The solution to this case turned on the fact that one battle had two different names. Northerners often named battles for nearby bodies of water or prominent natural features, while Southerners named them for the nearest town or prominent man-made features. So, the First Battle of Bull Run and the First Battle of Manassas were the same event.

This case taught me a bit of historical trivia and opened my mind to the idea that naming things from the Civil War was not trivial at all.

This story taught me more than history, though. As a young boy, it stood out as an example of something I surely already knew: names aren't unique. The same value can have different names. In a way, Encyclopedia Brown taught me one of my first lessons about computer science.
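
To put that lesson in programming terms, here is a minimal Java sketch of my own (the class and variable names are purely illustrative): one object, two names, just as one battle answered to both "Bull Run" and "Manassas".

    // Two names, one value: both variables refer to the same object.
    public class BattleNames {
        public static void main(String[] args) {
            StringBuilder battle   = new StringBuilder("First Battle of Bull Run");
            StringBuilder bullRun  = battle;    // the Northern name
            StringBuilder manassas = battle;    // the Southern name

            manassas.append(" (a.k.a. Manassas)");

            // A change made under one name is visible under the other,
            // because both names refer to the same value.
            System.out.println(bullRun);               // First Battle of Bull Run (a.k.a. Manassas)
            System.out.println(bullRun == manassas);   // true: one object, two names
        }
    }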

~~~~

IMAGE: the cover of Encyclopedia Brown Finds the Clues, 1966. Source: Topless Robot.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

July 16, 2012 3:02 PM

Refactoring Everywhere: In Code and In Text

Charlie Stross is a sci-fi writer. Some of my friends have recommended his fiction, but I've not read any. In Writing a novel in Scrivener: lessons learned, he, well, describes what he has learned writing novels using Scrivener, an app for writers well known in the Mac OS X world.

I've used it before on several novels, notably ones where the plot got so gnarly and tangled up that I badly needed a tool for refactoring plot strands, but the novel I've finished, "Neptune's Brood", is the first one that was written from start to finish in Scrivener...

... It doesn't completely replace the word processor in my workflow, but it relegates it to a markup and proofing tool rather than being a central element of the process of creating a book. And that's about as major a change as the author's job has undergone since WYSIWYG word processing came along in the late 80s....

My suspicion is that if this sort of tool spreads, the long-term result may be better structured novels with fewer dangling plot threads and internal inconsistencies. But time will tell.

Stross's lessons don't all revolve around refactoring, but being able to manage and manipulate the structure of the evolving novel seems central to his satisfaction.

I've read a lot of novels that seemed like they could have used a little refactoring. I always figured it was just me.

The experience of writing anything in long form can probably be improved by a good refactoring tool. I know I find myself doing some pretty large refactorings when I'm working on the set of lecture notes for a course.

Programmers and computer scientists have the advantage of being more comfortable writing text in code, using tools such as LaTeX and Scribble, or homegrown systems. My sense, though, is that fewer programmers use tools like this, at least at full power, than might benefit from doing so.

Like Stross, I have a predisposition against using tools with proprietary data formats. I've never lost data stored in plaintext to version creep or application obsolescence. I do use apps such as VoodooPad for specific tasks, though I am keenly aware of the exit strategy (export to text or RTFD) and the pain trade-off at exit (the more VoodooPad docs I create, the more docs I have to remember to export before losing access to the app). One of the things I like most about MacJournal is that it's nothing but a veneer over a set of Unix directories and RTF documents. The flip side is that it can't do for me nearly what Scrivener can do.

Thinking about a prose writing tool that supports refactoring raises an obvious question: what sort of refactoring operations might it provide automatically? Some of the standard code refactorings might have natural analogues in writing, such as Extract Chapter or Inline Digression.
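
For the code-side analogue, here is a minimal Java sketch of Extract Method, the refactoring that Extract Chapter would mimic. The example and its names are mine, just for illustration: a coherent chunk of a long routine is pulled out and given a name of its own.

    // Before: one method does two jobs at once.
    double invoiceTotal(double[] prices) {
        double subtotal = 0.0;
        for (double p : prices) {
            subtotal += p;
        }
        return subtotal * 1.07;    // 7% sales tax, computed inline
    }

    // After Extract Method: the tax computation gets its own name.
    double invoiceTotal(double[] prices) {
        double subtotal = 0.0;
        for (double p : prices) {
            subtotal += p;
        }
        return addSalesTax(subtotal);
    }

    double addSalesTax(double amount) {
        return amount * 1.07;
    }

A prose tool that could do Extract Chapter the way an IDE does Extract Method would let a novelist lift a digression out of a chapter and still refer to it by name.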

Thinking about automated support for refactoring raises another obvious question, the importance of which is surely as clear to novelists as to software developers: Where are the unit tests? How will we know we haven't broken the story?

I'm not being facetious. The biggest fear I have when I refactor a module of a course I teach is that I will break something somewhere down the line in the course. Your advice is welcome!


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 14, 2012 11:01 AM

"Most Happiness Comes From Friction"

Last time, I mentioned again the value in having students learn broadly across the sciences and humanities, including computer science. This is a challenge going in both directions. Most students like to concentrate on one area, for a lot of different reasons. Computer science looks intimidating to students in other majors, perhaps especially to the humanities-inclined.

There is hope. Earlier this year, the Harvard Magazine ran The Frisson of Friction, an essay by Sarah Zhang, a non-CS student who decided to take CS 50, Harvard's intro to computer science. Zhang tells the story of finding a thorny, semicolon-induced bug in a program (an extension for Google's Chrome browser) on the eve of her 21st birthday. Eventually, she succeeded. In retrospect, she writes:

Plenty of people could have coded the same extension more elegantly and in less time. I will never be as good a programmer as -- to set the standard absurdly high -- Mark Zuckerberg. But accomplishments can be measured in terms relative to ourselves, rather than to others. Rather than sticking to what we're already good at as the surest path to résumé-worthy achievements, we should see the value in novel challenges. How else will we discover possibilities that lie just beyond the visible horizon?

... Even the best birthday cake is no substitute for the deep satisfaction of accomplishing what we had previously deemed impossible -- whether it's writing a program or writing a play.

The essay addresses some of the issues that keep students from seeking out novel challenges, such as fear of low grades and fear of looking foolish. At places like Harvard, students who are used to succeeding find themselves boxed in by their friends' expectations, and their own, but those feelings are familiar to students at any school. Then you have advisors who subtly discourage venturing too far from the comfortable, out of their own unfamiliarity and fear. This is a social issue as big as any pedagogical challenge we face in trying to make introductory computer science more accessible to more people.

With work, we can help students feel the deep satisfaction that Zhang experienced. Overcoming challenges often leads to that feeling. She quotes a passage about programmers in Silicon Valley, who thrive on such challenges: "Most happiness probably comes from friction." Much satisfaction and happiness come out of the friction inherent in making things. Writing prose and writing programs share this characteristic.

Sharing the deep satisfaction of computer science is a problem with many facets. Those of us who know the satisfaction know it's a problem worth solving.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

July 13, 2012 12:02 PM

How Science -- and Computing -- Are Changing History

While reading a recent Harvard Magazine article about Eric Mazur's peer instruction technique in physics teaching, I ran across a link to an older article that fascinated me even more! Who Killed the Men of England? tells several stories of research at the intersection of history, archaeology, genomics, evolution, demography, and simulation, such as the conquest of Roman England by the Anglo-Saxons.

Not only in this instance, but across entire fields of inquiry, the traditional boundaries between history and prehistory have been melting away as the study of the human past based on the written record increasingly incorporates the material record of the natural and physical sciences. Recognizing this shift, and seeking to establish fruitful collaborations, a group of Harvard and MIT scholars have begun working together as part of a new initiative for the study of the human past. Organized by [professor of medieval history Michael] McCormick, who studies the fall of the Roman empire, the aim is to bring together researchers from the physical, life, and computer sciences and the humanities to explore the kinds of new data that will advance our understanding of human history.

... The study of the human past, in other words, has entered a new phase in which science has begun to tell stories that were once the sole domain of humanists.

I love history as much as computing and was mesmerized by these stories of how scientists reading the "material record" of the world are adding to our knowledge of the human past.

However, this is more than simply a one-way path of information flowing from scientists to humanists. The scientific data and models themselves are underconstrained. The historians, cultural anthropologists, and demographers are able to provide context to the data and models and so extract even more meaning from them. This is a true collaboration. Very cool.

The rise of science is erasing boundaries between the disciplines that we all studied in school. Scholars are able to define new disciplines, such as "the study of the human past", mentioned in the passage above. These disciplines are organized with a greater focus on what is being studied than on how we are studying it.

We are also blurring the line between history and pre-history. It used to be that history required a written record, but that is no longer a hard limit. Science can read nature's record. Computer scientists can build models using genomic data and migration data that suggest possible paths of change when the written and scientific records are incomplete. These ideas become part of the raw material that humanists use to construct a coherent story of the past.

This change in how we are able to study the world highlights the importance of a broad education, something I've written about a few times recently [ 1 | 2 | 3 ] and not so recently. This sort of scholarship is best done by people who are good at several things, or at least curious and interested enough in several things to get to know them intimately. As I wrote in Failure and the Liberal Arts, it's important both not to be too narrowly trained and not to be too narrowly "liberally educated".

Even at a place like Harvard, this can leave scholars in a quandary:

McCormick is fired with enthusiasm for the future of his discipline. "It is exciting. I jump up every morning. But it is also challenging. Division and department boundaries are real. Even with a generally supportive attitude, it is difficult [to raise funds, to admit students who are excellent in more than one discipline, and so on]. ..."

So I will continue to tell computer science students to take courses from all over the university, not just from CS and math. This is one point of influence I have as a professor, advisor, and department head. And I will continue to look for ways to encourage non-CS students to take CS courses and students outside the sciences to study science, including CS. As that paragraph ends:

"... This is a whole new way of studying the past. It is a unique intellectual opportunity and practically all the pieces are in place. This should happen here--it will happen, whether we are part of it or not."

"Here" doesn't have to be Harvard. There is a lot of work to be done.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

July 11, 2012 2:45 PM

A Few Comments on the Alan Kay Interview, and Especially Patterns

Alan Kay

Many of my friends and colleagues on Twitter today are discussing the Interview with Alan Kay posted by Dr. Dobb's yesterday. I read the piece this morning while riding the exercise bike and could not contain my desire to underline passages, star paragraphs, and mark it up with my own comments. That's hard to do while riding hard, hurting a little, and perspiring a lot. My desire propelled me forward in the face of all these obstacles.

Kay is always provocative, and in this interview he leaves no oxen ungored. Like most people do whenever they read outrageous and provocative claims, I cheered when Kay said something I agreed with and hissed -- or blushed -- when he said something that gored me or one of my pet oxen. Twitter is a natural place to share one's cheers and boos for an article with or by Alan Kay, given the amazing density of soundbites one finds in his comments about the world of computing.

(One might say the same thing about Brian Foote, the source of both soundbites in that paragraph.)

I won't air all my cheers and hisses here. Read the article, if you haven't already, and enjoy your own. I will comment on one paragraph that didn't quite make me blush:

The most disastrous thing about programming -- to pick one of the 10 most disastrous things about programming -- there's a very popular movement based on pattern languages. When Christopher Alexander first did that in architecture, he was looking at 2,000 years of ways that humans have made themselves comfortable. So there was actually something to it, because he was dealing with a genome that hasn't changed that much. I think he got a few hundred valuable patterns out of it. But the bug in trying to do that in computing is the assumption that we know anything at all about programming. So extracting patterns from today's programming practices ennobles them in a way they don't deserve. It actually gives them more cachet.

Long-time Knowing and Doing readers know that patterns are one of my pet oxen, so it would have been natural for me to react somewhat as Keith Ray did and chide Kay for what appears to be a typical "Hey, kids, get off my lawn!" attitude. But that's not my style, and I'm such a big fan of Kay's larger vision for computing that my first reaction was to feel a little sheepish. Have I been wasting my time on a bad idea, distracting myself from something more important? I puzzled over this all morning, and especially as I read other people's reactions to the interview.

Ultimately, I think that Kay is too pessimistic when he says we hardly know anything at all about programming. We may well be closer to the level of the Egyptians who built the pyramids than we are to the engineers who built the Empire State Building. But I simply don't believe that people such as Ward Cunningham, Ralph Johnson, and Martin Fowler don't have a lot to teach most of us about how to make better software.

Wherever we are, I think it's useful to identify, describe, and catalog the patterns we see in our software. Doing so enables us to talk about our code at a level higher than parentheses and semicolons. It helps us bring other programmers up to speed more quickly, so that we don't all have to struggle through all the same detours and tar pits our forebears struggled through. It also makes it possible for us to talk about the strengths and weaknesses of our current patterns and to seek out better ideas and to adopt -- or design -- more powerful languages. These are themes Kay himself expresses in this very same interview: the importance of knowing our history, of making more powerful languages, and of education.

Kay says something about education in this interview that is relevant to the conversation on patterns:

Education is a double-edged sword. You have to start where people are, but if you stay there, you're not educating.

The real bug in what he says about patterns lies at one edge of the sword. We may not know very much about how to make software yet, but if we want to remedy that, we need to start where people are. Most software patterns are an effort to reach programmers who work in the trenches, to teach them a little of what we do know about how to make software. I can yammer on all I want about functional programming. If a Java practitioner doesn't appreciate the idea of a Value Object yet, then my words are likely wasted.
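
For the Java practitioners in the audience, here is a minimal sketch of what I mean by a Value Object; the Money class is my own illustration, not code from anyone's library. The idea is an immutable object whose identity is its value, so two instances with equal state are interchangeable.

    // A Value Object: immutable, compared by value rather than by identity.
    public final class Money {
        private final long cents;
        private final String currency;

        public Money(long cents, String currency) {
            this.cents = cents;
            this.currency = currency;
        }

        public Money plus(Money other) {
            // Produces a new value instead of modifying this one.
            // (A fuller version would check that the currencies match.)
            return new Money(this.cents + other.cents, this.currency);
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Money)) return false;
            Money that = (Money) o;
            return this.cents == that.cents && this.currency.equals(that.currency);
        }

        @Override
        public int hashCode() {
            return 31 * (int) (cents ^ (cents >>> 32)) + currency.hashCode();
        }
    }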

Ward Cunningham

Ironically, many argue that the biggest disappointment of the software patterns effort lies at the other edge of education's sword: an inability to move the programming world quickly enough from where it was in the mid-1990s to a better place. In his own Dr. Dobb's interview, Ward Cunningham observed with a hint of sadness that an unexpected effect of the Gang of Four Design Patterns book was to extend the life of C++ by a decade, rather than reinvigorating Smalltalk (or turning people on to Lisp). Changing the mindset of a large community takes time. Many in the software patterns community tried to move people past a static view of OO design embodied in the GoF book, but the vocabulary calcified more quickly than they could respond.

Perhaps that is all Kay meant by his criticism that patterns "ennoble" practices in a way they don't deserve. But if so, it hardly qualifies in my mind as "one of the 10 most disastrous things about programming". I can think of a lot worse.

Kurt Vonnegut

To all this, I can only echo the Bokononists in Kurt Vonnegut's novel Cat's Cradle: "Busy, busy, busy." The machinery of life is usually more complicated and unpredictable than we expect or prefer. As a result, reasonable efforts don't always turn out as we intend them to. So it goes. I don't think that means we should stop trying.

Don't let my hissing about one paragraph dissuade you from reading the Dr. Dobb's interview. As usual, Kay stimulates our minds and encourages us to do better.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

July 06, 2012 4:19 PM

Assume a Good Compiler and Write Readable Code

Greg Robbins and Ron Avitzur, the authors of MacOS's original Graphing Calculator, offer nine tips for Designing Applications for the Power Macintosh. All of them are useful whatever your target machine. One of my favorites is:

5. Avoid programming cleverness. Instead, assume a good compiler and write readable code.

This is good programming advice in nearly every situation, for all the software engineering reasons we know. Perhaps surprisingly, it is good advice even when you are writing code that has to be fast and small, as Robbins and Avitzur were:

Cycle-counting and compiler-specific optimizations are favorite pastimes of hackers, and sometimes they're important. But we could never have completed the Graphing Calculator in under six months had we worried about optimizing each routine. Rather, we dealt with speed problems only when they were perceptible to users.

We made no attempt to look at performance bottlenecks or at the compiled code of the Calculator until after running execution profiles. We were surprised where the time was being spent. Most of the time that the Calculator is compute-bound it's either in the math libraries or in QuickDraw. So little time is spent in our code that even compiling it unoptimized didn't slow it down perceptibly. Improving our code's performance meant calling the math libraries less often.

This has been my experience with every large program or set of programs I've written, too. I'm sure I know where the code is spending its time. Then I run the profiler, and it shows me I'm wrong. Donald Knuth famously warned us against small efficiencies and premature optimization.
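
When I do need to check my intuition, I start with the crudest measurement that could possibly work. Here is a minimal Java sketch along those lines; the loop body is a made-up stand-in for whatever code I suspect is slow, and a real profiler gives far better data.

    // Measure first: time the suspect code before "optimizing" it.
    public class TimeIt {
        public static void main(String[] args) {
            long start = System.nanoTime();

            double sum = 0.0;
            for (int i = 1; i <= 10000000; i++) {
                sum += Math.sqrt(i);    // stand-in for the code we suspect is slow
            }

            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("sum = " + sum + ", elapsed = " + elapsedMs + " ms");
        }
    }

More often than not, the numbers point somewhere other than where I was about to start tweaking.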

Robbins and Avitzur's advice also has a user-centered dimension.

Programmers are often tempted to spend time saving a few bytes or cycles or to fine-tune an algorithm. If the change isn't visible to users, however, the benefits may not extend beyond the programmer's satisfaction. When most of the code in an application is straightforward and readable, maintenance and improvements are easier to make. Those are changes that users will notice.

We write code for our users. Programmer satisfaction comes second. This passage reminds me of a lesson I internalized from the early days of extreme programming: At the end of the day, if you haven't added value for your customer, you haven't earned your day's pay.


Posted by Eugene Wallingford | Permalink | Categories: Software Development

July 05, 2012 2:48 PM

The Value of a Liberal Education, Advertising Edition

A few days ago, I mentioned James Webb Young's A Technique for Producing Ideas. It turns out that Young was in advertising. He writes:

The construction of an advertisement is the construction of a new pattern in this kaleidoscopic world in which we live. The more of the elements of that world which are stored away in that pattern-making machine, the mind, the more the chances are increased for the production of new and striking combinations, or ideas. Advertising students who get restless about the "practical" value of general college subjects might consider this.

Computer Science students, too.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

July 03, 2012 3:28 PM

A Little Zen, A Little Course Prep

I listened to about 3/4 of Zen and the Art of Motorcycle Maintenance on a long-ish drive recently. It's been a while since I've read the whole book, but I listen to it on tape once a year or so. It always gets my mind in the mood to think about learning to read, write, and debug programs.

This fall, I will be teaching our third course for the first time since I became head seven years ago. In that time, we changed the name of the course from "Object-Oriented Programming" to "Intermediate Computing". In many ways, the new name is an improvement. We want students in this course to learn a number of skills and tools in the service of writing larger programs. At a fundamental level, though, OOP remains the centerpiece of everything we do in the course.

As I listened to Pirsig make his way across the Great Plains, a few ideas stood out as I prepare to teach one of my favorite courses:

The importance of making your own thing, not just imitating others. This is always a challenge in programming courses, but for most people it is essential if we hope for students to maximize their learning. It underlies several other parts of Pirsig's zen and art, such as caring about our artifacts, and the desire to go beyond what something is to what it means.

The value of reading code, both good and bad. Even after only one year of programming, most students have begun to develop a nose for which is which, and nearly all have enough experience that they can figure out the difference with minimal interference from the instructor. If we can get them thinking about what features of a program make it good or bad, we can move on to the more important question: How can we write good programs? If we can get students to think about this, then they can see the "rules" we teach them for what they really are: guidelines, heuristics that point us in the direction of good code. They can learn the rules with an end in mind, and not as an end in themselves.

The value of grounding abstractions in everyday life. When we can ground our classwork in students' own experiences, they are better prepared to learn from it. Note that this may well involve undermining their naive ideas about how something works, or turning a piece of conventional wisdom from their first year on its head. The key is to make what they see and do matter to them.

One idea remains fuzzy in my head but won't let me go. While defining the analytic method, Pirsig talks briefly about the difference between analysis by composition and analysis by function. Given that this course teaches object-oriented programming in Java, there are so many ways in which this distinction could matter: composition and inheritance, instance variables and methods, state and behavior. I'm not sure whether there is anything particularly useful in Pirsig's philosophical discussion of this, so I'll think some more about it.
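
To make the distinction concrete for myself, here is a minimal Java sketch, with class names of my own invention, of the same small behavior expressed once by inheritance and once by composition.

    // By inheritance: a bounded counter is-a counter.
    class Counter {
        protected int count = 0;
        public void increment() { count++; }
        public int value()      { return count; }
    }

    class BoundedCounterByInheritance extends Counter {
        private final int max;
        public BoundedCounterByInheritance(int max) { this.max = max; }
        @Override
        public void increment() {
            if (count < max) super.increment();
        }
    }

    // By composition: a bounded counter has-a counter and delegates to it.
    class BoundedCounterByComposition {
        private final Counter counter = new Counter();
        private final int max;
        public BoundedCounterByComposition(int max) { this.max = max; }
        public void increment() {
            if (counter.value() < max) counter.increment();
        }
        public int value() { return counter.value(); }
    }

Neither version settles Pirsig's distinction, but writing them side by side is the kind of grounding in everyday code that I want students to do.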

I'm also thinking a bit about a non-Zen idea for the course: Mark Guzdial's method of worked examples and self-explanation. My courses usually include a few worked examples, but Mark has taken the idea to another level. More important, he pairs it with an explicit step in which students explain examples to themselves and others. This draws on results from research in CS education showing that learning and retention are improved when students explain something in their own words. I think this could be especially valuable in a course that asks students to learn a new style of writing code.

One final problem is on my mind right now, a more practical matter: a textbook for the course. When I last taught this course, I used Tim Budd's Understanding Object-Oriented Programming with Java. I have written in the past that I don't like textbooks much, but I always liked this book. I liked the previous multi-language incarnation of the book even more. Unfortunately, one of the purposes of this course is to have students learn Java reasonably well.

Also unfortunate is that Budd's OOP/Java book is now twelve years old. A lot has happened in the Java world in the meantime. Besides, as I found while looking for a compiler textbook last fall, the current asking price of over $120 seems steep -- especially for a CS textbook published in 2000!

So I persist in my quest. I'd love to find something that looks like it is from this century, perhaps even reflecting the impending evolution of the textbook we've all been anticipating. Short of that, I'm looking for a modern treatment of both OO principles and Java.

Of course, I'm a guy who still listens to books on tape, so take my sense of what's modern with a grain of salt.

As always, any pointers and suggestions are appreciated.


Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning