June 30, 2015 2:39 PM

Software Patterns Are Still For Humans

I recently found myself reading a few of Gregor Hohpe's blog posts and came across Design Patterns: More Than Meets The Eye. In it, Hohpe repeats a message that needs to be repeated every so often even now, twenty years after the publication of the GoF book: software patterns are fundamentally about human communication:

The primary purpose of patterns is to help humans understand design problems. Solving design problems is generally much more difficult than typing in some code, so patterns have enormous value in themselves. Patterns owe their popularity to this value. A better hammer can help speed up the construction of a bed, but a pattern helps us place the bed in a way that makes us feel comfortable.

The last sentence of that paragraph is marvelous.

Hohpe published that piece five and a half years ago. People who write or teach software patterns find themselves telling a similar story all the time, answering questions like the ones that motivated his post. Earlier this year, Avdi Grimm wrote a like-minded piece, Patterns are for People, in which he took his own shot at dispelling the misunderstandings of colleagues and friends:

There's a meme, originating from certain corners of the Functional side of programming, that "patterns are a language smell". The implication being that "good" languages either already encode the patterns as language features, or they provide the tools to extend the language such that modeling the pattern explicitly isn't needed.

This misses the point on rather a lot of levels.

Design patterns that are akin to hammers for making better code are plentiful and often quite helpful. But we need more software patterns that help us place our beds in ways that increase human comfort.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

June 29, 2015 1:58 PM

Bridging the Gap Between Learning and Doing

a sketch of bridging the gap

I recently learned about the work of Amelia McNamara via this paper published as Research Memo M-2014-002 by the Viewpoints Research Institute. McNamara is attacking an important problem: the gap between programming tools for beginners and programming tools for practitioners. In Future of Statistical Programming, she writes:

The basic idea is that there's a gap between the tools we use for teaching/learning statistics, and the tools we use for doing statistics. Worse than that, there's no trajectory to make the connection between the tools for learning statistics and the tools for doing statistics. I think that learners of statistics should also be doers of statistics. So, a tool for statistical programming should be able to step learners from learning statistics and statistical programming to truly doing data analysis.

"Learners of statistics should also be doers of statistics." -- yes, indeed. We see the same gap in computer science. People who are learning to program are programmers. They are just working at a different level of abstraction and complexity. It's always a bit awkward, and often misleading, when we give novice programmers a different set of tools than we give professionals. Then we face a new learning barrier when we ask them to move up to professional tools.

That doesn't mean that we should turn students loose unprotected in the wilds of C++, but it does mean that we need a pedagogically sound trajectory for making the connection between novice languages and tools and those used by more advanced programmers.

It also doesn't mean that we can simply choose a professional language that is in some ways suitable for beginners, such as Python, and not think any more about the gap. My recent experience reminds me that there is still a lot of complexity we must help our students manage.

McNamara's Ph.D. dissertation explored some of the ways to bridge this gap in the realm of statistics. It starts from the position that the gap should not exist and suggests ways to bridge it, via both better curricula and better tools.

Whenever I experience this gap in my teaching or see researchers trying to make it go away, I think back to Alan Kay's early vision for Smalltalk. One of the central tenets of the Smalltalk agenda was to create a language flexible and rich enough to accompany the beginner as he or she grew in knowledge and skill, opening up to a new level each time the learner was ready for something more powerful. Just as a kindergartener learns the same English language used by Shakespeare and Joyce, a beginning programmer might learn the same language as Knuth and Steele.

We in CS haven't done an especially good job at this over the years. Matthias Felleisen and the How to Design Programs crew have made perhaps the most successful effort thus far. (See *SL, Not Racket for a short note on the idea.) But this project has not yet made a lot of headway in CS education. Perhaps projects such as McNamara's can help make inroads for domain-specific programmers. Alan Kay may harbor a similar hope; he served as a member of McNamara's Ph.D. committee.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 22, 2015 3:27 PM

Strategy Under Time Constraints

an old analog chess clock

In Proving Too Much, Scott Alexander writes this about a rhetorical strategy that most people disapprove of:

Because here is a fundamental principle of the Dark Arts -- you don't need an argument that can't be disproven, only an argument that can't be disproven in the amount of time your opponent has available.

This is a dark art in the world of ideas, where truth matters more than winning an argument. But it is a valuable strategy in games like chess, which are often played under time constraints. In competition, winning sometimes matters more than beauty or truth.

Suppose that my opponent has only a few minutes or seconds left on the clock. Suppose also that it's my move and that I have two candidate moves. One is objectively better, in that it leads to the better expected outcome for me in theory, but it is easy for my opponent to find good responses. The other move is weaker, perhaps even allowing my opponent to gain an advantage over me, but it would be hard for her to find the right path in the time available.

In this case, I may actually want to play the weaker move, because it maximizes my chance of winning in the circumstances of the game. My opponent has to use extra time to untangle the complexity of the position, and even if she finds the right move, there may not be enough time left to execute the plan. This approach is more volatile for me than playing the safer move, as it increases my risk of losing at the same time that it increases my chances of prevailing. But on balance, I am better off.

This may seem like a crazy strategy, but anyone who has played a lot of speed chess knows its value. Long-time world champion Emanuel Lasker was reputed to employ a similar strategy, sometimes playing the move that would most unsettle the particular opponent he faced that day, rather than the objectively best move. (Wikipedia says, though, that this reputation may have been undeserved.)

There are chessplayers who would object to this strategy as much as people object to its use in argumentation. There is truth in chess, too, and most chessplayers deeply appreciate making beautiful moves and playing beautiful games. Some grandmasters have sought beautiful combinations to their own detriment. For example, Mikhail Tal may have been able to retain or regain his world title if not for a propensity to seek complication in search of beauty. He gave us many brilliancies as a result, but he also lost just often enough to keep him on the fringes of the world championship.

Much of the time, though, we chessplayers are trying to win the game, and practicing the dark arts is occasionally the best way to do so. That may mean making a move that confounds the opponent just long enough to win the game.


Posted by Eugene Wallingford | Permalink | Categories: General

June 16, 2015 3:17 PM

Dr. Seuss on Research

Reading about the unusual ideas in TempleOS reminded me of a piece of advice I received from the great philosopher of epistemology, Dr. Seuss:

If you want to get eggs
you can't buy at a store,
You have to do things
never thought of before.

As Peter T. Hooper learned in Scrambled Eggs Super, discovering or creating something new requires that we think unusual, or even outrageous, thoughts.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 14, 2015 9:17 AM

Software Has Its Own Gresham's Law

Let's call it Sustrik's Law, as its creator does:

Well-designed components are easy to replace. Eventually, they will be replaced by ones that are not so easy to replace.

This is a dandy observation of how software tends to get worse over time, in a natural process of bad components replacing good ones. It made me think of Gresham's Law, which I first encountered in my freshman macroeconomics course:

When a government overvalues one type of money and undervalues another, the undervalued money will leave the country or disappear from circulation into hoards, while the overvalued money will flood into circulation.

A more compact form of this law is, "Bad money drives good money out of circulation."

My memory of Gresham's Law focuses more on human behavior than government behavior. If people value gold more than a paper currency, even though the currency denominates a specific amount of gold, then they will use the paper money in transactions and hoard the gold. The government can redenominate the paper currency at any time, but the gold will always be gold. Bad money drives out the good.

In software, bad components drive good components out of a system for different reasons. Programmers don't hoard good components; a component sitting unused has no particular value, now or in the future. The reason is simply pragmatic. If a component is hard to replace, then we are less likely to replace it. It will remain a part of the system over time precisely because it's hard to take out. Conversely, a component that is easy to replace is one that we may well replace.
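Here is a minimal sketch of the dynamic in Python, with hypothetical names standing in for any real system:

    # A minimal sketch with hypothetical names, not any real system.
    # DictStore is easy to replace: callers depend only on save() and load().

    class DictStore:
        """Well-designed: a narrow interface, trivially swappable."""
        def __init__(self):
            self._data = {}

        def save(self, key, value):
            self._data[key] = value

        def load(self, key):
            return self._data[key]

    # LeakyStore is hard to replace: it exposes its internal dict,
    # so callers couple to the representation instead of the interface.

    class LeakyStore(DictStore):
        @property
        def data(self):
            return self._data

    def report(store):
        # This caller reaches past the interface. Replacing LeakyStore
        # now means rewriting every function like this one, so nobody does.
        return sorted(store.data.keys())

Swapping DictStore for a file-backed or networked store touches nothing but the class itself. Swapping LeakyStore means chasing down every report()-style caller, which is exactly why it stays.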

We can also think of this in evolutionary terms, as Brian Foote and Joe Yoder did in The Selfish Class: A hard-to-replace component is better adapted for survival than one that is easy to replace. Designing components to be better for programmers may make them less likely to survive in the long term. How is that for the bad driving out the good?

When we look at this from the perspective of the software system itself, Sustrik's Law reminds us that software is subject to a particular kind of entropy, in which well-designed systems with clean interfaces devolve toward big balls of mud (another term coined by Foote and Yoder). Programmers do not yet have a simple formula to predict this entropy, such as Gibbs' entropy formula for thermodynamic systems, and perhaps we never will. But then, computer science is still young. There is a lot we don't know.
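(For reference, Gibbs' formula gives the entropy of a thermodynamic system as S = -k_B Σ_i p_i ln p_i, where p_i is the probability of the system occupying microstate i and k_B is Boltzmann's constant. It is that kind of compact, predictive expression that software design still lacks.)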

Ideas about software have so many connections to other disciplines. I rely on many connections to help me think about them, too. Hat tips to Brian Rice for retweeting this tweet about Sustrik's Law, to Jeff Miller for reminding me about "The Selfish Class", and to Henrik Johansson for suggesting the connection to Gibbs' formula.


Posted by Eugene Wallingford | Permalink | Categories: Software Development

June 12, 2015 2:39 PM

A Cool Example of Turning Data into Program: TempleOS

Hyperlinks that execute code, rather than transfer us to a data file:

In a file from the TempleOS source code, one line contains the passage "Several other routines include a ...", where the "other routines" part is a hyperlink. Unlike in HTML, where that ... may lead to a page listing those other routines, here a DolDoc macro is used so that a grep is actually performed when you click on it. While the HTML version could become stale if no-one updated it, this is always up-to-date.

This comes from Richard Milton's A Constructive Look At TempleOS, which highlights some of the unusual features of an OS I had never heard of until I ran across his article. As I read it, I thought of Alan Kay's assertion that a real programming language should eliminate the need for an operating system at all. The language should give programmers the means to access and marshal all the resources of the computer. Smalltalk is a language that aspired to this goal. Today, the best example of this idea is probably Racket, which continues to put more of the underlying system into the hands of programmers via the language itself. That is an essential element of the Racket Way.

TempleOS comes at this idea from the other side, as an operating system that puts as much computing as it can into the hands of the user. This includes programming, in the form of HolyC, a homegrown variant of C. TempleOS is written in HolyC, but HolyC is also the scripting language of the system's REPL. It's odd to talk about programming TempleOS at all, though. As Milton points out, like the Xerox Alto, Oberon, and Plan 9, TempleOS "blurs the lines between programs and documents". Writing a program is like creating a document of any other sort, and creating a document of any sort is a form of programming.
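To make the contrast concrete, here is a rough Python sketch of the difference between a static link and a live one. It is an illustration only; TempleOS implements the idea with DolDoc macros and HolyC, and all of these names are hypothetical:

    import subprocess

    # A static link records an answer computed once; it can go stale.
    static_link = {"text": "other routines", "target": "routines_list.html"}

    # A live link records the question instead and recomputes the answer
    # every time it is followed -- the grep-on-click idea from DolDoc.
    live_link = {
        "text": "other routines",
        "action": lambda: subprocess.run(
            ["grep", "-rn", "routine", "."],
            capture_output=True, text=True
        ).stdout,
    }

    def follow(link):
        # Run the link's action if it has one; otherwise hand back the target.
        return link["action"]() if "action" in link else link["target"]

The static link answers with whatever someone wrote down last; the live one answers with whatever is true right now.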

Trading data for code creates a different kind of barrier for new users of TempleOS. It also pays dividends by injecting a tempting sort of dynamism into the system.

In any case, programmers of a certain age will feel a kinship with the kind of experience that TempleOS seeks to provide. We grew up in an age when every computer was an open laboratory, just waiting for us to explore it at every level. TempleOS has the feel -- and, perhaps unfortunately, the look -- of the 1970s and 1980s.

Hurray for crazy little operating systems like TempleOS. Maybe we can learn something useful from them. That's how the world of programming languages works, too. If not, the creator can have a lot of fun making a new world, and the rest of us can share in the fun vicariously.


Posted by Eugene Wallingford | Permalink | Categories: Computing

June 09, 2015 2:48 PM

I'm Behind on Blogging About My Courses...

... so much so, that I may never catch up. The last year and a half have been crazy, and I simply have not set aside enough time to blog. A big part of the time crunch was teaching three heavy preps in 2014: algorithms, agile software development, and our intro course. It is fitting, then, that blogging about my courses has suffered most of all -- even though, in the moment, I often have plenty to say. Offhand, I can think of several posts for which I once had big plans and for which I still have drafts or outlines sitting in my ideas/ folder:

  • readers' thoughts on teaching algorithms in 2014, along with changes I made to my course. Short version: The old canon still covers most of the important bases.
  • reflections on teaching agile dev again after four years. Short version: The best learning still happens in the trenches working with the students, who occasionally perplex me and often amaze me.
  • reflections on teaching Python in the intro course for the first time. Short version: On balance, there are many positives, but wow, there is a lot of language there, and way too many resources.
  • a lament on teaching programming languages principles when the students don't seem to connect with the material. Surprise ending: Some students enjoyed the course more than I realized.

Thoughts on teaching Python stand out as especially trenchant even many months later. The intro course is so important because it creates habits and mindsets in students that long outlive the course. Teaching a large, powerful, popular programming language to beginners in the era of Google, Bing, and DuckDuckGo is a Sisyphean task. No matter how we try to guide the students' introduction to language features, the Almighty Search Engine sits ever at the ready, delivering size and complexity when they really need simple answers. Maybe we need language levels à la the HtDP folks.
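A crude sketch of what a "language level" might look like for Python, purely hypothetical: run student code with only a small whitelist of builtins visible, the way *SL hides advanced forms until the learner is ready.

    # A crude, hypothetical sketch of a "beginner level" for Python.
    # Student code sees only the whitelisted names; everything else
    # raises NameError, the way *SL hides forms beyond the level.

    BEGINNER_BUILTINS = {"print": print, "range": range, "len": len}

    def run_at_beginner_level(source):
        exec(source, {"__builtins__": BEGINNER_BUILTINS})

    run_at_beginner_level("for i in range(3): print(i)")   # allowed
    # run_at_beginner_level("open('grades.txt')")          # NameError at this level

Real language levels would need friendlier error messages and a designed progression between levels, but the mechanism is not exotic.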

Alas, my backlog is so deep that I doubt I will ever have time to cover much of it. Life goes on, and new ideas pop up every day. Perhaps I can make time for the posts outlined above.

Right now, my excitement comes from the prospect of teaching my compilers course again for the first time in two years. The standard material still provides a solid foundation for students who are heading off into the world of software development. But in the time since I last taught the course, some neat things have happened in the compiler world that will make the course better, if only by putting the old stuff into a more modern context. Consider this week's announcements about Swift, in particular that the source code is being open-sourced and the run-time ported to Linux. The moment these two things happen, the language instantly becomes of greater interest to more of my students. Its openness also makes it more suitable as content for a university course.

So, there will be plenty to blog about, even if I leave my backlog untouched. That's a good thing.


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 07, 2015 9:26 AM

Agile Moments, Ernest Hemingway Edition

I couldn't help thinking of big visible charts when I read this paragraph in The Paris Review's interview with Ernest Hemingway:

[Hemingway] keeps track of his daily progress -- "so as not to kid myself" -- on a large chart made out of the side of a cardboard packing case and set up against the wall under the nose of a mounted gazelle head. The numbers on the chart showing the daily output of words differ from 450, 575, 462, 1250, back to 512, the higher figures on days [he] puts in extra work so he won't feel guilty spending the following day fishing on the Gulf Stream.

He uses the chart to keep himself honest. Even our greatest writers can delude themselves into thinking they are making enough progress when they aren't. All the more so for those of us who are still learning, whether how to run a marathon, how to write prose, or how to make software. When a group of people are working together, a chart can help the individuals maintain a common, and honest, understanding of how the team is doing.

Oh, and notice Hemingway's technology: the side of a cardboard packing case. No fancy dashboard for this writer who is known for his direct, unadorned style. If you think you need a digital dashboard with toggles, flashing lights, and subviews, you are doing it wrong. The point of the chart is to keep you honest, not give you another thing to do when you are not doing what you should be doing.

There is another lesson in this passage beyond the chart, about sustainable pace. Most of the numbers are in the ballpark of 500 (average: 499 3/4!), except for one day when he put in a double day. Perhaps 500 words a day is a pace that Hemingway finds productive over time. Yet he allows himself an occasional bit of overtime -- for something important, like time away from his writing desk, out on the water. Many of us programmers need to be reminded every so often that getting away from our work is valuable, and worth an occasional 0 on the big visible chart. It's also a more human motivation for overtime than the mad rush to a release date.
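The arithmetic bears this out, as a quick sketch confirms: set aside the 1250-word double day, and the remaining four figures average exactly 499 3/4.

    typical_days = [450, 575, 462, 512]            # the chart, minus the 1250-word day
    print(sum(typical_days) / len(typical_days))   # prints 499.75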

A few pages later in the interview, we read Hemingway repeating a common adage among writers that also echoes nicely against the agile practices:

You read what you have written and, as you always stop when you know what is going to happen next, you go on from there.

Hemingway stops each day at a point where the story will pull him forward the next morning. In this, XP devotees can recognize the habit of ending each day with a broken test. In the morning, or whenever we next fire up our editors, the broken test tells us exactly where to begin and gives us a concrete goal. By the time the test passes, our minds are ready to move on to something new.
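As a minimal illustration, here is what the bookmark might look like in Python with pytest; the names are hypothetical:

    # Last act of the day: write the test for tomorrow's first task
    # and leave it failing. It marks exactly where to go on from.

    def next_chapter(outline):
        raise NotImplementedError      # tomorrow's work starts here

    def test_next_chapter_extends_story():
        assert next_chapter(["Ch. 1"]) == ["Ch. 1", "Ch. 2"]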

Agility is useful when fighting bulls. Apparently, it helps when writing novels, too.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development

June 04, 2015 2:33 PM

If the Web is the Medium, What is the Message?

How's this for a first draft:

History may only be a list of surprises, but you sure as heck don't want to lose the list.

That's part of the message in Bret Victor's second 'Web of Alexandria' post. He puts it in starker terms:

To forget the past is to destroy the future. This is where Dark Ages come from.

Those two posts followed a sobering observation:

60% of my fav links from 10 yrs ago are 404. I wonder if Library of Congress expects 60% of their collection to go up in smoke every decade.

But it's worse than that, Victor tells us in his follow-up. As his tweet notes, the web has turned out to be an unreliable publication medium. We publish items because we want them to persist in the public record, yet often they don't persist for very long. However, the web has turned out to be a pernicious conversational medium as well. We want certain items shared on the web to be ephemeral, yet often those items are the ones that last forever. At one time, this may have seemed like only an annoyance, but now we know it to be dangerous.

The problem isn't that the web is a bad medium. In one sense, the web isn't really a medium at all; it's an infrastructure that enables us to create new kinds of media with historically uncharacteristic ease. The problem is that we are using web-based media for many different purposes, without understanding how each medium determines "the social and temporal scope of its messages".

The same day I read Victor's blog post, I saw this old Vonnegut quote fly by on Twitter:

History is merely a list of surprises. ... It can only prepare us to be surprised yet again.

Alas, on the web, history appears to be a list of cat pictures and Tumblr memes, with all the important surprises deleted when the author changed internet service providers.

In a grand cosmic coincidence, on the same day I read Victor's blog post and saw the Vonnegut quote fly by, I also read a passage from Marshall McLuhan in a Farnam Street post. It ends:

The modern world abridges all historical times as readily as it reduces space. Everywhere and every age have become here and now. History has been abolished by our new media.

The internet certainly amplifies the scale of McLuhan's worry, but the web has created a unique form of erasure as well. I'm sure McLuhan would join Victor in etching an item on history's list of surprises:

Protect the past.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

June 02, 2015 1:46 PM

"I Just Need a Programmer", Screenplay Edition

Noted TV writer, director, producer, and blogger Ken Levine takes on a frequently-asked question in the latest edition of his "Friday Questions" feature:

I have a great idea for a movie, but I'm not a writer, I'm not in show biz, and I don't live in New York or LA. What do I do with this great idea? (And I'm sure you've never heard this question before, right?)

Levine is gentle in response:

This question does come up frequently. I wish I had a more optimistic answer. But the truth is execution is more valued than ideas. ...

Is there any domain where this isn't true? Yet professionals in every domain seem to receive this question all the time. I certainly receive the "I just need a programmer..." phone call or e-mail every month. If I went to cocktail parties, maybe I'd hear it at them, too.

The bigger the gap between idea and product, the more valuable execution is, relatively speaking, than the mere idea. For many app ideas, executing the idea is not all that far beyond the reach of many people. Learn a little Objective-C, and away you go. In three or four years, you'll be set! By comparison, writing a screenplay that anyone in Hollywood will look at (let alone turn into a blockbuster film) seems like climbing Mount Everest.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

June 01, 2015 2:21 PM

I'd Like to Be Bored for a While

When asked if he has ever been bored, Italo Calvino responded:

Yes, in my childhood. But it must be pointed out that childhood boredom is a special kind of boredom. It is a boredom full of dreams, a sort of projection into another place, into another reality. In adulthood boredom is made of repetition, it is the continuation of something from which we are no longer expecting any surprise. And I -- would that I had time to get bored today!

Children are better at boredom than adults are, because we let them be. We should let adults be good at boredom every so often, too.

(Passage from this Paris Review interview, which I quoted a couple of times several weeks ago.)


Posted by Eugene Wallingford | Permalink | Categories: General