September 22, 2023 8:38 PM

Time and Words

Earlier this week, I posted an item on Mastodon:

After a few months studying and using CSS, every once in a while I am able to change a style sheet and cause something I want to happen. It feels wonderful. Pretty soon, though, I'll make a typo somewhere, or misunderstand a property, and not be able to make anything useful happen. That makes me feel like I'm drowning in complexity.

Thankfully, the occasional wonderful feelings — and my strange willingness to struggle with programming errors — pull me forward.

I've been learning a lot while preparing to teach our web development course [ 1 | 2 ]. Occasionally, I feel incompetent, or not very bright. It's good for me.

I haven't been blogging lately, but I've been writing lots of words. I've also been re-purposing and adapting many other words that are licensed to be reusable (thanks, David Humphrey and John Davis). Prepping a new course is usually prime blogging time for me, with my mind in a constant state of churn, but this course has me drowning in work to do. There is so much to learn, and so much new material — readings, session plans and notes, examples, homework assignments — to create.

I have made a few notes along the way, hoping to expand them into posts. Today they become part of this post.

VS Code

This is my first time in a long time using an editor configured to auto-complete and do all the modern things that developers expect these days. I figured, new tool, why not try a new workflow...

After a short period of breaking in, I'm quite enjoying the experience. One feature I didn't expect to use so much is the ability to collapse an HTML element. In a large HTML file, this has been a game changer for me. Yes, I know, this is old news to most of you. But as my brother loves to say when he gets something used or watches a movie everyone else has already seen, "But, hey, it's new to me!" VS Code's auto-complete across HTML, CSS, and JavaScript, with built-in documentation and links to MDN's more complete documentation, lets me type code much faster than ever before. It made me think of one of my favorite Kent Beck lines:

As the speed of development approaches infinity, reuse becomes irrelevant.

When programming, I often copy, paste, and adapt previous code. In VS Code, I have found myself moving fast enough that copy, paste, and adapt would slow me down. That sort of reuse has become irrelevant.

Examples > Prose

The class sessions for which I have written the longest and most complete notes for my students (and me) tend to be the ones for which I have the fewest, or least well-developed, code examples. The reverse is also true: lots of good examples and code tends to mean smaller class notes. Sometimes that is because I run out of time to write much prose to accompany the session. Just as often, though, it's because the examples give us plenty to do live in class, where the learning happens in the flow of writing and examining code.

This confirms something I've observed over many years of teaching: Examples First tends to work better for most students, even for those of us who fancy ourselves top-down thinkers. Good examples and code exercises can create the conditions in which abstract knowledge can be learned. This is a sturdy pedagogical pattern.

Concepts and Patterns

There is so, so much to CSS! HTML itself has a lot of details for new coders to master before they reach fluency. Many of the websites aimed at teaching and presenting these topics quickly begin to feel like a downpour of information, even when the authors try to organize it all. It's too easy to fall into, "And then there's this other property...".

After a few weeks, I've settled into trying to help students learn two kinds of things for each topic:

  • a couple of basic concepts or principles
  • a few helpful patterns
My problem is that I am still new enough to modern web design that identifying either in advance is a challenge. My understanding is constantly evolving, so my examples and notes are evolving, too. I will be better next time, or so I hope.

~~~~~

We are only five weeks into a fifteen-week semester, so take any conclusions I draw with a grain of salt. We also haven't gotten to JavaScript yet; teaching and learning it will present a different sort of challenge than HTML and CSS do for students with little or no programming experience. Maybe I will make time to write up our experiences with JavaScript in a few weeks.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

July 10, 2023 12:28 PM

What We Know Affects What We See

Last time I posted this passage from Shop Class as Soulcraft, by Matthew Crawford:

Countless times since that day, a more experienced mechanic has pointed out to me something that was right in front of my face, but which I lacked the knowledge to see. It is an uncanny experience; the raw sensual data reaching my eye before and after are the same, but without the pertinent framework of meaning, the features in question are invisible. Once they have been pointed out, it seems impossible that I should not have seen them before.

We perceive in part based on what we know. A lack of knowledge can prevent us from seeing what is right in front of us. Our brains and eyes work together, and without a perceptual frame, they don't make sense of the pattern. Once we learn something, our eyes -- and brains -- can.

This reminds me of a line from the movie The Santa Clause, which my family watched several times when my daughters were younger. The new Santa Claus is at the North Pole, watching magical things outside his window, and comments to the elf who's been helping him, "I see it, but I don't believe it." She replies that adults don't understand: "Seeing isn't believing; believing is seeing." As a mechanic, Crawford came to understand that knowing is seeing.

Later in the book, Crawford describes another way that knowing and perceiving interact with one another, this time with negative results. He had been struggling to figure out why there was no spark at the spark plugs in his VW Bug, and his father -- an academic, not a mechanic -- told him about Ohm's Law:

Ohm's law is something explicit and rulelike, and is true in the way that propositions are true. Its utter simplicity makes it beautiful; a mind in possession of this equation is charmed with a sense of its own competence. We feel we have access to something universal, and this affords a pleasure that is quasi-religious, perhaps. But this charm of competence can get in the way of noticing things; it can displace, or perhaps hamper the development of, a different kind of knowledge that may be difficult to bring to explicit awareness, but is superior as a practical matter. Its superiority lies in the fact that it begins with the typical rather than the universal, so it goes more rapidly and directly to particular causes, the kind that actually tend to cause ignition problems.

Rule-based, universal knowledge imposes a frame on the scene. Unfortunately, its universal nature can impede perception by blinding us to the particular details of the situation we are actually in. Instead of seeing what is there, we see the scene as our knowledge would have it.

[image: the cover of the book Drawing on the Right Side of the Brain]

This reminds me of a story and a technique from the book Drawing on the Right Side of the Brain, which I first wrote about in the earliest days of this blog. When asked to draw a chair, most people barely even look at the chair in front of them. Instead, they start drawing their idea of a chair, supplemented by a few details of the actual chair they see. That works about as well as diagnosing an engine by examining the engine in your mind's eye rather than the mess of parts in front of you.

In that blog post, I reported my experience with one of Edwards's techniques for seeing the chair, drawing the negative space:

One of her exercises asked the student to draw a chair. But, rather than trying to draw the chair itself, the student is to draw the space around the chair. You know, that little area hemmed in between the legs of the chair and the floor; the space between the bottom of the chair's back and its seat; and the space that is the rest of the room around the chair. In focusing on these spaces, I had to actually look at the space, because I don't have an image in my brain of an idealized space between the bottom of the chair's back and its seat. I had to look at the angles, and the shading, and that flaw in the seat fabric that makes the space seem a little ragged.

Sometimes, we have to trick our eyes into seeing, because otherwise our brains tell us what we see before we actually look at the scene. Abstract universal knowledge helps us reason about what we see, but it can also impede us from seeing in the first place.

What we know both enables and hampers what we perceive. This idea has me thinking about how my students this fall, non-CS majors who want to learn how to develop web sites, will encounter the course. Most will be novice programmers who don't know what they see when they are looking at code, or perhaps even at a page rendered in the browser. Debugging code will be a big part of their experience this semester. Are there exercises I can give them to help them see accurately?

As I said in my previous post, there's lots of good stuff happening in my brain as I read this book. Perhaps more posts will follow.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

December 22, 2022 1:21 PM

The Ability to Share Partial Results Accelerated Modern Science

This passage is from Lewis Thomas's The Lives of a Cell, in the essay "On Societies as Organisms":

The system of communications used in science should provide a neat, workable model for studying mechanisms of information-building in human society. Ziman, in a recent "Nature" essay, points out, "the invention of a mechanism for the systematic publication of fragments of scientific work may well have been the key event in the history of modern science." He continues:
A regular journal carries from one research worker to another the various ... observations which are of common interest. ... A typical scientific paper has never pretended to be more than another little piece in a larger jigsaw -- not significant in itself but as an element in a grander scheme. The technique of soliciting many modest contributions to the store of human knowledge has been the secret of Western science since the seventeenth century, for it achieves a corporate, collective power that is far greater than any one individual can exert [italics mine].

In the 21st century, sites like arXiv further lowered the barrier to publishing and reading the work of other scientists. So did blogs, where scientists could post even smaller, fresher fragments of knowledge. Blogs also democratized science, by enabling scientists to explain results for a wider audience and at greater length than journals allow. Then came social media sites like Twitter, which made it even easier for laypeople and scientists in other disciplines to watch -- and participate in -- the conversation.

I realize that this blog post quotes an essay that quotes another essay. But I would never have seen the Ziman passage without reading Thomas. Perhaps you would not have seen the Thomas passage without reading this post? When I was in college, the primary way I learned about things I didn't read myself was by hearing about them from classmates. That mode of sharing puts a high premium on having the right kind of friends. Now, blogs and social media extend our reach. They help us share ideas and inspirations, as well as helping us to collaborate on science.

~~~~

I first mentioned The Lives of a Cell a couple of weeks ago, in If only ants watched Netflix.... This post may not be the last to cite the book. I find something quotable and worth further thought every few pages.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

December 04, 2022 9:18 AM

If Only Ants Watched Netflix...

In the essay "On Societies as Organisms", Lewis Thomas says that we "violate science" when we try to read human meaning into the structures and behaviors of insects. But it's hard not to:

Ants are so much like human beings as to be an embarrassment. They farm fungi, raise aphids as livestock, launch armies into wars, use chemical sprays to alarm and confuse enemies, capture slaves. The families of weaver ants engage in child labor, holding their larvae like shuttles to spin out the thread that sews the leaves together for their fungus gardens. They exchange information ceaselessly. They do everything but watch television.

I'm not sure if humans should be embarrassed for still imitating some of the less savory behaviors of insects, or if ants should be embarrassed for reflecting some of the less savory behaviors of humans.

Biology has never been my forte, so I've read and learned less about it than many other sciences. Enjoying chemistry a bit at least helped keep me within range of the life sciences. I was fortunate to grow up in the Digital Age.

But with many people thinking the 21st century will be the Age of Biology, I feel like I should get more in tune with the times. I picked up Thomas's now-classic The Lives of a Cell, in which the quoted essay appears, as a brief foray into biological thinking about the world. I'm only a few pages in, but it is striking a chord. I can imagine so many parallels with computing and software. Perhaps I can be as at home in the 21st century as I was in the 20th.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

December 03, 2022 2:34 PM

Why Teachers Do All That Annoying Stuff

Most people, when they become teachers, tell themselves that they won't do all the annoying things that their teachers did. If they teach for very long, though, they almost all find themselves slipping back to practices they didn't like as a student but which they now understand from the other side of the fence. Dynomight has written a nice little essay explaining why. Like deadlines. Why have deadlines? Let students learn and work at their own pace. Grade what they turn in, and let them re-submit their work later to demonstrate their newfound learning.

Indeed, why not? Because students are clever and occasionally averse to work. A few of them will re-invent a vexing form of the ancient search technique "Generate and Test". From the essay:

  1. Write down some gibberish.
  2. Submit it.
  3. Make a random change, possibly informed by feedback on the last submission.
  4. Resubmit it. If the grade improved, keep it, otherwise revert to the old version.
  5. Goto 3.

You may think this is a caricature, but I see this pattern repeated even in the context of weekly homework assignments. A student will start early and begin a week-long email exchange where they eventually evolve a solution that they can turn in when the assignment is due.
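Written out, the students' strategy really is the classic search technique. Here is a minimal sketch in Racket, where grade and random-change are hypothetical stand-ins for the instructor's feedback and the student's random edits:

    ;; a hedged sketch of generate-and-test; grade and random-change
    ;; are hypothetical stand-ins for feedback and random edits
    (define (generate-and-test submission best-grade)
      (let* ((candidate (random-change submission))      ; make a random change
             (new-grade (grade candidate)))              ; resubmit it
        (if (> new-grade best-grade)
            (generate-and-test candidate new-grade)      ; improved: keep it
            (generate-and-test submission best-grade)))) ; else revert; goto 3

Note that the loop has no exit condition. Like the real thing, it stops only when the deadline arrives.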

I recognize that these students are responding in a rational way to the forces they face: usually, uncertainty and a lack of the basic understanding needed to even start the problem. My strategy is to try to engage them early on in the conversation in a way that helps them build that basic understanding and to quiet their uncertainty enough to make a more direct effort to solve the problem.

Why even require homework? Most students and teachers want grades to reflect the student's level of mastery. If we eliminate homework, or make it optional, students have the opportunity to demonstrate their mastery on the final exam or final project. Why indeed? As the essay says:

But just try it. Here's what will happen:
  1. Like most other humans, your students will be lazy and fallible.
  2. So many of them will procrastinate and not do the homework.
  3. So they won't learn anything.
  4. So they will get a terrible grade on the final.
  5. And then they will blame you for not forcing them to do the homework.

Again, the essay is written in a humorous tone that exaggerates the foibles and motivations of students. However, I have been living a variation of this pattern in my compilers course over the last few years. Here's how things have evolved.

I assign the compiler project as six stages of two weeks each. At the end of the semester, I always ask students for ways I might improve the course. After a few years teaching the course, students began to tell me that they found themselves procrastinating at the start of each two-week cycle and then having to scramble in the last few days to catch up. They suggested I require future students to turn something in at the end of the first week, as a way to get them started working sooner.

I admired their self-awareness and added a "status check" at the midpoint of each two-week stage. The status check was not to be graded, but to serve as a milepost they could aim for in completing that cycle's work. The feedback I provided, informal as it was, helped them stay on course, or get back on course, if they had missed something important.

For several years, this approach worked really well. A few teams gamed the system, of course (see generate-and-test above), but by and large students used the status checks as intended. They were able to stay on track time-wise and to get some early feedback that helped them improve their work. Students and professor alike were happy.

Over the last couple of years, though, more and more teams have begun to let the status checks slide. They are busy, overburdened in other courses or distracted by their own projects, and ungraded work loses priority. The result is exactly what the students who recommended the status checks knew would happen: procrastination and a mad scramble in the last few days of the stage. Unfortunately, this approach can lead a team to fall farther and farther behind with each passing stage. It's hard to produce a complete working compiler under these conditions.

Again, I recognize that students usually respond in a rational way to the forces they face. My job now is to figure out how we might remove those forces, or address them in a more productive way. I've begun thinking about alternatives, and I'll be debriefing the current offering of the course with my students over the next couple of weeks. Perhaps we can find something that works better for them.

That's certainly my goal. When a team succeeds at building a working compiler and we use it to compile and run an impressive program, there's no feeling quite as joyous for a programmer, or a student, or a professor. We all want that feeling.

Anyway, check out the full essay for an entertaining read that also explains quite nicely that teachers are usually responding in a rational way to the forces they face, too. Cut them a little slack.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Patterns, Teaching and Learning

October 30, 2022 9:32 AM

Recognize

From Robin Sloan's newsletter:

There was a book I wanted very badly to write; a book I had been making notes toward for nearly ten years. (In my database, the earliest one is dated December 13, 2013.) I had not, however, set down a single word of prose. Of course I hadn't! Many of you will recognize this feeling: your "best" ideas are the ones you are most reluctant to realize, because the instant you begin, they will drop out of the smooth hyperspace of abstraction, apparate right into the asteroid field of real work.

I can't really say that there is a book I want very badly to write. In the early 2000s I worked with several colleagues on elementary patterns, and we brainstormed writing an intro CS textbook built atop a pattern language. Posts from the early days of this blog discuss some of this work from ChiliPLoP, I think. I'm not sure that such a textbook could ever have worked in practice, but I think writing it would have been a worthwhile experience anyway, for personal growth. But writing such a book takes a level of commitment that I wasn't able to make.

That experience is one of the reasons I have so much respect for people who do write books.

While I do not have a book for which I've been making notes in recent years, I do recognize the feeling Sloan describes. It applies to blog posts and other small-scale writing. It also applies to new courses one might create, or old courses one might reorganize and teach in a very different way.

I've been fortunate to be able to create and re-create many courses over my career. I also have some ideas that sit in the back of my mind because I'm a little concerned about the commitment they will require, the time and the energy, the political wrangling. I'm also aware that the minute I begin to work on them, they will no longer be perfect abstractions in my mind; they will collide with reality and require compromises and real work.

(TIL the word "apparate". I'm not sure how I feel about it yet.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Personal, Teaching and Learning

July 28, 2022 11:48 AM

A "Teaching with Patterns" Workshop

[image: the logo of PLoP 2022, a red block with block letters, PLoP in white and 2022 in black]

Yesterday afternoon I attended a Teaching with Patterns workshop that is part of PLoP 2022.

When the pandemic hit in 2020, PLoP went virtual, as did so many conferences. It was virtual last year and will be again this year. On the spur of the moment, I attended a couple of the 2020 keynote talks online. Without a real commitment to the conference, though, I let my day-to-day teaching and admin keep me from really participating. (You may recall that I did it right for StrangeLoop last year, as evidenced by several blog posts ending here.)

When I peeked in on PLoP virtually a couple of years ago, it had already been too many years away from my last PLoP, which itself came after too many years away! Which is to say: I miss PLoP and my friends and colleagues in the software patterns world.

PLoP 2022 itself is in October. Teaching with Patterns was a part of PLoPourri, a series of events scheduled throughout the year in advance of the conference proper. It was organized by Michael Weiss of Carleton University in Ottawa, a researcher on open source software.

Upon joining the Zoom session for the workshop, I was pleased to see Michael, a colleague I first met at PLoP back in the 1990s, and Pavel Hruby, whom I met at one of the first ChiliPLoPs many years ago. There were a handful of other folks participating, too, bringing experience from a variety of domains. One, Curt McNamara, has significant experience using patterns in his course on sustainable graphic design.

The workshop was a relaxed affair. First, participants shared their experiences teaching with patterns. Then we all discussed our various approaches and shared lessons we have learned. Michael gave a few opening remarks and then asked four questions to prime the conversation:

  • How do you use patterns when you teach a topic?
  • What format do you use to teach with patterns?
  • Do you teach with your own or existing patterns?
  • What works best/worst?
He gave us a few minutes to think about our answers and to post anything we liked to the workshop's whiteboard.

My answers may sound familiar to anyone who has read my blog long enough to hear me talk about the courses I teach and, going way back, to my work at PLoP and ChiliPLoP.

I use patterns to teach design techniques and trade-offs. It's easy to lecture students on particular design solutions, but that's not very effective at helping students learn how to do design, to make choices based on the forces at play in any given problem. Patterns help me to think about the context in which a technique applies, the tradeoffs it helps us to make, and the state of the resulting solution.

I rarely have students read patterns themselves, at least in one of the stylized pattern forms. Instead, I integrate the patterns into the stories I tell, into the conversations I have with my students about problems and solutions, and finally into the session notes I write up for them and me.

These days, I mostly teach courses on programming languages and compilers. I have written my own patterns for the programming languages course and use them to structure two units, on structurally recursive programming and closures for managing state. I've long dreamed of writing more patterns for the compiler course, which seems primed for the approach: So many of the structures we build when writing a compiler are the result of decades of experience, with tradeoffs that are often left implicit in textbooks and in the heads of experts.

I have used my own patterns as well as patterns from the literature when teaching other courses as well, back in the days when I taught three courses a semester. When I taught knowledge-based systems every year, I used patterns that I wrote to document the "generic task" approach to KBS in which my work lived. In our introductory OOP courses, I tended to use patterns from the literature both as a way to teach design techniques and trade-offs and as way to prepare students to graduate into a world where OO design patterns were part of the working vocabulary.

I like how patterns help me discuss code with students and how they help me evaluate solutions -- and communicate how I evaluate solutions.

That is more information than I shared yesterday... This was only a one-and-a-half-hour workshop, and all of the participants had valuable experience to share.

I didn't take notes on what others shared or on the conversations that followed, but I did note a couple of things in general. Several people use a set of patterns or even a small pattern language to structure their entire course. I have done this at the unit level of a course, never at the scale of the course. My compiler course would be a great candidate here, but doing so would require that I write a lot of new material to augment and replace some of what I have now.

Likewise, a couple of participants have experimented with asking students to write patterns themselves. My teaching context is different than theirs, so I'm not sure this is something that can work for me in class. I've tried it a time or two with master's-level students, though, and it was educational for both the student and me.

I did grab all the links posted in the chat room so that I can follow up on them and read more later.

I also learned one new tech thing at the workshop: Padlet, a real-time sharing platform that runs in the browser. Michael used a padlet as the workshop's whiteboard, in lieu of Zoom's built-in tool (which I've still never used). The padlet made it quite straightforward to make and post cards containing text and other media. I may look into it as a classroom tool in the future.

Padlet also made it pretty easy to export the whiteboard in several formats. I grabbed a PDF copy of the workshop whiteboard for future reference. I considered linking to that PDF here, but I didn't get permission from the group first. So you'll just have to trust me.

Attending this workshop reminded me that I do not get to teach enough, at least not formal classroom teaching, and I miss it! As I told Michael in a post-workshop thank-you e-mail:

That was great fun, Michael. Thanks for organizing! I learned a few things, but mostly it is energizing simply to hear others talk about what they are doing.
"Energizing" really is a good word for what I experienced. I'm hyped to start working in more detail on my fall course, which starts in less than four weeks.

I'm also energized to look into the remaining PLoPourri events between now and the conference. (I really wish now that I had signed up for The Future of Education workshop in May, which was on "patterns for reshaping learning and the campus".) PLoP itself is on a Monday this year, in late October, so there is a good chance I can make it as well.

I told Michael that Teaching with Patterns would be an excellent regular event at PLoP. He seemed to like that thought. There is a lot of teaching experience out there to be shared -- and a lot of energy to be shared as well.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

June 08, 2022 1:51 PM

Be a Long-Term Optimist and a Short-Term Realist

Before I went to bed last night, I stumbled across actor Robert De Niro speaking with Stephen Colbert on The Late Show. De Niro is, of course, an Oscar winner with fifty years working in films. I love to hear experts talk about what they do, so I stayed up a few extra minutes.

I think Colbert had just asked De Niro to give advice to actors who were starting out today, because De Niro was demurring: he didn't like to give advice, and everyone's circumstances are different. But then he said that, when he himself was starting out, he went on lots of auditions but always assumed that he wasn't going to get the job. There were so many ways not to get a job, so there was no reason to get his hopes up.

Colbert related that anecdote to his own experience getting started in show business. He said that whenever he had an acting job, he felt great, and whenever he didn't have a job, pessimism set in: he felt like he was never going to work again. De Niro immediately said, "oh, I never felt that way". He always felt like he was going to make it. He just had to keep going on auditions.

There was a smile on Colbert's face. He seemed to have trouble squaring De Niro's attitude toward auditions with his claimed confidence about eventual success. Colbert moved on with his interview.

It occurred to me that the combination of attitudes expressed by De Niro is a healthy, almost necessary, way to approach big goals. In the short term, accept that each step is uncertain and unlikely to pay off. Don't let those failures get you down; they are the price of admission. For the long term, though, believe deeply that you will succeed. That's the spirit you need to keep taking steps, trying new things when old things don't seem to work, and hanging around long enough for success to happen.

De Niro's short descriptions of his own experiences revealed how both sides of his demeanor contributed to him ultimately making it. He never knew what casting agents, directors, and producers were looking for, so he was willing to read every part in several different ways. Even though he didn't expect to get the job, maybe one of those people would remember him and mention him to a friend in the business, and maybe that connection would pay off. All he could do was audition.

The self-assurance De Niro seemed to feel almost naturally reminded me of things that Viktor Frankl and John McCain said about their ability to survive time in war camps. Somehow, they were able to maintain a confidence that they would eventually be free again. In the end, they were lucky to survive, but their belief that they would survive had given them a strength to persevere through much worse treatment than simply being rejected for a part in a movie. That perseverance helped them stay alive and take actions that would leave them in a position to be lucky.

I realize that the story De Niro tells, like those of Frankl and McCain, is potentially suspect due to survivor bias. We don't get to hear from people who believed that they would make it as actors but never did. Even so, their attitude seems like a pragmatic one to implement, if we can manage it: be a long-term optimist and a short-term realist. Do everything you can to hang around long enough for fortune to find you.

Like De Niro, I am not much one to give advice. In the haze of waking up and going back to sleep last night, though, I think his attitude gives us a useful model to follow.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

April 16, 2022 2:32 PM

More Fun, Better Code: A Bug Fix for my Pair-as-Set Implementation

In my previous post, I wrote joyously of a fun bit of programming: implementing ordered pairs using sets.

Alas, there was a bug in my solution. Thanks to Carl Friedrich Bolz-Tereick for finding it so quickly:

Heh, this is fun, great post! I wonder what happens though if a = b? Then the set is {{a}}. first should still work, but would second fail, because the set difference returns the empty set?

Carl Friedrich had found a hole in my small set of tests, which sufficed for my other implementations because the data structures I used have separate cells for the first and second parts of the pair. A set will do that only if the first and second parts are different!

Obviously, enforcing a != b is unacceptable. My first thought was to guard second()'s behavior in code:

    if my formula finds a result
       then return that result
       else return (first p)

This feels like a programming hack. Furthermore, it results in an impure implementation: it uses a boolean value and an if expression. But it does seem to work. That would have to be good enough unless I could find a better solution.

Perhaps I could use a different representation of the pair. Helpfully, Carl Friedrich followed up with pointers to several blog posts by Mark Dominus from last November that looked at the set encoding of ordered pairs in some depth. One of those posts taught me about another possibility: Wiener pairs. The idea is this:

    (a,b) = { {{a},∅}, {{b}} }

Dominus shows how Wiener pairs solve the a == b edge case in Kuratowski pairs, which makes them a viable alternative.

Would I ever have stumbled upon this representation, as I did onto the Kuratowski pairs? I don't think so. The representation is more complex, with higher-order sets. Even worse for my implementation, the formulas for first() and second() are much more complex. That makes it a lot less attractive to me, even if I never want to show this code to my students. I myself like to have a solid feel for the code I write, and this is still at the fringe of my understanding.

Fortunately, as I read more of Dominus's posts, I found there might be a way to save my Kuratowski-style solution. It turns out that the if expression I wrote above parallels the set logic used to implement a second() accessor for Kuratowski pairs: a choice between the set that works for a != b pairs and a fallback to a one-set solution.

From this Dominus post, we see the correct set expression for second() is:

[image: the correct set expression for the second() function]

... which can be simplified to:

[image: the second() expression simplified to a logical statement]

The latter expression is useful for reasoning about second(), but it doesn't help me implement the function using set operations. I finally figured out what the former equation was saying: if (∪ p) is the same as (∩ p), then the answer comes from (∩ p); otherwise, it comes from their difference.

I realized then that I could not write this function purely in terms of set operations. The computation requires the logic used to make this choice. I don't know where the boundary lies between pure set theory and the logic in the set comprehension, but a choice based on a set-empty? test is essential.

In any case, I think I can implement my understanding of the set expression for second() as follows. If we define a helper, union-minus-intersection, as:

    (define (union-minus-intersection p)
      (set-minus (apply set-union p)
                 (apply set-intersect p)))
then:
    (second p) = (if (set-empty? (union-minus-intersection p))
                     (set-elem (apply set-intersect p))
                     (set-elem (union-minus-intersection p)))

The then clause is the same as the body of first(), which must be the case: if the union of the sets is the same as their intersection, then the answer comes from the intersection, just as first()'s answer does.

It turns out that this solution essentially implements my first idea above: if my formula from the previous blog entry finds a result, then return that result. Otherwise, return first(p). The circle closes.

Success! Or, I should say: Success!(?) After having a bug in my original solution, I need to stay humble. But I think this does it. It passes all of my original tests as well as tests for a == b, which is the main edge case in all the discussions I have now read about set implementations of pairs. Here is a link to the final code file, if you'd like to check it out. I include the two simple test scenarios, for both a != b and a == b, as Rackunit tests.
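In case you'd like to see them without clicking through, the two scenarios look something like this in Rackunit (a sketch, assuming the make-pair, first, and second definitions from the linked file):

    (require rackunit)

    (define p1 (make-pair 'x 'y))    ; the a != b case
    (check-equal? (first p1) 'x)
    (check-equal? (second p1) 'y)

    (define p2 (make-pair 'x 'x))    ; the a == b edge case that bit me
    (check-equal? (first p2) 'x)
    (check-equal? (second p2) 'x)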

So, all in all, this was a very good week. I got to have some fun programming, twice. I learned some set theory, with help from a colleague on Twitter. I was also reacquainted with Mark Dominus's blog, the RSS feed for which I had somehow lost track of. I am glad to have it back in my newsreader.

This experience highlights one of the selfish reasons I like for students to ask questions in class. Sometimes, they lead to learning and enjoyment for me as well. (Thanks, Henry!) It also highlights one of the reasons I like Twitter. The friends we make there participate in our learning and enjoyment. (Thanks, Carl Friedrich!)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

April 13, 2022 2:27 PM

Programming Fun: Implementing Pairs Using Sets

Yesterday's session of my course was a quiz preceded by a fun little introduction to data abstraction. As part of that, I use a common exercise. First, I define a simple API for ordered pairs: (make-pair a b), (first p), and (second p). Then I ask students to brainstorm all the ways that they could implement the API in Racket.

They usually have no trouble thinking of the data structures they've been using all semester. Pairs, sure. Lists, yes. Does Racket have a hash type? Yes. I remind my students about vectors, which we have not used much this semester. Most of them haven't programmed in a language with records yet, so I tell them about structs in C and show them Racket's struct type. This example has the added benefit of showing that Racket generates constructor and accessor functions that do the work for us.
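For instance, one struct declaration gives us nearly the whole API for free; a quick sketch (the wrappers just adapt Racket's generated names to the API above):

    ;; Racket generates the constructor pair and the accessors
    ;; pair-first and pair-second from this one declaration
    (struct pair (first second))

    (define (make-pair a b) (pair a b))
    (define (first p)  (pair-first p))
    (define (second p) (pair-second p))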

The big payoff, of course, comes when I ask them about using a Racket function -- the data type we have used most this semester -- to implement a pair. I demonstrate three possibilities: a selector function (which also uses booleans), message passaging (which also uses symbols), and pure functions. Most students look on these solutions, especially the one using pure functions, with amazement. I could see a couple of them actively puzzling through the code. That is one measure of success for the session.
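For readers who want to puzzle through it too, the pure-function version can be as small as this (a sketch of one common encoding, not necessarily the exact code from class):

    ;; the pair is a function that closes over a and b; each accessor
    ;; passes in a function that selects the item it wants
    (define (make-pair a b)
      (lambda (selector) (selector a b)))

    (define (first p)  (p (lambda (a b) a)))
    (define (second p) (p (lambda (a b) b)))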

This year, one student asked if we could use sets to implement a pair. Yes, of course, but I had never tried that... Challenge accepted!

While the students took their quiz, I set myself a problem.

The first question is how to represent the pair. (make-pair a b) could return {a, b}, but with no order we can't know which is first and which is second. My students and I had just looked at a selector function, (if arg a b), which can distinguish between the first item and the second. Maybe my pair-as-set could know which item is first, and we could figure out the second is the other.

So my next idea was (make-pair a b) = {{a, b}, a}. Now the pair knows both items, as well as which comes first, but I have a type problem. I can't apply set operations to all members of the set and will have to test the type of every item I pull from the set. This led me to try (make-pair a b) = {{a}, {a, b}}. What now?

My first attempt at writing (first p) and (second p) started to blow up into a lot of code. Our set implementation provides a way to iterate over the members of a set using accessors named set-elem and set-rest. In fine imperative fashion, I used them to start whacking out a solution. But the growing complexity of the code made clear to me that I was programming around sets, but not with sets.

When teaching functional programming style this semester, I have been trying a new tactic. Whenever we face a problem, I ask the students, "What function can help us here?" I decided to practice what I was preaching.

Given p = {{a}, {a, b}}, what function can help me compute the first member of the pair, a? Intersection! I just need to retrieve the singleton member of ∩ p:

    (first p) = (set-elem (apply set-intersect p))

What function can help me compute the second member of the pair, b? This is a bit trickier... I can use set subtraction, {a, b} - {a}, but I don't know which element in my set is {a, b} and which is {a}. Serendipitously, I just solved the latter subproblem with intersection.

Which function can help me compute {a, b} from p? Union! Now I have a clear path forward: (∪ p) – (∩ p):

    (second p) = (set-elem
                   (set-minus (apply set-union p)
                              (apply set-intersect p)))
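The constructor does not appear above, but it is the easy part. In terms of a hypothetical make-set function that builds a set from its arguments, it might look like:

    ;; make-set is hypothetical; any set constructor from the course's
    ;; set library that removes duplicates will do
    (define (make-pair a b)
      (make-set (make-set a)
                (make-set a b)))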

I implemented these three functions, ran the tests I'd been using for my other implementations... and they passed. I'm not a set theorist, so I was not prepared to prove my solution correct, but the tests going all green gave me confidence in my new implementation.

Last night, I glanced at the web to see what other programmers had done for this problem. I didn't run across any code, but I did find a question and answer on the mathematics StackExchange that discusses the set theory behind the problem. The answer refers to something called "the Kuratowski definition", which resembles my solution. Actually, I should say that my solution resembles this definition, which is an established part of set theory. From the StackExchange page, I learned that there are other ways to express a pair as a set, though the alternatives look much more complex. I didn't know the set theory but stumbled onto something that works.

My solution is short and elegant. Admittedly, I stumbled into it, but at least I was using the tools and thinking patterns that I've been encouraging my students to use.

I'll admit that I am a little proud of myself. Please indulge me! Department heads don't get to solve interesting problems like this during most of the workday. Besides, in administration, "elegant" and "clever" solutions usually backfire.

I'm guessing that most of my students will be unimpressed, though they can enjoy my pleasure vicariously. Perhaps the ones who were actively puzzling through the pure-function code will appreciate this solution, too. And I hope that all of my students can at least see that the tools and skills they are learning will serve them well when they run into a problem that looks daunting.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

March 14, 2022 11:55 AM

Taking Plain Text Seriously Enough

Or, Plain Text and Spreadsheets -- Giving Up and Giving In

One day a couple of weeks ago, a colleague and I were discussing a student. They said, "I think I had that student in class a few semesters ago, but I can't find the semester with his grade."

My first thought was, "I would just use grep to...". Then I remembered that my colleagues all use Excel for their grades.

The next day, I saw Derek Sivers' recent post that gives the advice I usually give when asked: Write plain text files.

Over my early years in computing, I lost access to a fair bit of writing that was done using various word processing applications. All stored data in proprietary formats. The programs are gone, or have evolved well beyond the version and OS I was using at the time, and my words are locked inside. Occasionally I manage to pull something useful out of one of those old files, but for the most part they are a graveyard.

No matter how old, the code and essays I wrote in plaintext are still open to me. I love being able to look at programs I wrote for my undergrad courses (including the first parser I ever wrote, in Pascal) and my senior honors project (an early effort to implement Swiss System pairings for chess tournaments). All those programs have survived the move from 5-1/4" floppies, through several different media, and still open just fine in emacs. So do the files I used to create our wedding invitations, which I wrote in troff(!).

The advice to write in plain text transfers nicely from proprietary formats on our personal computers to tools that run out on the web. The same week that Sivers posted his piece, a prolific Goodreads reviewer reported losing all his work when Goodreads had a glitch. The reviewer may have written in plain text, but his reviews are buried in someone else's system.

I feel bad for non-tech folks when they lose their data to a disappearing format or app. I'm more perplexed when a CS prof or professional programmer does. We know about plain text; we know the history of tools; we know that our plain text files will migrate into the future with us, usable in emacs and vi and whatever other plain text editors we have available there.

I am not blind, though, to the temptation. A spreadsheet program does a lot of work for us. Put some numbers here, add a formula or two over there, and boom! your grades are computed and ready for entry -- into the university's appalling proprietary system, where the data goes to die. (Try to find a student's grade from a forgotten semester within that system. It's a database, yet there are no affordances available to users for the simplest tasks...)

All of my grade data, along with most of what I produce, is in plain text. One cost of this choice is that I have to write my own code to process it. This takes a little time, but not all that much, to be honest. I don't need all of Numbers or Excel; all I need most of the time is the ability to do simple computations and a bit of sorting. If I use a comma-separated values format, all of my programming languages have tools to handle parsing, so I don't even have to do much input processing to get started. If I use Racket for my processing code, throwing a few parens into the mix enables Racket to read my files into lists that are ready for mapping and filtering to my heart's content.
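As a sketch of how little code that takes, here is what reading and averaging might look like, assuming a hypothetical gradebook file with one parenthesized record per student:

    ;; grades.rkt might hold lines such as: (jsmith 88 92 75)
    (define (read-records filename)
      (with-input-from-file filename
        (lambda ()
          (let loop ((record (read)) (records '()))
            (if (eof-object? record)
                (reverse records)
                (loop (read) (cons record records)))))))

    ;; assumes the student appears in the records
    (define (average-for student records)
      (let ((scores (cdr (assq student records))))
        (exact->inexact (/ (apply + scores) (length scores)))))

    (average-for 'jsmith (read-records "grades.rkt"))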

Back when I started professoring, I wrote all of my grading software in whatever language we were using in the class in that semester. That seemed like a good way for me to live inside the language my students were using and remind myself what they might be feeling as they wrote programs. One nice side effect of this is that I have grading code written in everything from Cobol to Python and Racket. And data from all those courses is still searchable using grep, filterable using cut, and processable using any code I want to write today.

That is one advantage of plain text I value that Sivers doesn't emphasize: flexibility. Not only will plain text survive into the future... I can do anything I want with it. I don't often feel powerful in this world, but I feel powerful when I'm making data work for me.

In the end, I've traded the quick and easy power of Excel and its ilk for the flexibility and power of plain text, at a cost of writing a little code for myself. I like writing code, so this sort of trade is usually congenial to me. Once I've made the trade, I end up building a set of tools that I can reuse, or mold to a new task with minimal effort. Over time, the cost reaches a baseline I can live with even when I might wish for a spreadsheet's ease. And on the day I want to write a complex function to sort a set of records, one that befuddles Numbers's sorting capabilities, I remember why I like the trade. (That happened yet again last Friday.)

A recent tweet from Michael Nielsen quotes physicist Steven Weinberg as saying, "This is often the way it is in physics -- our mistake is not that we take our theories too seriously, but that we do not take them seriously enough." I think this is often true of plain text: we in computer science forget to take its value and power seriously enough. If we take it seriously, then we ought to be eschewing the immediate benefits of tools that lock away our text and data in formats that are hard or impossible to use, or that may disappear tomorrow at some start-up's whim. This applies not only to our operating system and our code but also to what we write and to all the data we create. Even if it means foregoing our commercial spreadsheets except for the most fleeting of tasks.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

March 03, 2022 2:22 PM

Knowing Context Gives You Power, Both To Choose And To Make

We are at the point in my programming languages course where my students have learned a little Racket, a little functional programming, and a little recursive programming over inductive datatypes. Even though I've been able to connect many of the ideas we've studied to programming tasks out in the world that they care about themselves, a couple of students have asked, "Why are we doing this again?"

This is a natural question, and one I'm asked every time I teach this course. My students think that they will be heading out into the world to build software in Java or Python or C, and the ideas we've seen thus far seem pretty far removed from the world they think they will live in.

These paragraphs from near the end of Chelsea Troy's 3-part essay on API design do a nice job of capturing one part of the answer I give my students:

This is just one example to make a broader point: it is worthwhile for us as technologists to cultivate knowledge of the context surrounding our tools so we can make informed decisions about when and how to use them. In this case, we've managed to break down some web request protocols and build their pieces back up into a hybrid version that suits our needs.

When we understand where technology comes from, we can more effectively engage with its strengths, limitations, and use cases. We can also pick and choose the elements from that technology that we would like to carry into the solutions we build today.

The languages we use were designed and developed in a particular context. Knowing that context gives us multiple powers. One power is the ability to make informed decisions about the languages -- and language features -- we choose in any given situation. Another is the ability to make new languages, new features, and new combinations of features that solve the problem we face today in the way that works best in our current context.

Not knowing context limits us to our own experience. Troy does a wonderful job of demonstrating this using the history of web API practice. I hope my course can help students choose tools and write code more effectively when they encounter new languages and programming styles.

Computing changes. My students don't really know what world they will be working in in five years, or ten, or thirty. Context is a meta-tool that will serve them well.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

August 22, 2020 4:09 PM

Learning Something You Thought You Already Knew

Sandi Metz's latest newsletter is about the heuristic not to name a class after the design pattern it implements. Actually, it's about a case in which Metz wanted to name a class after the pattern it implements in her code and then realized what she had done. She decided that she either needed to have a better reason for doing it than "because it just felt right" or she needed to practice what she preaches to the rest of us. What followed was some deep thinking about what makes the rule a good one to follow and her efforts to put her conclusions in writing for the benefit of her readers.

I identify with Metz's sense of discomfort at breaking a rule when it feels right, and with her need to step back and understand the rule at a deeper level. Between her set up and her explanation, she writes:

I've built a newsletter around this rule not only because I believe that it's useful, but also because my initial attempts to explain it exposed deep holes in my understanding. This was a revelation. Had I not been writing a book, I might have hand-waved around these gaps in my knowledge forever.

People sometimes say, "If you really want to understand something, teach it to others." Metz's story is a great example of why this is really true. I mean, sure, you can learn any new area and then benefit from explaining it to someone else. Processing knowledge and putting it in your own words helps to consolidate it at the surface. But the real learning comes when you find yourself in a situation where you realize there's something you've taken for granted for months or for years, something you thought you knew, but suddenly you sense a deep hole lying under the surface of that supposed understanding. "I just know breaking the rule is the right thing to do here, but... but..."

I've been teaching long enough to have had this experience many times in many courses, covering many areas of knowledge. It can be terrifying, at least momentarily. The temptation to wave my hands and hurry past a student's question is enormous. To learn from teaching in these moments requires humility, self-awareness, and a willingness to think and work until you break through to that deeper understanding. Learning from these moments is what sets the best teachers and writers apart from the rest of us.

As you might guess from Metz's reaction to her conundrum, she's a pretty good teacher and writer. The story in the newsletter is from the new edition of her book "99 Bottles of OOP", which is now available. I enjoyed the first edition of "99 Bottles" and found it useful in my own teaching. It sounds like the second edition will be more than a cleanup; it will have a few twists that make it a better book.

I'm teaching our database systems course for the first time ever this fall. This is a brand new prep for me: I've never taught a database course before, anywhere. There are so many holes in my understanding, places where I've internalized good practices but don't grok them in the way an expert does. I hope I have enough humility and self-awareness this semester to do right by my students.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 29, 2020 11:19 AM

Programming Healthily

... or at least differently. From Inside Google's Efforts to Engineer Its Food for Healthiness:

So, instead of merely changing the food, Bakker changed the foodscape, ensuring that nearly every option at Google is healthy -- or at least healthyish -- and that the options that weren't stayed out of sight and out of mind.

This is how I've been thinking lately about teaching functional programming to my students, who have experience in procedural Python and object-oriented Java. As with deciding what we eat, how we think about problems and programs is mostly unconscious, a mixture of habit and culture. It is also something intimate and individual, perhaps especially for relative beginners who have only recently begun to know a language well enough to solve problems with some facility. But even for us old hands, the languages and mental tools we use to write programs can become personal. That makes change, and growth, hard.

In my last few offerings of our programming languages course, in which students learn Racket and functional style, I've been trying to change the programming landscape my students see in small ways whenever I can. Here are a few things I've done:

  • I've shrunk the set of primitive functions that we use in the first few weeks. A larger set of tools offers more power and reach, but it also can distract. I'm hoping that students can more quickly master the small set of tools than the larger set and thus begin to feel more accomplished sooner.

  • We work for the first few weeks on problems with less need for the tools we've moved out of sight, such as local variables and counter variables. If a program doesn't really benefit from a local variable, students will be less likely to reach for one. Instead, perhaps, they will reach for a function that can help them get closer to their goal.

  • In a similar vein, I've tried to make explicit connections between specific triggers ("We're processing a list of data, so we'll need a loop.") and new responses ("map can help us here."). We then follow up these connections with immediate and frequent practice.
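For instance, that trigger-and-response pair turns a counter-driven loop into a one-liner:

    ;; "we're processing a list of data" becomes a call to map,
    ;; not a loop with a counter variable
    (define (double-all ns)
      (map (lambda (n) (* 2 n)) ns))

    (double-all '(1 2 3 4))    ; => '(2 4 6 8)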

By keeping functional tools closer at hand, I'm trying to make it easier for these tools to become the new habit. I've also noticed how the way I speak about problems and problem solving can subtly shape how students approach problems, so I'm trying to change a few of my own habits, too.

It's hard for me to do all these things, but it's harder still when I'm not thinking about them. This feels like progress.

So far students seem to be responding well, but it will be a while before I feel confident that these changes in the course are really helping students. I don't want to displace procedural or object-oriented thinking from their minds, but rather to give them a new tool set they can bring out when they need or want to.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

January 06, 2020 3:13 PM

A Writing Game

I recently started reading posts in the archives of Jason Zweig's blog. He writes about finance for a living but blogs more widely, including quite a bit about writing itself. An article called On Writing Better: Sharpening Your Tools challenges writers to look at each word they write as "an alien object":

As the great Viennese journalist Karl Kraus wrote, "The closer one looks at a word, the farther away it moves." Your goal should be to treat every word you write as an alien object: You should be able to look at it and say, What is that doing here? Why did I use that word instead of a better one? What am I trying to say here? How can I get to where I'm going if I use such stale and lifeless words?

My mind immediately turned this into a writing game, an exercise that puts the idea into practice. Take any piece of writing.

  1. Choose a random word in the document.
  2. Change the word -- or delete it! -- in a way that improves the text.
  3. Go to 1.

Play the game for a fixed number of rounds or for a fixed period of time. A devilish alternative is to play until you get so frustrated with your writing that you can't continue. You could then judge your maturity as a writer by how long you can play in good spirits.

We could even automate the mechanics of the game by writing a program that chooses a random word in a document for us. Every time we save the document after a change, it jumps to a new word.
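
A minimal sketch of such a tool in Python, treating a "word" loosely as any whitespace-delimited token (the filename is a stand-in):

    import random

    def random_word(filename):
        # Choose a word uniformly at random from the document.
        with open(filename) as f:
            words = f.read().split()
        return random.choice(words)

    # Each time we save and re-run, the game points us at a new word.
    print(random_word("draft.txt"))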

As with most first ideas, this one can probably be improved. Perhaps we should bias word selection toward words whose replacement or deletion is most likely to improve our writing. Changing "the" or "to" doesn't offer the same payoff as changing a lazy verb or deleting an abstract adverb. Or does it? I have a lot of room to improve as a writer; maybe fixing some "the"s and "to"s is exactly what I need to do. The Three Bears pattern suggests that we might learn something by tackling the extreme form of the challenge and seeing where it leads us.

Changing or deleting a single word can improve a piece of text, but there is bigger payoff available, if we consider the selected word in context. The best way to eliminate many vague nouns is to turn them back into verbs, where they act with vigor. To do that, we will have to change the structure of the sentence, and maybe the surrounding sentences. That forces us to think even more deeply about the text than changing a lone word. It also creates more words for us to fix in following rounds!

I like programming challenges of this sort. A writing challenge that constrains me in arbitrary ways might be just what I need to take time more often to improve my work. It might help me identify and break some bad habits along the way. Maybe I'll give this a try and report back. If you try it, please let me know the results!

And no, I did not play the game with this post. It can surely be improved.

Postscript. After drafting this post, I came across another article by Zweig that proposes just such a challenge for the narrower case of abstract adverbs:

The only way to see if a word is indispensable is to eliminate it and see whether you miss it. Try this exercise yourself:
  • Take any sentence containing "actually" or "literally" or any other abstract adverb, written by anyone ever.
  • Delete that adverb.
  • See if the sentence loses one iota of force or meaning.
  • I'd be amazed if it does (if so, please let me know).

We can specialize the writing game to focus on adverbs, another part of speech, or almost any writing weakness. The possibilities...


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

October 25, 2019 3:55 PM

Enjoyment Bias in Programming

Earlier this week, I read this snippet about the benefits of "enjoyment bias" in Morgan Housel's latest blog post:

2. Enjoyment bias: An inefficient investing strategy that you enjoy will outperform an efficient one that feels like work because anything that feels like work will eventually be abandoned.
Getting anything to work requires giving it an appropriate amount of time. Giving it time requires not getting bored or burning out. Not getting bored or burning out requires that you love what you're doing, because that's when the hard parts become acceptable.

The programmer in me immediately thought, "I have this pattern." My guess is that this bias applies to a lot of things outside of investing. In software development, the choices of development methodology and programming language often benefit from enjoyment bias.

In programming as in investing, we can take this too far and hurt ourselves, our teams, and our users. Anything can be overdone. But, in general, we are more likely to stick with the hard work of building software when we enjoy the way we are building it and the tools we are using. Don't let others shame you away from what works for you.

This bias actually reminded me of a short bit from one of Paul Graham's essays on, of all things, procrastination:

I think the way to "solve" the problem of procrastination is to let delight pull you instead of making a to-do list push you. Work on an ambitious project you really enjoy, and sail as close to the wind as you can, and you'll leave the right things undone.

Delight can keep you happily working when the going gets rough, and it can pull you toward work when a lack of delight would leave you killing time on stuff that doesn't matter.

(By the way, I think that several other biases described by Housel are also useful in programming. Consider the value of reasonable ignorance, number three on his list....)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

August 30, 2019 4:26 PM

Unknown Knowns and Explanation-Based Learning

Like me, you probably see references to this classic quote from Donald Rumsfeld all the time:

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know.

I recently ran across it again in an old Epsilon Theory post that uses it to frame the difference between decision-making under risk (the known unknowns) and decision-making under uncertainty (the unknown unknowns). It's a good read.

Seeing the passage again for the umpteenth time, it occurred to me that no one ever seems to talk about the fourth quadrant in that grid: the unknown knowns. A quick web search turns up a few articles such as this one, which consider unknown knowns from the perspective of others in a community: maybe there are other people who know something that you do not. But my curiosity was focused on the first-person perspective that Rumsfeld was implying. As a knower, what does it mean for something to be an unknown known?

My first thought was that this combination might not be all that useful in the real world, such as the investing context that Ben Hunt writes about in Epsilon Theory. Perhaps it doesn't make any sense to think about things you don't know that you know.

As a student of AI, though, I suddenly made an odd connection ... to explanation-based learning. As I described in a blog post twelve years ago:

Back when I taught Artificial Intelligence every year, I used to relate a story from Russell and Norvig when talking about the role knowledge plays in how an agent can learn. Here is the quote that was my inspiration, from Pages 687-688 of their 2nd edition:

Sometimes one leaps to general conclusions after only one observation. Gary Larson once drew a cartoon in which a bespectacled caveman, Zog, is roasting his lizard on the end of a pointed stick. He is watched by an amazed crowd of his less intellectual contemporaries, who have been using their bare hands to hold their victuals over the fire. This enlightening experience is enough to convince the watchers of a general principle of painless cooking.

I continued to use this story long after I had moved on from this textbook, because it is a wonderful example of explanation-based learning.

In a mathematical sense, explanation-based learning isn't learning at all. The new fact that the program learns follows directly from other facts and inference rules already in its database. In EBL, the program constructs a proof of a new fact and adds the fact to its database, so that it is ready-at-hand the next time it needs it. The program has compiled a new fact, but in principle it doesn't know anything more than it did before, because it could always have deduced that fact from things it already knows.
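
In code, the essence of EBL is little more than caching a derived fact. Here is a toy propositional sketch in Python -- the facts and rules are invented, and real EBL also generalizes the explanation before storing it:

    facts = {"holds-food-on-stick", "stick-is-long"}
    rules = [
        # (premises, conclusion)
        ({"holds-food-on-stick", "stick-is-long"}, "cooks-painlessly"),
    ]

    def prove(goal):
        # A potentially costly search through the inference rules.
        if goal in facts:
            return True
        return any(conclusion == goal and all(prove(p) for p in premises)
                   for premises, conclusion in rules)

    def prove_with_ebl(goal):
        # EBL: after a successful proof, compile the new fact into the
        # database so the chain of inference never has to be repeated.
        if prove(goal):
            facts.add(goal)
            return True
        return False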

As I read the Epsilon Theory article, it struck me that EBL helps a learner surface unknown knowns by using specific experiences as triggers to combine knowledge it already has into a piece of knowledge that is usable immediately, without having to repeat the (perhaps costly) chain of inference ever again. Deducing deep truths every time you need them can indeed be quite costly, as anyone who has ever looked at the complexity of search in logical inference systems can tell you.

When I begin to think about unknown knowns in this way, perhaps it does make sense in some real-world scenarios to think about things you don't know you know. If I can figure it all out, maybe I can finally make my fortune in the stock market.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

June 18, 2019 3:09 PM

Notations, Representations, and Names

In The Power of Simple Representations, Keith Devlin takes on a quote attributed to the mathematician Gauss: "What we need are notions, not notations."

While most mathematicians would agree that Gauss was correct in pointing out that concepts, not symbol manipulation, are at the heart of mathematics, his words do have to be properly interpreted. While a notation does not matter, a representation can make a huge difference.

Spot on. Devlin's opening made me think of that short video of Richard Feynman that everyone always shares, on the difference between knowing the name of something and knowing something. I've seen people mis-interpret Feynman's words in both directions. The people who share this video sometimes seem to imply that names don't matter. Others dismiss the idea as nonsense: how can you not know the names of things and claim to know anything?

Devlin's distinction makes clear the sense in which Feynman is right. Names are like notations. The specific names we use don't really matter and could be changed, if we all agreed. But the "if we all agreed" part is crucial. Names do matter as a part of a larger model, a representation of the world that relates different ideas. Names are an index into the model. We need to know them so that we can speak with others, read their literature, and learn from them.

This brings to mind an article with a specific example of the importance of using the correct name: Through the Looking Glass, or ... This is the Red Pill, by Ben Hunt at Epsilon Theory:

I'm a big believer in calling things by their proper names. Why? Because if you make the mistake of conflating instability with volatility, and then you try to hedge your portfolio today with volatility "protection" ...., you are throwing your money away.

Calling a problem by the wrong name might lead you to the wrong remedy.

Feynman isn't telling us that names don't matter. He's telling us that knowing only names isn't valuable. Names are not useful outside the web of knowledge in which they mean something. As long as we interpret his words properly, they teach us something useful.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

May 19, 2019 10:48 AM

Me and Not Me

At one point in the novel "Outline", by Rachel Cusk, a middle-aged man relates a conversation that he had with his elderly mother, in which she says:

I could weep just to think that I'll never see you again as you were at the age of six -- I would give anything, she said, to meet that six-year-old one more time.

This made me think of two photographs I keep on the wall at my office, of my grown daughters when they were young. In one, my older daughter is four; in the other, my younger daughter is two. Every once in a while, my wife asks why I don't replace them with something newer. My answer is always the same: They are my two favorite pictures in the world. When my daughters were young, they seemed to be infinite bundles of wonder: always curious, discovering things and ideas everywhere they went, making connections. They were restless in a good way, joyful, and happy. We can be all of these things as we grow into adulthood, but I experienced them so much differently as a father, watching my girls live them.

I love the people my daughters are now, and are becoming, and cherish my relationship with them. Yet, like the old woman in Cusk's story, there is a small part of me that would love to meet those little girls again. When I see one of my daughters these days, she is both that little girl, grown up, and not that little girl, a new person shaped by her world and by her own choices. The photographs on my wall keep alive memories not just of a time but also of specific people.

As I thought about Cusk's story, it occurred to me that the idea of "her and not her" does not apply only to my daughters, or to my wife, old pictures of whom I enjoy with similar intensity. I am me and not me.

I'm both the little guy who loved to read encyclopedias and shoot baskets every day, and not him. I'm not the same guy who walked into high school in a new city excited about the possibilities it offered and nervous about how I would fit in, yet I grew out of him. I am at once the person who started college as an architecture major -- who from the time he was eight years old had wanted to be an architect -- and not him. I'm not the same person who defended a Ph.D. dissertation half a life ago, but who I am owes a lot to him. I am both the man my wife married and not, being now the man that man has become.

And, yes, the father of those little girls pictured on my wall: me and not me. This is true in how they saw me then and how they see me now.

I'm not sure how thinking about this distinction will affect future me. I hope that it will help me to appreciate everyone in my life, especially my daughters and my wife, a bit more for who they are and who they have been. Maybe it will even help me be more generous to 2019 me.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Personal

April 08, 2019 11:55 AM

TDD is a Means for Delaying Intuition

In one of his "Conversations with Tyler", Tyler Cowen talks with Daniel Kahneman about intuition and its relationship to thinking fast and slow. According to Kahneman, evidence supports his position that most people have not had the experience necessary to develop intuition that is good enough for solving problems directly. So he thinks that most people, including most so-called experts, should slow down.

So I think delaying intuition is a very good idea. Delaying intuition until the facts are in, at hand, and looking at dimensions of the problem separately and independently is a better use of information.

The problem with intuition is that it forms very quickly, so that you need to have special procedures in place to control it except in those rare cases...

...

Break the decision up. It's not so much a matter of time because you don't want people to get paralyzed by analysis. But it's a matter of planning how you're going to make the decision, and making it in stages, and not acting without an intuitive certainty that you are doing the right thing. But just delay it until all the information is available.

This is one of the things that I find most helpful about test-driven design when I practice it faithfully. It's pretty easy for me to think that I know everything I need to implement a program after I've read the spec and thought about it for a few minutes. I mean, I've written a lot of code over the years... If my intuition tells me where to go, I can start right away and have the whole path ahead of me in my mind.

But how often do I really have evidence that my intuitive plan is the correct one? If I'm wrong, I'll spend a bunch of time and write a bunch of code, only later to find out I'm wrong. What's worse, all that code I've written usually ends up feeling like a constraint within which I have to work as I try to dig myself out of the mess.

Writing one test at a time and implementing just the code I need to pass it is a way to force my intuitive mind to slow down. It helps me think about the actual problem I'm solving, rather than the abstraction my expert brain infers from the spec. The program grows slowly alongside my understanding and points me in the direction of the next small step to take.
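
A tiny illustration in Python, with everyone's favorite toy spec standing in for a real one (my example, not Kahneman's): write one test, then only the code it demands.

    import unittest

    def fizzbuzz(n):
        # Just enough code to pass the current test -- no grand plan yet.
        return str(n)

    class TestFizzBuzz(unittest.TestCase):
        def test_plain_number(self):            # the first small step
            self.assertEqual(fizzbuzz(1), "1")
        # The next test, say assertEqual(fizzbuzz(3), "Fizz"), will
        # tell us exactly what code to write next.

    if __name__ == "__main__":
        unittest.main()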

TDD is a procedure I can put in place to help me control my intuition until the facts are in, and it encourages me to look at different dimensions of the problem independently as I write the code to solve them.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

April 04, 2019 4:40 PM

A Non-Software Example of Preparatory Refactoring

When I refactor my code, I most often think in terms of what Martin Fowler calls preparatory refactoring: if it's difficult to add a new feature to my program, I refactor the existing code into a state where the feature fits naturally somewhere, then I add the feature. As is often the case, Kent Beck captured this notion memorably in a tweet-sized aphorism. I was delighted to see that Martin's blog post, which I remember fondly, cites the same tweet!

Ideas from programming are usually instances of more general ideas from the broader world, specialized to a world of bits and computation. Back when I taught our object-oriented programming course every semester, I often referred my students to a web site that offered real-life examples of the various design patterns we were learning. I remember an example of Decorator that showed how we can embellish a painting with a matte or a frame, and its use of the U.S. Constitution's specification of the President to illustrate the idea of a Singleton. I can't find that site on the web just now, but there's almost surely a local copy buried in one of my course websites from way back.

The idea of refactoring is useful outside the world of software, too. Just yesterday, my dean suggested what I consider to be a preparatory refactoring.

A committee on campus is charged with proposing ways that the university can restructure its colleges and departments. With the exception of a merger of two colleges a few years ago, we have had essentially the same academic structure for my entire time here. In those years, disciplines have changed and budgets have changed, so now the administration is thinking that the university might be more efficient or more productive with a different structure. Thinking about the process from this perspective, restructuring is largely a reaction to change that has already happened.

My dean suggested another way to approach the task. Think, he said, of the new academic programs that you'd like to create in the future. We may not have money available right now to create a new major or to organize a new research center, but one day we might. What university structure would make adding this program go more smoothly and work better once in place? Which departments would the new program want to work with? Which administrative structures already in place would minimize unnecessary overhead of the new program? As much as possible, he suggested, let's try to create a new academic structure that suits the future programs we'd like to build. That will reduce friction later, which is good: Administrative friction often grinds new academic ideas to a halt before they get off the ground.

In programming terms, this is quite a bit different from the sort of refactoring I prefer. I try to practice YAGNI and refactor only for the specific feature that I want to add right now, not taking into account imagined future features I may never need. In terms of academic structure, though, this sort of up-front design makes a lot of sense. Academic structures are much harder to change than software; getting a few things right now may make future changes at the program level much easier to implement.

Thinking about academic restructuring this way has another positive side effect: it might entice faculty to be more engaged, because what we do now matters to the future we would like to build. It's not merely a reaction to past changes.

My dean is suggesting that we build academic structures now that make the changes we want to implement later (when the resources and requisite will exist) easier to implement. Building those structures now may take more work than simply responding to past changes, but it will be worth the effort when we are ready to create new programs. I think Kent and Martin might be proud.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Patterns, Software Development

February 24, 2019 9:35 AM

Normative

In a conversation with Tyler Cowen, economist John Nye expresses disappointment with the nature of discourse in his discipline:

The thing I'm really frustrated by is that it doesn't matter whether people are writing from a socialist or a libertarian perspective. Too much of the discussion of political economy is normative. It's about "what should the ideal state be?"
I'm much more concerned with the questions of "what good states are possible?" And once good states are created that are possible, what good states are sustainable? And that, in my view, is a still understudied and very poorly understood issue.

For some reason, this made me think about software development. Programming styles, static and dynamic typing, software development methodologies, ... So much that is written about these topics tells us the right thing to do. "Do this, and you will be able to reliably make good software."

I know I've been partisan on some of these issues over the course of my time as a programmer, and I still have my personal preferences. But these days especially I find myself more interested in "what good states are sustainable?". What has worked for teams? What conditions seem to make those practices work well or not so well? How do teams adapt to changes in the domain or the business or the team's make-up?

This isn't to say that we can't draw conclusions about forces and context. For example, small teams seem to make it easier to adapt to changing conditions; to make it work with bigger teams, we need to design systems that encourage attention and feedback. But it does mean that we can help others succeed without always telling them what they must do. We can help them find a groove that syncs with them and the conditions under which they work.

Standing back and surveying the big picture, it seems that a lot of good states are sustainable, so long as we pay attention and adapt to the world around us. And that should be good news to us all.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

January 08, 2019 2:10 PM

Sometimes, Copy and Paste Is the Right Thing To Do

Last week I blogged about writing code that is easy to delete, drawing on some great lines from an old 'programming is terrible' post. Here's another passage from @tef's post that's worth thinking more about:

Step 1: Copy-paste code

Building reusable code is something that's easier to do in hindsight with a couple of examples of use in the code base, than foresight of ones you might want later. On the plus side, you're probably re-using a lot of code already just by using the file-system, why worry that much? A little redundancy is healthy.

It's good to copy-paste code a couple of times, rather than making a library function, just to get a handle on how it will be used. Once you make something a shared API, you make it harder to change.

There's not a great one-liner in there, but these paragraphs point to a really important lesson, one that we programmers sometimes have a hard time learning. We are told so often "don't repeat yourself" that we come to think that all repetition is the same. It's not.

One use of repetition is in avoiding what @tef calls, in another 'programming is terrible' post, "preemptive guessing". Consider the creation of a new framework. Oftentimes, designing a framework upfront doesn't work very well because we don't know the domain's abstractions yet. One of the best ways to figure out what they are is to write several applications first, and let the framework fall out of the applications. While doing this, repetition is our friend: it's most useful to know what things don't change from one application to another. This repetition is a hint on how to build the framework we need. I learned this technique from Ralph Johnson.

I use and teach a similar technique for programming in smaller settings, too. When we see two bits of code that resemble one another, it often helps to increase the duplication in order to eliminate it. (I learned this idea from Kent Beck.) In this case, the goal of the duplication is to find useful abstractions. Sometimes, though, code duplication is really a hint to think differently about a problem. Factoring out a function or class -- finding a new abstraction -- may be incidental to the learning that takes place.
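
Here is the move in miniature, in Python (my example, not Kent's):

    # Two bits of code that resemble one another...
    def total_price(items):
        total = 0
        for item in items:
            total += item.price
        return total

    def total_weight(items):
        total = 0
        for item in items:
            total += item.weight
        return total

    # ...massage them until they are the same shape, word for word,
    # and the abstraction falls out:
    def total_of(items, attribute):
        total = 0
        for item in items:
            total += getattr(item, attribute)
        return total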

For me, this line from the second programming-is-terrible post captures this idea perfectly:

... duplicate to find the right abstraction first, then deduplicate to implement it.

My spell checker objects to the word "deduplicate", but I'll allow it.

All of these ideas taken together are the reason that I think copy-and-paste gets an undeservedly bad name. Used properly, it is a valuable programming technique -- essential, really. I've long wanted to write a Big Ball of Mud-style paper about copy-and-paste patterns. There are plenty of good reasons why we write repetitive code and, as @tef says in the two posts I link to above, sometimes leaving duplication in your code is the right thing to do.

One final tribute to repetition for now. While researching this blog post, I ran across a blog entry of mine from October 2016. Apparently, I had just read @tef's Write code that is easy to delete... post and felt an undeniable urge to quote and comment on it. If you read that 2016 post, you'll see that my Writing code that is easy to delete post from last week duplicates it in spirit and, in a few cases, even the details. I swear that I read @tef's post again last week and wrote the new blog entry from scratch, with no memory of the 2016 events. I am perfectly happy with this second act. Sometimes, ideas circle through our brains again, changing us in imperceptible ways. As @tef says, a little redundancy is healthy.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

January 02, 2019 2:22 PM

Writing Code that is Easy to Delete

Last week someone tweeted a link to Write code that is easy to delete, not easy to extend. It contains a lot of great advice on how to create codebases that are easy to maintain and easy to change, the latter being an essential feature of almost any code that is the former. I liked this article so much that I wanted to share some of its advice here. What follows are a few of the many one- and two-liners that serve as useful slogans for building maintainable software, with light commentary.

... repeat yourself to avoid creating dependencies, but don't repeat yourself to manage them.

This line from the first page of the paper hooked me. I'm not sure I had ever had this thought, at least not so succinctly, but it captures a bit of understanding that I think I had. Reading this, I knew I wanted to read the rest of the article.

Make a util directory and keep different utilities in different files. A single util file will always grow until it is too big and yet too hard to split apart. Using a single util file is unhygienic.

This isn't the sort of witticism that I quote in the rest of this post, but it's solid advice that I've come to live by over the years. I have this pattern.

Boiler plate is a lot like copy-pasting, but you change some of the code in a different place each time, rather than the same bit over and over.

I really like the author's distinction between boilerplate and copy-and-paste. Copy-and-paste has valuable uses (heresy, I know; more later), whereas boilerplate sucks the joy out of almost every programmer's life.

You are writing more lines of code, but you are writing those lines of code in the easy-to-delete parts.

Another neat distinction. Even when we understand that lines of code are an expense as much as (or instead of) an investment, we know that sometimes we have to write more code. Just do it in units that are easy to delete.

A lesson in separating concerns, from Python libraries:

requests is about popular http adventures, urllib3 is about giving you the tools to choose your own adventure.

Layers! I have had users of both of these libraries suggest that the other should not exist, but they serve different audiences. They meet different needs in a way that more than makes up for the cost of the supposed duplication.

Building a pleasant to use API and building an extensible API are often at odds with each other.

There's nothing earth-shattering in this observation, but I like to highlight different kinds of trade-off whenever I can. Every important decision we make writing programs is a trade-off.

Good APIs are designed with empathy for the programmers who will use it, and layering is realising we can't please everyone at once.

This advice elaborates on the earlier quote: repeat yourself to avoid creating dependencies, but don't repeat yourself to manage them. Creating a separate API is one way to avoid depending on code that is hard to delete.

Sometimes it's easier to delete one big mistake than try to delete 18 smaller interleaved mistakes.

Sometimes it really is best to write a big chunk of code precisely because it is easy to delete. An idea that is distributed throughout a bunch of functions or modules has to be disentangled before you can delete it.

Becoming a professional software developer is accumulating a back-catalogue of regrets and mistakes.

I'm going to use this line in my spring Programming Languages class. There are unforeseen advantages to all the practice we profs ask students to do. That's where experience comes from.

We are not building modules around being able to re-use them, but being able to change them.

This is another good bit of advice for my students, though I'll have to word this one more clearly for them. When students learn to program, textbooks often teach them that the main reason to write a function is that you can reuse it later, thus saving the effort of writing similar code again. That's certainly one benefit of writing a function, but experienced programmers know that there are other big wins in creating functions, classes, and modules, and that these wins are often even more valuable than reuse. In my courses, I try to help students appreciate the value of names in understanding and modifying code. Modularity also makes it easier to change and, yes, delete code. Unfortunately, students don't always get the right kind of experience in their courses to develop this deeper understanding.

Although the single responsibility principle suggests that "each module should only handle one hard problem", it is more important that "each hard problem is only handled by one module".

Lovely. The single module that handles a hard problem is a point of leverage. It can be deleted when the problem goes away. It can be rewritten from scratch when you understand the problem better or when the context around the problem changes.

This line is the heart of the article:

The strategies I've talked about -- layering, isolation, common interfaces, composition -- are not about writing good software, but how to build software that can change over time.

Good software is software that you can change. One way to create software you can change is to write code that you can easily replace.

Good code isn't about getting it right the first time. Good code is just legacy code that doesn't get in the way.

A perfect aphorism to close the article, and a perfect way to close this post: Good code is legacy code that doesn't get in the way.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 25, 2018 10:50 AM

Seek Out Idea Amplifiers, Not Sound Absorbers

Richard Hamming -- he of Hamming codes, Hamming numbers, and Hamming distance -- wasn't a big fan of brainstorming. He preferred to talk about his ideas with another capable person, because the back-and-forth was more likely to help the idea reach "critical mass". But not every capable person is a good partner for such conversations:

There is also the idea I used to call 'sound absorbers'. When you get too many sound absorbers, you give out an idea and they merely say, "Yes, yes, yes." What you want to do is get that critical mass in action; "Yes, that reminds me of so and so," or, "Have you thought about that or this?" When you talk to other people, you want to get rid of those sound absorbers who are nice people but merely say, "Oh yes," and to find those who will stimulate you right back.

What a simple bit of advice: seek out idea amplifiers, not sound absorbers. Talk with people who help you expand and refine your ideas, by making connections to other work or by pushing you to consider implications or new angles. I think that talking to the right people can boost your work in another important way, too: they will feed your motivation, helping you to maintain the energy you need to stay excited and active.

This is one of the great benefits of blogs and Twitter, used well. We have so many more opportunities than ever before to converse with the right people. Unlike Hamming, I can't walk the halls of Bell Labs, or on any regular basis walk the halls of a big research university or one of the great computing companies. But I can read the blogs written by the researchers who work there, follow them on Twitter, and participate in amplifying conversations. Blogs and Twitter are great sources of both ideas and encouragement.

(The passage quoted above comes from the question-and-answer portion of Hamming's classic 1986 talk, You and Your Research. I re-read it this morning, as I occasionally do, because it's one of those papers that causes me to be more intentional in how I approach my work. Like Hamming, "I have a lot of faults", which means that there is considerable value to be found in judicious self-management. I need periodic booster shots.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

November 17, 2018 4:00 PM

Superior Ideas

In February 1943, while physicist Freeman Dyson was an undergrad at Cambridge, an American friend sent him a copy of Kurt Gödel's "The Consistency of the Continuum Hypothesis". Dyson wrote home about it to his parents:

I have been reading the immortal work (it is only sixty pages long) alternately with The Magic Mountain and find it hard to say which one is better. Mann of course writes better English (or rather the translator does); on the other hand the superiority of the ideas in Gödel just about makes up for that.

Imagine that, only five years later, Dyson would be "drinking tea with Gödel at his home in Princeton". Of course, after having taken classes with the likes of Hardy and Dirac, Dyson was well-prepared. He seems to have found himself surrounded by superior ideas much of his life and, despite his modesty, added a few himself.

I've never read The Magic Mountain, or any Mann, for that matter. I will correct that soon. However, Mann will have to wait until I finish Dyson's Maker of Patterns, in which I found this passage. It is a quite readable memoir that interleaves letters Dyson wrote to his family over the course of thirty-some years with explanatory text and historical asides. I'm glad I picked it up.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

October 05, 2018 3:39 PM

Why Patterns Failed and Why You Should Care

Beds are useful. I enjoyed my bed at the hotel last night. But you won't find a pattern called bed in A Pattern Language, and that's because we don't have a problem with beds. Beds are solved. What we have a problem with is building with beds. And that's why, if you look in A Pattern Language, you'll find there are a number of patterns that involve beds.

Now, the analogy I want to make is, basically, the design patterns in the book Design Patterns are at the same level as the building blocks in the book A Pattern Language. So Bridge, which is one of the design patterns, is the same kind of thing as column. You can call it a pattern, but it's not really a pattern that gets at the root problem we're trying to solve. And if Alexander had written a book that had a pattern called Bed and another one called Chair, I imagine that that book would have failed and not inspired all of the people that the actual book did inspire. So that, I claim, is why patterns failed.

Those two nearly-sequential paragraphs are from Patterns Failed. Why? Should We Care?, a talk Brian Marick gave at Deconstruct 2017. It's a good talk. Marick's provocative claim that, as an idea, software patterns failed is various degrees of true and false depending on how you define 'patterns' and 'failed'. It's hard to declare the idea an absolute failure, because a lot of software developers and UI designers use patterns to good effect as a part of their work every day. But I agree with Brian that software patterns failed to live up to their promise or to the goals of the programmers who worked so hard to introduce Christopher Alexander's work to the software community. I agree, too, that the software pattern community's inability to document and share patterns at the granularity that made Alexander's patterns so irresistible is a big part of why.

I was a second- or third-generation member of the patterns community, joining in after Design Patterns had been published and, like Marick, I worked mostly on the periphery. Early on I wrote patterns that related to my knowledge-based systems work. Much of my pattern-writing, though, was at the level of elementary patterns, the patterns that novice programmers learn when they are first learning to program. Even at that level, the most useful patterns often were ones that operated up one level from the building blocks that novices knew.

Consider Guarded Linear Search, from the Loop Patterns paper that Owen Astrachan and I workshopped at PLoP 1998. It helps beginners learn how to arrange a loop and an if statement in a way that achieves a goal. Students in my beginning courses often commented how patterns like this one helped them write programs because, while they understood if statements and for statements and while statements, they didn't always know what to do with them when facing a programming task. At that most elementary level, the Guarded Linear Search pattern was a pleasing -- and useful -- whole.
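
In Python, the skeleton of the pattern looks something like this (the paper's examples were in other languages; this is just the shape):

    def first_negative(numbers):
        # Guarded Linear Search: a loop visits each candidate in turn,
        # and an if statement guards the action we take on a match.
        for n in numbers:
            if n < 0:
                return n
        return None   # the search can fail; callers must check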

That said, there aren't many "solved problems" for beginners, so we often wrote patterns that dropped down to the level of building blocks simply to help novices learn basic constructs. Some of the Loop Patterns paper does that, as does Joe Bergin's Patterns for Selection. But work in the elementary patterns community would have been much more valuable, and potentially had more effect, if we had thought harder about how to find and document patterns at the level of Alexander's patterns.

Perhaps the patterns sub-community in which I've worked that best achieved its goals was the pedagogical patterns community. These are not software patterns but rather patterns of teaching techniques. They document solutions to problems that teachers face every day. I think I'd be willing to argue that the primary source of pedagogical patterns' effectiveness is that these solutions combine more primitive building blocks (delivery techniques, feedback techniques, interaction techniques) in a way that makes learning and instruction both more effective and more fun. As a result, they captured a lot of teachers' interest.

I think that Marick's diagnosis also points out the error in a common criticism of software patterns. Over the years, we often heard folks say that software patterns existed only because people used horrible languages like C++ and Java. In a more powerful language, any purported pattern would be implemented as a language primitive or could be made a primitive by writing a macro. But this misses the point of Alexander's insight. The problem in software development isn't with inheritance and message passing and loops, just as the problem in architecture isn't with beds and windows and space. It's with finding ways to arrange these building blocks in a way that's comfortable and nice and, to use Alexander's term, "life-giving". That challenge exists in all programming languages.

Finally, you probably won't be surprised to learn that I agree with Marick that we should all care that software patterns failed to live up to their promise. Making software is fun, but it's also challenging. Alexander's idea of a pattern language is one way that we might help programmers do their jobs better, enjoy their jobs more, and produce software that is both functional and habitable. The first pass was a worthy effort, and a second pass, informed by the lessons of the first, might get us closer to the goal.

Thanks to Brian for giving this talk and to the folks at Deconstruct for posting it online. Go watch it.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

September 30, 2018 6:40 PM

Strange Loop 3: David Schmüdde and the Art of Misuse

[Image: the splash screen of David Schmüdde's talk, "Misuse"]

This talk, the first of the afternoon on Day 1, opened with a familiar image: René Magritte's "this is not a pipe" painting, next to a picture of an actual pipe from some e-commerce site. Throughout the talk, speaker David Schmüdde returned to the distinction between thing and referent as he looked at the phenomenon of software users who used -- misused -- software to do something other than intended by the designer. The things they did were, or became, art.

First, a disclaimer: David is a former student of mine, now a friend, and one of my favorite people in the world. I still have in my music carousel a CD labeled "Schmudde Music!!" that he made for me just before he graduated and headed off to a master's program in music at Northwestern.

I often say in my conference reports that I can't do a talk justice in a blog entry, but it's even more true of a talk such as this one. Schmüdde demonstrated multiple works of art, both static and dynamic, which created a vibe that loses most of its zing when linearized in text. So I'll limit myself here to a few stray observations and impressions from the talk, hoping that you'll be intrigued enough to watch the video when it's posted.

Art is a technological endeavor. Rembrandt and hip hop don't exist without advances in art-making technology.

Misuse can be a form of creative experimentation. Check out Jodi, a website created in 1995 and still available. In the browser, it seems to be a work of ASCII art, but show the page source... (That's a lot harder these days than it was in 1995.) Now that is ASCII art.

Schmüdde talked about another work from the same era, entitled Rain. It used slowness -- of the network, of the browser -- as a feature. Old HTML (or was it a bug in an old version of Netscape Navigator?) allowed one HEAD tag in a file with multiple BODY tags. The artist created such a document that, when loaded in sequence, gave the appearance of rain falling in the browser. Misusing the tools under the conditions of the day enabled the artist to create an animation before animated GIFs, Flash, and other forms of animation existed.

The talk followed with examples and demos of other forms of software misuse, which could:

  • find bugs in a system
  • lead to new system features
  • illuminate a system in ways not anticipated by the software's creator

Schmüdde wondered, when we fix bugs in this way, do we make the resulting system, or the resulting interaction, less human?

Accidental misuse is life. We expect it. Intentional misuse is, or can be, art. It can surprise us.

What does art preservation look like for these works? The original hardware and software systems often are obsolete or, more likely, gone. To me, this is one of the great things about computers: we can simulate just about anything. Digital art preservation becomes a matter of simulating the systems or interactions that existed at the time the art was created. We are back to Magritte's pipe... This is not a work of art; it is a pointer to a work of art.

It is, of course, harder to recreate the experience of the art from the time it was created, but isn't this true of all art? Each of us experiences a work of art anew each time we encounter it. Our experience is never the same as the experience of those who were present when the work was first unveiled. It's often not even the same experience we ourselves had yesterday.

Schmüdde closed with a gentle plea to the technologists in the room to allow more art into their process. This is a new talk, and he was a little concerned about his ending. He may find a less abrupt way to end in the future, but to be honest, I thought what he did this time worked well enough for the day.

Even taking my friendship with the speaker into account, this was the talk of the conference for me. It blended software, users, technology, ideas, programming, art, the making of things, and exploring software at its margins. These ideas may appear at the margin, but they often lie at the core of the work. And even when they don't, they surprise us or delight us or make us think.

This talk was a solid example of what makes Strange Loop a great conference every year. There were a couple of other talks this year that gave me a similar vibe, for example, Hannah Davis's "Generating Music..." talk on Day 1 and Ashley Williams's "A Tale of Two asyncs" talk on Day 2. The conference delivers top-notch technical content but also invites speakers who use technology, and explore its development, in ways that go beyond what you find in most CS classrooms.

For me, Day One of the conference ended better than most: over a beer with David at Flannery's with good conversation, both about ideas from his talk and about old times, families, and the future. A good day.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Personal

September 20, 2018 4:44 PM

Special Numbers in a Simple Language

This fall I am again teaching our course in compiler development. Working in teams of two or three, students will implement from scratch a complete compiler for a simple functional language that consists of little more than integers, booleans, an if statement, and recursive functions. Such a language isn't suitable for much, but it works great for writing programs that do simple arithmetic and number theory. In the past, I likened it to an integer assembly language. This semester, my students are compiling a Pascal-like language of this sort that we call Flair.

If you've read my blog much in the falls over the last decade or so, you may recall that I love to write code in the languages for which my students write their compilers. It makes the language seem more real to them and to me, gives us all more opportunities to master the language, and gives us interesting test cases for their scanners, parsers, type checkers, and code generators. In recent years I've blogged about some of my explorations in these languages, including programs to compute Farey numbers and excellent numbers, as well as trying to solve one of my daughter's AP calculus problems.

When I run into a problem, I usually get an itch to write a program, and in the fall I want to write it in my students' language.

Yesterday, I began writing my first new Flair program of the semester. I ran across this tweet from James Tanton, which starts:

N is "special" if, in binary, N has a 1s and b 0s and a & b are each factors of N (so non-zero).

So, 10 is special because:

  • In binary, 10 is 1010.
  • 1010 contains two 1s and two 0s.
  • Two is a factor of 10.

9 is not special because its binary rep also contains two 1s and two 0s, but two is not a factor of 9. 3 is not special because its binary rep has no 0s at all.

My first thought upon seeing this tweet was, "I can write a Flair program to determine if a number is special." And that is what I started to do.

Flair doesn't have loops, so I usually start every new program by mapping out the functions I will need simply to implement the definition. This makes sure that I don't spend much time implementing loops that I don't need. I ended up writing headers and default bodies for three utility functions:

  • convert a decimal number to binary
  • count the number of times a particular digits occurs in a number
  • determine if a number x divides evenly into a number n

With these helpers, I was ready to apply the definition of specialness:

    return divides(count(1, to_binary(n)), n)
       and divides(count(0, to_binary(n)), n)

Calling to_binary on the same argument is wasteful, but Flair doesn't have local variables, either. So I added one more helper to implement the design pattern "Function Call as Variable Assignment", apply_definition:

    function apply_definition(binary_n : integer, n : integer) : boolean
and called it from the program's main:
    return apply_definition(to_binary(n), n)

This is only the beginning. I still have a lot of work to do to implement to_binary, count and divides, using recursive function calls to simulate loops. This is another essential design pattern in Flair-like languages.
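
For example, to_binary can represent the binary form as a plain decimal integer whose digits are bits, with recursion standing in for the loop. Here is a sketch of one approach, written in Python for readability -- the Flair version has the same shape:

    def to_binary(n):
        # 10 -> 1010: peel off the low bit, recur on the rest.
        if n < 2:
            return n
        return 10 * to_binary(n // 2) + n % 2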

As I prepared to discuss my new program in class today, I found a bug: my divides test was checking for factors of binary_n, not the decimal n. I also renamed a function and one of its parameters. Explaining my programs to students, a generalization of rubber duck debugging, often helps me see ways to make a program better. That's one of the reasons I like to teach.
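
For checking my work -- and, later, my students' compilers -- a few lines of Python make a handy oracle for the definition:

    def is_special(n):
        bits = bin(n)[2:]                    # e.g., 10 -> "1010"
        ones, zeros = bits.count("1"), bits.count("0")
        return (ones > 0 and zeros > 0
                and n % ones == 0 and n % zeros == 0)

    assert is_special(10)        # 1010: two 1s and two 0s; both divide 10
    assert not is_special(9)     # 1001: two does not divide 9
    assert not is_special(3)     # 11: no 0s at all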

Today I asked my students to please write me a Flair compiler so that I can run my program. The course is officially underway.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

July 28, 2018 11:37 AM

Three Things I Read This Morning

Why I Don't Love Gödel, Escher, Bach

I saw a lot of favorable links to this post a while back and finally got around to it. Meh. I generally agree with the author: GEB is verbose in places, there's a lot of unnecessary name checking, and the dialogues that lead off each chapter are often tedious. I even trust the author's assertion that Hofstadter's forays beyond math, logic, and computers are shallow.

So what? Things don't have to be perfect for me to like them, or for them to make me think. GEB was a swirl of ideas that caused me to think and helped me make a few connections. I'm sure if I read the book now that I would feel differently about it, but reading it when I did, as an undergrad CS major thinking about AI and the future, it energized me.

I do thank the author for his pointer (in a footnote) to Vi Hart's wonderful Twelve Tones. You should watch it. Zombie Schonberg!

The Web Aesthetic

This post wasn't quite what I expected, but even a few years old it has something to say to web designers today.

Everything on the web ultimately needs to degrade down to plain text (images require alt text; videos require transcripts), so the text editor might just become the most powerful app in the designer's toolbox.

XP Challenge: Compilers

People outside the XP community often don't realize how seriously the popularizers of XP explored the limitations of their own ideas. This page documents one of several challenges that push XP values and practices to the limits: When do they break down? Can they be adapted successfully to the task? What are the consequences of applying them in such circumstances?

Re-reading this old wiki page was worth it if only for this great line from Ron Jeffries:

The point of XP is to win, not die bravely.

Yes.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

July 16, 2018 2:08 PM

Code is better when written with collaboration in mind

In Collaboration considered helpful, Ken Perlin writes:

In the last few days I have needed to make the transition from writing a piece of software all on my own to bringing in a collaborator. Which means I've needed to go into my code and change a lot of things, in order to make everything easier to understand and communicate.

There was a part of me that felt grumpy about this. After all, I already knew exactly where everything was before this other person ever came on board.

But then I looked at the changes I had made, and realized that the entire system was now much cleaner, more robust, and far easier to maintain. Clearly there is something intrinsically better about code that is designed for collaboration.

Knowing that our code must communicate with other people causes us to write better code.

The above is an unusually large quotation from another post for me. Even so, Perlin makes a bigger point than this, so read the entire post.

By the way, Perlin's blog has probably been the happiest addition I've made to my blog reader over the past year. He writes short, thoughtful posts every day, on topics that include programming, teaching, academic life, and virtual reality. I enjoy almost all of them and always look forward to the next one.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

March 06, 2018 4:11 PM

A Good Course in Epistemology

Theoretical physicist Marcelo Gleiser, in The More We Know, the More Mystery There Is:

But even if we did [bring the four fundamental forces together in a common framework], and it's a big if right now, this "unified theory" would be limited. For how could we be certain that a more powerful accelerator or dark matter detector wouldn't find evidence of new forces and particles that are not part of the current unification? We can't. So, dreamers of a final theory need to recalibrate their expectations and, perhaps, learn a bit of epistemology. To understand how we know is essential to understand how much we can know.

People are often surprised to hear that, in all my years of school, my favorite course was probably PHL 440 Epistemology, which I took in grad school as a cognate to my CS courses. I certainly enjoyed the CS courses I took as a grad student, and as an undergrad, too, but my study of AI was enhanced significantly by courses in epistemology and cognitive psychology. The prof for PHL 440, Dr. Rich Hall, became a close advisor to my graduate work and a member of my dissertation committee. Dr. Hall introduced me to the work of Stephen Toulmin, whose model of argument influenced my work immensely.

I still have the primary volume of readings that Dr. Hall assigned in the course. Looking back now, I'd forgotten how many of W.V.O. Quine's papers we'd read... but I enjoyed them all. The course challenged most of my assumptions about what it means "to know". As I came to appreciate different views of what knowledge might be and how we come by it, my expectations of human behavior -- and my expectations for what AI could be -- changed. As Gleiser suggests, to understand how we know is essential to understanding what we can know, and how much.

Gleiser's epistemology meshes pretty well with my pragmatic view of science: it is descriptive, within a particular framework and necessarily limited by experience. This view may be why I gravitated to the pragmatists in my epistemology course (Peirce, James, Rorty), or perhaps the pragmatists persuaded me better than the others.

In any case, the Gleiser interview is a delightful and interesting read throughout. His humble view of science may get you thinking about epistemology, too.

... and, yes, that's the person for whom a quine in programming is named. Thanks to Douglas Hofstadter for coining the term and for giving us programming nuts a puzzle to solve in every new language we learn.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns, Personal

December 20, 2017 3:51 PM

== false

While grading one last project for the semester, I ran across this gem:

    if checkInputParameters() == False:
       [...]

This code was not written by a beginning programmer, but by a senior who will likely graduate in May. Sigh.

I inherited this course late in the semester, so it would be nice if I could absolve myself of responsibility for allowing a student to develop such a bad habit. But this isn't the first time I've seen this programming pattern. I've seen it in my courses and in other faculty's courses, in first-year courses and fourth-year courses. In fact, I was so certain that I blogged about the pattern before that I spent several minutes trolling the archive. Alas, I never found the entry.

Don't let anyone tell you that Twitter can't be helpful. Within minutes of tweeting my dismay, two colleagues suggested new strategies for dealing with this anti-pattern.

Sylvain Leroux invoked Star Trek:

James T. Kirk "Kobayashi Maru"-style solution: Hack CPython to properly parse that new syntax:

unless checkInputParameters(): ...

Maybe I'll turn this into a programming exercise for the next student in my compiler class who wanders down the path of temptation. Deep in my heart, though, I know that enterprising programmers will use the new type of statement to write this:

    unless checkInputParameters() == True:
       [...]

Agile Hulk suggested a fix:

    if (checkInputParameters() == False) == True:

This might actually be a productive pedagogical tack to follow: light-hearted and spot on, highlighting the anti-pattern's flaw by applying it to its own output. ("Anti-pattern eats its own dog food.") With any luck, some students will be enlightened, Zen-like. Others will furrow their brow and say "What?"
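For any students reading along, the remedy is to let the boolean be a boolean. Here is a minimal sketch of the idiom I'd rather see, with a hypothetical stand-in for the offending function:

    def checkInputParameters():            # hypothetical stand-in for the student's function
        return False

    if not checkInputParameters():         # the call already yields a boolean; just negate it
        print("invalid input")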

... and even if we look past the boolean crime in our featured code snippet, we really ought to take on that function name. 'Tis the season, though, and the semester is over. I'll save that rant for another time.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 25, 2017 7:52 AM

Sure, In the End, It's Just Objects and Functions

After implementing the patterns from James Noble's Arguments and Results, Gregory Brown reflects:

After taking a look at the finished Turtle object, I did wonder a little bit about whether the idea of a curried object in Ruby is nothing more than an ordinary object making use of object composition. However, because the name of the pattern is helpful for describing the intent of this sort of object in a succinct way, it may be a good label for us to use when discussing the merits of different design options.

In the end it is, indeed, all just objects, or functions, or whatever the primitive structures are in your programming style du jour. The value of design patterns in any style lies in teaching us stereotypical patterns of usage, the webs of objects or functions we might not have thought of in the course of design. Once we know about the patterns, their names become a valuable part of the vocabulary we use to talk about design.
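To make the idea concrete, here is a minimal sketch of a curried object -- in Python rather than Brown's Ruby, with Turtle-flavored names that are my own stand-ins, not his code:

    class Turtle:
        def draw_line(self, x0, y0, x1, y1):
            print("line from", (x0, y0), "to", (x1, y1))

    class PenAt:
        # a curried object: plain composition that fixes the turtle's current point
        def __init__(self, turtle, x, y):
            self.turtle, self.x, self.y = turtle, x, y

        def line_to(self, x, y):
            self.turtle.draw_line(self.x, self.y, x, y)
            return PenAt(self.turtle, x, y)    # the result curries the new point

    pen = PenAt(Turtle(), 0, 0)
    pen.line_to(10, 0).line_to(10, 10)         # each call supplies only the new endpoint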

I know that Brown's blog entry is six years old, that Noble's paper is seventeen, and that talking about design patterns is passé in many circles. But I still find the notion of patterns and pattern languages to be immensely useful, and even in 2017 I hear people dismiss patterns as just common sense or bad design dressed up in fancy writing. That makes me sad. We need more descriptions of experts' implicit holistic design knowledge, not fewer.

In closing, though, please don't read my reaction here as a diss of Brown's blog entry. His two entries on Noble's patterns do a nice job showing how these patterns can improve a Ruby program and exploring the trade-offs between the "before" and "after" versions of the code. I hope that he brings his Practicing Ruby project back to life.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

November 14, 2017 3:52 PM

Thinking about Next Semester's Course Already

We are deep into fall semester. The three teams in my compilers course are making steady progress toward a working compiler, and I'm getting so excited by that prospect that I've written a few new programs for them to compile. The latest two work with Kaprekar numbers.
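For instance, 45 is a Kaprekar number: 45² = 2025, and 20 + 25 = 45. Here is a quick Python sketch of the test -- mine for illustration, not one of the course's test programs, which are written in our compiler's source language:

    def is_kaprekar(n):
        # n is a Kaprekar number if its square splits into two parts that sum to n
        square = str(n * n)
        for i in range(1, len(square)):
            left, right = int(square[:i]), int(square[i:])
            if right > 0 and left + right == n:
                return True
        return n == 1                          # 1 * 1 = 1 splits only trivially

    print([n for n in range(1, 1000) if is_kaprekar(n)])
    # [1, 9, 45, 55, 99, 297, 703, 999]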

Yet I've also found myself thinking already quite a bit about my spring programming languages course.

I have not made significant changes to this course (which introduces students to Racket, functional programming, recursive programming over algebraic data types, and a few principles of programming languages) in several years. I don't know if I'm headed for a major re-design yet, but I do know that several new ideas are commingling in my mind and encouraging me to think about improvements to the course.

The first trigger was reading How to Switch from the Imperative Mindset, which approaches learning functional style explicitly as a matter of establishing new habits. My students come to the course having learned an imperative style in Python, perhaps with some OO in Java thrown in. Most of them are not yet 100% secure in their programming skills, and the thought of learning a new style is daunting. They don't come to the course asking for a new set of habits.

One way to develop a new set of habits is to recognize the cues that trigger an old habit, learn a new response, and then rehearse that response until it becomes a new habit. The How to Switch... post echoes a style that I have found effective when teaching OOP to programmers with experience in a procedural language, and I'm thinking about how to re-tool part of my course to use this style more explicitly when teaching FP.

My idea right now is something like this. Start with simple examples from the students' experience processing arrays and lists of data. Then work through solutions in sequence, such as:

  1. first, use a loop of the sort with which they are familiar, the body of which acts on each item in the collection
  2. then, move the action into a function, which the loop calls for each item in the collection
  3. finally, map the function over the items in the collection

We can also do this with built-in functions, perhaps to start, which eliminates the need to write a user-defined function.
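Sketched in Python, the language most of my students arrive with, the progression might look something like this; doubling numbers stands in for any per-item action:

    nums = [1, 2, 3, 4]

    # 1. a familiar loop, with the action written in its body
    doubled = []
    for n in nums:
        doubled.append(2 * n)

    # 2. move the action into a function, which the loop calls for each item
    def double(n):
        return 2 * n

    doubled = []
    for n in nums:
        doubled.append(double(n))

    # 3. map the function over the items in the collection
    doubled = list(map(double, nums))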

In effect, this refactors code that the students are already comfortable with toward common functional patterns. I can use the same sequence of steps for mapping, folding, and reducing, which will reinforce the thinking habits students need to begin writing FP code from the original cues. I'm only just beginning to think about this approach, but I'm quite comfortable using a "refactoring to patterns" style in class.

Going in this direction will help me achieve another goal I have in mind for next semester: making class sessions more active. This was triggered by my post-mortem of the last course offering. Some early parts of the course consist of too much lecture. I want to get students writing small bits of code sooner, but with more support for taking small, reliable steps.

Paired with this change to what happens in class are changes to what happens before students come to class. Rather than me talking about so many things in class, I hope to have

  • students reading clear expositions of the material, in small units that are followed immediately by
  • students doing more experimentation on their own in Dr. Racket, learning from the experiments and learning to find information about language features as they need them.

This change will require me to package my notes differently and also to create triggers and scaffolding for the students' experimentation before coming to class. I'm thinking of this as something like a flipped classroom, but with "watching videos" replaced by "playing with code".

Finally, this blog post triggered a latent desire to make the course more effective for all students, wherever they are on the learning curve. Many students come to the course at roughly the same level of experience and comfort, but a few come in struggling from their previous courses, and a few come in ready to take on bigger challenges. Even those broad categories are only approximate equivalence classes; each student is at a particular point in the development we hope for them. I'd like to create experiences that can help all of these students learn something valuable for them.

I've only begun to think about the ideas in that post. Right now, I'm contemplating two of the ideas from the section on getting to know my students better: gathering baseline data early on that I can use to anchor the course, and viewing grading as planning. Anything that can turn the drudgery of grading into a productive part of the course for me is likely to improve my experience in the course, and that is likely to improve my students' experience, too.

I have more questions than answers at this point. That's part of the fun of re-designing a course. I expect that things will take better shape over the next six weeks or so. If you have any suggestions, email me or tweet to me at @wallingf.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

October 31, 2017 3:44 PM

Chuck Moore, Extreme Programmer

If you have read any of Chuck Moore's work, you know that he is an extreme programmer in the literal sense of the word, with a unique philosophy that embraces small, fast, self-contained programs in a spare language and an impressive development environment. But as I read the opening sections of Moore's book Programming a Problem-Oriented Language, I began to mistake him for an XP-style extreme programmer.

Here is You Aren't Gonna Need It or, as Moore says, Do Not Speculate!:

Do not put code in your program that might be used. Do not leave hooks on which you can hang extensions. The things you might want to do are infinite; that means that each one has 0 probability of realization. If you need an extension later, you can code it later - and probably do a better job than if you did it now. And if someone else adds the extension, will they notice the hooks you left? Will you document that aspect of your program?

Moore also has a corollary called Do It Yourself!, which encourages you, in general, to write your own subroutines rather than import a canned subroutine from a library. That doesn't sound like Doing the Simplest Thing That Can Work, but his intent is similar: an external routine is a dependency, and it probably doesn't match the specific needs of the problem you are solving. Indeed, Moore closes this section with:

Let me take a stand: I can't solve the problems of the world. With luck, I can write a good program.

That could well be the manifesto of XP.

In the next section, a preview of the rest of the book, Moore tells his readers that they won't find flowcharts in the pages to come documenting the algorithms he describes:

I've never liked them, for they seem to include a useless amount of information - either too little or too much. Besides they imply a greater rigidity in program structure than usually exists. I will be quite specific about what I think you should do and how you should do it. But I will use words, and not diagrams. I doubt that you would give a diagram the attention it deserved, anyway. Or that I would in preparing it.

Other than using a few words, he lets the code stand for itself. Documentation would not add enough value to outweigh the cost of preparing it, or of reading it. Usually, the best thing to do with Moore's books is to Listen To The Code: settle into your Forth environment, type the code in, run it, and learn from it.

There is no accounting for my weird fascination with stack-based languages. Chuck Moore always gives me more to think about than I expected. I suspect that he would be at odds with some of the practices encouraged by XP, but I often feel the spirit of XP in his philosophy.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

October 12, 2017 3:35 PM

Sometimes, We Need to Make a Better Tool

I learned about a couple of cool CLI tools from Nikita Sobolev's Using Better CLIs. hub and tig look like they may be worth a deeper look. This article also reminded me of one of the examples in the blog entry I rmed the other day. It reflects a certain attitude about languages and development.

One of the common complaints about OOP is that what would be a single function in other programming styles usually ends up distributed across multiple classes in an OO program. For example, instead of:

    void draw(Shape s) {
       case s of
          Circle : [code for circle]
          Square : [code for square]
          ...
    }

the code for the individual shapes ends up in the classes for Circle, Square, and so on. If you have to change the drawing code for all of the shapes, you have to track down all of the classes and modify them individually.
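In the OO version, that same logic is spread across the classes, along these lines (a sketch of my own):

    class Circle:
        def draw(self):
            print("[code for circle]")         # the circle-drawing code lives here

    class Square:
        def draw(self):
            print("[code for square]")         # the square-drawing code lives here

    for shape in [Circle(), Square()]:
        shape.draw()                           # polymorphic dispatch replaces the case statement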

This is true, and it is a serious issue. We can debate the relative benefits and costs of the different designs, of course, but we might also think about ways that our development tools can help us.

As a grad student in the early 1990s, I worked in a research group that used VisualWorks Smalltalk to build all of its software. Even within a single Smalltalk image, we faced this code-all-over-the-place problem. We were adding methods to classes and modifying methods all the time as part of our research. We spent a fair amount of time navigating from class to class to work on the individual methods.

Eventually, one of my fellow students had an epiphany: we could write a new code browser. We would open this browser on a particular class, and the browser would provide a pane listing that class, all of its subclasses, and all of their subclasses. When we selected a method in the root class, the browser enabled us to click on any of the subclasses, see the code for the subclass's corresponding method, and edit it there. If the class didn't have an overriding method, we could add one in the empty pane, with the method signature supplied by the browser.

This browser didn't solve all of the problems we had learning to manage a large code base spread out over many classes, but it was a huge win for dealing with the specific issue of an algorithm being distributed across several kinds of object. It also taught me two things:

  • to appreciate the level of control that Smalltalk gave developers to inspect code and shape the development experience
  • to appreciate the mindset that creating new tools is the way to mitigate many problems in software development, if not to solve them completely

The tool-making mindset is one that I came to appreciate and understand more and more as the years passed. I'm disappointed whenever I don't put it to good use, but oftentimes I wise up and make the tools I need to help me do my work.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

September 26, 2017 3:58 PM

Learn Exceptions Later

Yesterday, I mentioned rewriting the rules for computing FIRST and FOLLOW sets using only "plain English". As I was refactoring my descriptions, I realized that one of the reasons students have difficulty with many textbook treatments of the algorithms is that the books give complete and correct definitions of the sets upfront. The presence of X := ε rules complicates the construction of both sets, but they are unnecessary to understanding the commonsense ideas that motivate the sets. Trying to deal with ε too soon can interfere with the students learning what they need to learn in order to eventually understand ε!

When I left the ε rules out of my descriptions, I ended up with what I thought was an approachable set of rules:

  • The FIRST set of a terminal contains only the terminal itself.

  • To compute FIRST for a non-terminal X, find all of the grammar rules that have X on the lefthand side. Add to FIRST(X) all of the items in the FIRST set of the first symbol of each righthand side.

  • The FOLLOW set of the start symbol contains the end-of-stream marker.

  • To compute FOLLOW for a non-terminal X, find all of the grammar rules that have X on the righthand side. If X is followed by a symbol in the rule, add to FOLLOW(X) all of the items in the FIRST set of that symbol. If X is the last symbol in the rule, add to FOLLOW(X) all of the items in the FOLLOW set of the symbol on the rule's lefthand side.
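To show how mechanical these ε-free rules are, here is a small Python sketch of the FIRST computation. The grammar representation and function names are my own choices for illustration, not code from the course:

    # each rule is a (lhs, rhs) pair, where rhs is a non-empty list of symbols
    def first_sets(rules, terminals):
        first = {t: {t} for t in terminals}    # FIRST of a terminal is the terminal itself
        for lhs, _ in rules:
            first.setdefault(lhs, set())
        changed = True
        while changed:                         # iterate until no set grows
            changed = False
            for lhs, rhs in rules:
                new = first.get(rhs[0], set()) - first[lhs]
                if new:                        # add FIRST of the first righthand-side symbol
                    first[lhs] |= new
                    changed = True
        return first

    rules = [("E", ["T", "+", "E"]), ("E", ["T"]),
             ("T", ["int"]), ("T", ["(", "E", ")"])]
    print(first_sets(rules, {"+", "int", "(", ")"}))
    # FIRST(E) = FIRST(T) = {'int', '('}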

These rules are incomplete, but they have offsetting benefits. Each of these cases is easy to grok with a simple example or two. They also account for a big chunk of the work students need to do in constructing the sets for a typical grammar. As a result, they can get some practice building sets before diving into the gnarlier details of ε, which affects both of the main rules above in a couple of ways.

This seems like a two-fold application of the Concrete, Then Abstract pattern. The first is the standard form: we get to see and work with accessible concrete examples before formalizing the rules in mathematical notation. The second involves the nature of the problem itself. The rules above are the concrete manifestation of FIRST and FOLLOW sets; students can master them before considering the more abstract ε cases. The abstract cases are the ones that benefit most from using formal notation.

I think this is an example of another pattern that works well when teaching. We might call it "Learn Exceptions Later", "Handle Exceptions Later", "Save Exceptions For Later", or even "Treat Exceptions as Exceptions". (Naming things is hard.) It is often possible to learn a substantial portion of an idea without considering exceptions at all, and doing so prepares students for learning the exceptions anyway.

I guess I now have at least one idea for my next PLoP paper.

Ironically, writing this post brings to mind a programming pattern that puts exceptions up top, which I learned during the summer Smalltalk taught me OOP. Instead of writing code like this:

    if normal_case(x) then
       // a bunch
       // of lines
       // of code
       // processing x
    else
       throw_an_error

you can write:

    if abnormal_case(x) then
       throw_an_error

    // a bunch
    // of lines
    // of code
    // processing x

This idiom brings the exceptional case to the top of the function and dispenses with it immediately. At the same time, it makes the normal case the main focus of the function, unindented and clear to the eye. It may look like this idiom violates the "Save Exceptions For Later" pattern, but code of this sort can be a natural outgrowth of following the pattern. First, we implement the function to do its normal business and make sure that it handles all of the usual cases. Only then do we concern ourselves with the exceptional case, and we build it into the function with minimal disruption to the code.

This pattern has served me well over the years, far beyond Smalltalk.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 25, 2017 3:01 PM

A Few Thoughts on My Compilers Course

I've been meaning to blog about my compilers course for more than a month, but life -- including my compilers course -- has kept me busy. Here are three quick notes to prime the pump.

  • I recently came across Lindsey Kuper's My First Fifteen Compilers and thought again about this unusual approach to a compiler course: one compiler a week, growing last week's compiler with a new feature or capability, until you have a complete system. Long, long-time readers of this blog may remember me writing about this idea once over a decade ago.

    The approach still intrigues me. Kuper says that it was "hugely motivating" to have a working compiler at the end of each week. In the end I always shy away from the approach because (1) I'm not yet willing to adopt for my course the Scheme-enabled micro-transformation model for building a compiler and (2) I haven't figured out how to make it work for a more traditional compiler.

    I'm sure I'll remain intrigued and consider it again in the future. Your suggestions are welcome!

  • Last week, I mentioned on Twitter that I was trying to explain how to compute FIRST and FOLLOW sets using only "plain English". It was hard. Writing a textual description of the process made me appreciate the value of using and understanding mathematical notation. It is so expressive and so concise. The problem for students is that it is also quite imposing until they get it. Before then, the notation can be a roadblock on the way to understanding something at an intuitive level.

    My usual approach in class to FIRST and FOLLOW sets, as for most topics, is to start with an example, reason about it in commonsense terms, and only then to formalize. The commonsense reasoning often helps students understand the formal expression, thus removing some of its bite. It's a variant of the "Concrete, Then Abstract" pattern.

    Mathematical definitions such as these can motivate some students to develop their formal reasoning skills. Many people prefer to let students develop their "mathematical maturity" in math courses, but this is really just an avoidance mechanism. "Let the Math department fail them" may solve a practical problem, but sometimes we CS profs have to bite the bullet and help our students get better when they need it.

  • I have been trying to write more code for the course this semester, both for my enjoyment (and sanity) and for use in class. Earlier, I wrote a couple of toy programs such as a Fizzbuzz compiler. This weekend I took a deeper dive and began to implement my students' compiler project in full detail. It was a lot of fun to be deep in the mire of a real program again. I have already learned and re-learned a few things about Python, git, and bash, and I'm only a quarter of the way in! Now I just have to make time to do the rest as the semester moves forward.

In her post, Kuper said that her first compiler course was "a lot of hard work" but "the most fun I'd ever had writing code". I always tell my students that this course will be just like that for them. They are more likely to believe the first claim than the second. Diving in, I'm remembering those feelings firsthand. I think my students will be glad that I dove in. I'm reliving some of the challenges of doing everything that I ask them to do. This is already generating a new source of empathy for my students, which will probably be good for them come grading time.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

July 09, 2017 9:04 AM

What It's Like To Be A Scholar

I just finished reading Tyler Cowen's recent interview with historian Jill Lepore. When Cowen asks Lepore about E.B. White's classic Stuart Little, Lepore launches into a story that illustrates quite nicely what it's like to be a scholar.

First, she notes that she was writing a review of a history of children's literature and kept coming across throwaway lines of the sort "Stuart Little, published in 1945, was of course banned." This triggered the scholar's impulse:

And there's no footnote, no explanation, no nothing.

At the time, one of my kids was six, and he was reading Stuart Little, we were reading at night together, and I was like, "Wait, the story about the mouse who drives the little car and rides a sailboat across the pond in Central Park, that was a banned book? What do I not know about 1945 or this book? What am I missing?"

These last two sentences embody the scholar's orientation. "What don't I know about these two things I think I know well?"

And I was shocked. I really was shocked. And I was staggered that these histories of children's literature couldn't even identify the story. I got really interested in that question, and I did what I do when I get a little too curious about something, is I become obsessive about finding out everything that could possibly be found out.

Next comes obsession. Lepore then tells a short version of the story that became her investigative article for The New Yorker, which she wrote because, as she says, sometimes "I just fall into a hole in the ground, and I can't get out until I have gotten to the very, very bottom of it."

Finally, three transcript pages later, Lepore says:

It was one of the most fun research benders I've ever been on.

It ends in fun.

You may be a scholar if you see this pattern in yourself. To me, one of the biggest downsides of becoming department head is having less time to fall down some unexpected hole and follow its questions until I reach the bottom. I miss that freedom.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Personal, Teaching and Learning

May 31, 2017 2:28 PM

Porting Programs, Refactoring, and Language Translation

In his commonplace book A Certain World, W.H. Auden quotes C.S. Lewis on the controversial nature of translation:

[T]ranslation, by its very nature, is a continuous implicit commentary. It can become less tendentious only by becoming less of a translation.

Lewis was merely acknowledging a truth about language: Translators must have a point of view, and often that point of view will be controversial.

I once saw Kurt Vonnegut speak with a foreign language class here many years ago. One of the students asked him what he thought about the quality of the translations done for his books. Vonnegut laughed and said that his books were so peculiar and so steeped in Americana that translating one was akin to writing a new book. He said that his translators deserved all the royalties from the books they created by translating him. They had to write brand new works.

These memories came to mind again recently while I was reading Tyler Cowen's conversation with Jhumpa Lahiri, especially when Lahiri said this:

At one point I was talking about this idea, in antiquity: in Latin, the word for "translator" is "interpreter". I teach translation now, and I talk a lot to my students about translation being the most intimate form of reading and how there was the time when translating and interpreting and analyzing were all one thing.

As my mind usually does, it began to think about computer programs.

Like many programmers, I often find myself porting a program from one language to another. This is clearly translation but, as Vonnegut and Lahiri tell us, it is also a form of interpretation. To port a piece of code, I have to understand its meaning and express that meaning in a new language. That language has its own constructs, idioms, patterns, and set of community practices and expectations. To port a program, one must have a point of view, so the process can be, to use Lewis's word, tendentious.

I often refactor code, too, both my own programs and programs written by others. This, too, is a form of translation, even though it leaves the new code written in the same language as the original. Refactoring is necessarily an opinionated act, and thus tendentious.

Occasionally, I refactor a program in order to learn what it does and how it does it. In those cases, I'm not judging the original code as anything but ill-suited to my current state of knowledge. Even so, when I get done, I usually like my version better, if only a little bit. It expresses what I learned in the process of rewriting the code.

It has always been hard for me to port a program without refactoring it, and now I understand why. Both activities are a kind of translation, and translation is by its nature an activity that requires a point of view.

This fall, I will again teach our "Translation of Programming Languages" course. Writing a compiler requires one to become intimate not only with specific programs, the behavior of which the compiler must preserve, but also the language itself. At the end of the project, my students know the grammar, syntax, and semantics of our source language in a close, almost personal way. The target language, too. I don't mind if my students develop a strong point of view, even a controversial one, along the way. (I'm actually disappointed if the stronger students do not!) That's a part of writing new software, too.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Software Development, Teaching and Learning

May 30, 2017 4:28 PM

Learning by Guided Struggle

A few years ago, someone asked MathOverflow, "Why do so many textbooks have so much technical detail and so little enlightenment?" Some CS textbooks suffer from this problem, too, though these days the more likely failing is that the book tries to provide too much context: it motivates so much that it drowns the reader in a lot of unhelpful words.

The top answer on MathOverflow (via Brian Marick) points out that the real problem does not usually lie in the lack of motivation or context provided by textbooks. The real goal is to learn how to do math, not to "know" it. That is even more true of software development. A textbook can't really teach anyone to write programs; most of that is learned by doing. Perhaps the best purpose that the text can serve, says the answer, is to show the reader what he or she needs to learn. From there, the reader must go off and practice.

How does learning occur from there?

Based on my own experience as both a student and a teacher, I have come to the conclusion that the best way to learn is through "guided struggle". You have to do the work yourself, but you need someone else there to either help you over obstacles you can't get around despite a lot of effort or provide you with some critical knowledge (usually the right perspective but sometimes a clever trick) you are missing. Without the prior effort by the student, the knowledge supplied by a teacher has much less impact.

Some college CS students seem to understand this, or perhaps they simply get lucky because they are motivated to program for some other reason. They go off and try to do something using the knowledge they have. Then they come to class, or to the prof's office hours, to ask questions about what does not go smoothly. Students who skip the practice and hope that lectures will soak into them like a magic balm generally find that they don't know much when they attempt problems after class.

The MathOverflow answer matches up pretty well with my experience. Teachers and written material can have strong positive effect on learning, but they are most effective once the student has engaged with the material by trying to solve problems or write code. The teacher's job then has two parts: First, create conditions in which students can work productively. Second, pay close attention to what students are doing, diagnose misconceptions and mistakes, and help students get back onto a productive path by pointing out missing practices or bits of knowledge.

All of this reminds me of some of my more effective class sessions teaching novice programmers, using design patterns. A typical session looks something like this:

  • I give the students a problem to solve.
  • Students work on a solution, using techniques that have worked in the past.
  • They encounter problems, because the context around the problem has shifted in ways that they can't see given only what they know.
  • We discuss the forces at play and tease out the underlying problem.
  • I demonstrate the pattern's solution.
  • Ahhhh.

This is a guided struggle in the small. Students then go off to write a larger program that lets them struggle a bit more, and we discuss whatever gets in their way.

A final note... One of the comments on the answer points out that a good lecture can "do" math (or CS), rather than "tell", and that such lectures can be quite effective. I agree, but in my experience this is one of the hardest skills for a professor to develop. Once I have solved a problem, it is quite difficult for me to make it look to my students as if I am solving it anew in class. The more ingrained the skill, the harder it is for me to lecture about it in a way that is more helpful than telling a story. Such stories are an essential tool in the teacher's toolbox, but their value lies more in motivating students than in teaching them how to write programs. Students still have to go off and do the hard work themselves. The teacher's job is to guide them through their struggles.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

April 28, 2017 11:27 AM

Data Compression and the Complexity of Consciousness

Okay, so this is cool:

Neuroscientists stimulate the brain with brief pulses of energy and then record the echoes that bounce back. Dreamless sleep and general anaesthesia return simple echoes; brains in conscious states produce more complex patterns. Then comes a little inspiration from data compression:

Excitingly, we can now quantify the complexity of these echoes by working out how compressible they are, similar to how simple algorithms compress digital photos into JPEG files. The ability to do this represents a first step towards a "consciousness-meter" that is both practically useful and theoretically motivated.

This made me think of Chris Ford's StrangeLoop 2015 talk about using compression to understand music. Using compressibility as a proxy for complexity gives us a first opportunity to understand all sorts of phenomena about which we are collecting data. Kolmogorov complexity is a fun tool for programmers to wield.

The passage above is from an Aeon article on the study of consciousness. I found it an interesting read from beginning to end.
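The proxy is easy to play with, too. Here is a minimal sketch using Python's zlib -- my toy version of the idea, not the researchers' measure:

    import os
    import zlib

    def complexity_proxy(data):
        # compressed size relative to original: a crude stand-in for complexity
        return len(zlib.compress(data, 9)) / len(data)

    print(complexity_proxy(b"abab" * 250))     # highly regular: ratio well under 1
    print(complexity_proxy(os.urandom(1000)))  # incompressible noise: ratio near 1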


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

March 16, 2017 8:50 AM

Studying Code Is More Like Natural Science Than Reading

A key passage from Peter Seibel's 2014 essay, Code Is Not Literature:

But then it hit me. Code is not literature and we are not readers. Rather, interesting pieces of code are specimens and we are naturalists. So instead of trying to pick out a piece of code and reading it and then discussing it like a bunch of Comp Lit. grad students, I think a better model is for one of us to play the role of a 19th century naturalist returning from a trip to some exotic island to present to the local scientific society a discussion of the crazy beetles they found: "Look at the antenna on this monster! They look incredibly ungainly but the male of the species can use these to kill small frogs in whose carcass the females lay their eggs."

The point of such a presentation is to take a piece of code that the presenter has understood deeply and for them to help the audience understand the core ideas by pointing them out amidst the layers of evolutionary detritus (a.k.a. kludges) that are also part of almost all code. One reasonable approach might be to show the real code and then to show a stripped down reimplementation of just the key bits, kind of like a biologist staining a specimen to make various features easier to discern.

My scientist friends often like to joke that CS isn't science, even as they admire the work that computer scientists and programmers do. I think Seibel's essay expresses nicely one way in which studying software really is like what natural scientists do. True, programs are created by people; they don't exist in the world as we find it. (At least programs in the sense of code written by humans to run on a computer.) But they are created under conditions that look a lot more like biological evolution than, say, civil engineering.

As Hal Abelson says in the essay, most real programs end up containing a lot of stuff just to make it work in the complex environments in which they operate. The extraneous stuff enables the program to connect to external APIs and plug into existing frameworks and function properly in various settings. But the extraneous stuff is not the core idea of the program.

When we study code, we have to slash our way through the brush to find this core. When dealing with complex programs, this is not easy. The evidence of adaptation and accretion obscures everything we see. Many people do what Seibel does when they approach a new, hairy piece of code: they refactor it, decoding the meaning of the program and re-coding it in a structure that expresses how they understand it. Who knows; the original program may well have looked like this simple core once, before it evolved strange appendages in order to adapt to the conditions in which it needed to live.

The folks who helped to build the software patterns community recognized this. They accepted that every big program "in the wild" is complex and full of cruft. But they also asserted that we can study such programs and identify the recurring structures that enable complex software both to function as intended and to be open to change and growth at the hands of programmers.

One of the holy grails of software engineering is to find a way to express the core of a system in a clear way, segregating the extraneous stuff into modules that capture the key roles that each piece of cruft plays. Alas, our programs usually end up more like the human body: a mass of kludges that intertwine to do some very cool stuff just well enough to succeed in a harsh world.

And so: when we read code, we really do need to bring the mindset and skills of a scientist to our task. It's not like reading People magazine.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

January 28, 2017 8:10 AM

Curiosity on the Chessboard

I found a great story from Lubomir Kavalek in his recent column, Chess Champions and Their Queens. Many years ago, Kavalek was talking with Berry Withuis, a Dutch journalist, about Rashid Nezhmetdinov, who had played two brilliant queen sacrifices in the span of five years. The conversation reminded Withuis of a question he once asked of grandmaster David Bronstein:

"Is the Queen stronger than two light pieces?"

(The bishop and knight are minor, or "light", pieces.)

The former challenger for the world title took the question seriously. "I don't know," he said. "But I will tell you later."

That evening Bronstein played a simultaneous exhibition in Amsterdam and whenever he could, he sacrificed his Queen for two minor pieces. "Now I know," he told Withuis afterwards. "The Queen is stronger."

How is that for an empirical mind? Most chessplayers would have immediately answered "yes" to Withuis's question. But Bronstein -- one of the greatest players never to be world champion and author of perhaps the best book of tournament analysis in history -- didn't know for sure. So he ran an experiment!

We should all be so curious. And humble.

I wondered for a while if Bronstein could have improved his experiment by channeling Kent Beck's Three Bears pattern. (I'm a big fan of this learning technique and mention it occasionally here, most recently last summer.) This would require him to play many games from the other side of the sacrifice as well, with a queen against his opponents' two minor pieces. Then I realized that he would have a hard time convincing any of his opponents to sacrifice their queens so readily! This may be the sort of experiment that you can only conduct from one side, though in the era of chess computers we could perhaps find, or configure, willing collaborators.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

January 10, 2017 4:07 PM

Garbage Collection -- and the Tempting Illusion of Big Breakthroughs

Good advice in this paragraph, paraphrased lightly from Modern Garbage Collection:

Garbage collection is a hard problem, really hard, one that has been studied by an army of computer scientists for decades. Be very suspicious of supposed breakthroughs that everyone else missed. They are more likely to just be strange or unusual tradeoffs in disguise, avoided by others for reasons that may only become apparent later.

It's wise always to be on the lookout for "strange or unusual tradeoffs in disguise".


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

December 27, 2016 8:36 AM

There's No Right Way To Write All Programs

In the Paris Review's The Art of Fiction No. 183, the interviewer asks Tobias Wolff for some advice. Wolff demurs:

Writers often give advice they don't follow to the letter themselves. And so I tend to receive their commandments warily.

This was refreshing. I also tend to hear advice from successful people with caution.

Wolff is willing, however, to share stories about what has worked for him. He just doesn't think what works for him will necessarily work for anyone else. He doesn't even think that what works for him on one story will work for him on the next. Eventually, he sums up his advice with this:

There's no right way to tell all stories, only the right way to tell a particular story.

Wolff follows a few core practices that keep him moving forward every day, but he isn't dogmatic about them. He does whatever he needs to do to get the current story written -- even if it means moving to Italy for several months.

Wolff is talking about short stories and novels, but this sentiment applies to more than writing. It captures what is, for me, the fundamental attitude of agile software developers: There is no right way to write all programs, only a good way to write each particular program. We find that certain programming practices -- taking small steps, writing tests early, refactoring mercilessly, pairing -- apply to most tasks. These practices are so powerful precisely because they give us feedback frequently and help us adjust course quickly.

But when conditions change around us, we must be willing to adapt. (Even if that means moving to Italy for several months.) This is what it means to be agile.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

December 09, 2016 1:54 PM

Two Quick Hits with a Mathematical Flavor

I've been wanting to write a blog entry or two lately about my compiler course and about papers I've read recently, but I've not managed to free up much time as the semester winds down. That's one of the problems with having Big Ideas to write about: they seem daunting and, at the very least, take time to work through.

So instead here are two brief notes about articles that crossed my newsfeed recently and planted themselves in my mind. Perhaps you will enjoy them even without much commentary from me.

A Student's Unusual Proof Might Be A Better Proof

I asked a student to show that between any two rationals is a rational.

She did the following: if x < y are rational, then take a rational δ << y-x and use x+δ.

I love the student's two proofs in the article! Student programmers are similarly creative. Their unusual solutions often expose biases in my thinking and give me new ways to think about a problem. If nothing else, they help me understand better how students think about ideas that I take for granted.

Numberless Word Problems

Some girls entered a school art competition. Fewer boys than girls entered the competition.

She projected her screen and asked, "What math do you see in this problem?"

Pregnant pause.

"There isn't any math. There aren't any numbers."

I am fascinated by the possibility of adapting this idea to teaching students to think like a programmer. In an intro course, for example, students struggle with computational ideas such as loops and functions even though they have a lot of experience with these ideas embodied in their daily lives. Perhaps the language we use gets in the way of them developing their own algorithmic skills. Maybe I could use computationless word problems to get them started?

I'm giving serious thought to ways I might use this approach to help students learn functional programming in my Programming Languages course this spring. The author describes how to write numberless word problems, and I'm wondering how I might bring the same philosophy to computer science. If you have any ideas, please share!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

November 23, 2016 3:17 PM

Patterns and Motives

This week, I read a cool article that covered a lot of ground: Feynman diagrams, experiments at the Large Hadron Collider, algebraic geometry, pendulums, and periods. This paragraph even made me think about software:

That same answer -- the unique thing at the center of all these cohomology theories -- was what Grothendieck called a "motive". "In music it means a recurring theme. For Grothendieck a motive was something which is coming again and again in different forms, but it's really the same," said Pierre Cartier, a mathematician at the Institute of Advanced Scientific Studies outside Paris and a former colleague of Grothendieck's.

Something that comes again and again in different forms but is really the same thing... That sounds like a design pattern. Software patterns are quite different than numeric periods in algebra and geometry, but the idea feels familiar. The analogy to music feels familiar, too.

The article then says:

Motives are in a sense the fundamental building blocks of polynomial equations, in the same way that prime factors are the elemental pieces of larger numbers.

This is how I have often thought of elementary design patterns in software: as the elemental particles out of which all software is constructed. I'd like to think these thoughts again more actively in my programming languages course, when my students and I begin to learn and do functional programming.


Posted by Eugene Wallingford | Permalink | Categories: Patterns

September 28, 2016 3:16 PM

Language, and What It's Like To Be A Bat

My recent post about the two languages resonated in my mind with an article I finished reading the day I wrote the post: Two Heads, about the philosophers Paul and Pat Churchland. The Churchlands have been on a forty-year quest to change the language we use to describe our minds, from popular terms based in intuition and introspection to terms based in the language of neuroscience. Changing language is hard under any circumstances, and it is made harder when the science they need is still in its infancy. Besides, maybe more traditional philosophers are right and we need our traditional vocabulary to make sense of what it feels like to be human?

The New Yorker article closes with these paragraphs, which sound as if they are part of a proposal for a science fiction novel:

Sometimes Paul likes to imagine a world in which language has disappeared altogether. We know that the two hemispheres of the brain can function separately but communicate silently through the corpus callosum, he reasons. Presumably, it will be possible, someday, for two separate brains to be linked artificially in a similar way and to exchange thoughts infinitely faster and more clearly than they can now through the muddled, custom-clotted, serially processed medium of speech. He already talks about himself and Pat as two hemispheres of the same brain. Who knows, he thinks, maybe in his children's lifetime this sort of talk will not be just a metaphor.

If, someday, two brains could be joined, what would be the result? A two-selved mutant like Joe-Jim, really just a drastic version of Siamese twins, or something subtler, like one brain only more so, the pathways from one set of neurons to another fusing over time into complex and unprecedented arrangements? Would it work only with similar brains, already sympathetic, or, at least, both human? Or might a human someday be joined to an animal, blending together two forms of thinking as well as two heads? If so, a philosopher might after all come to know what it is like to be a bat, although, since bats can't speak, perhaps he would be able only to sense its batness without being able to describe it.

(Joe-Jim is a character from a science fiction novel, Robert Heinlein's Orphans of the Sky.)

What a fascinating bit of speculation! Can anyone really wonder why kids are drawn to science fiction?

Let me add my own speculation to the mix: If we do ever find a way to figure out what it's like to be a bat, people will find a way to describe what it's like to be a bat. They will create the language they need. Making language is what we do.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns

August 21, 2016 10:23 AM

Parnas and Software Patterns

Earlier this week, I tweeted a paraphrase of David Parnas that a few people liked:

Parnas observes: "Professional Engineers rarely use 'engineer' as a verb." And it's not because they are busy 'architecting' systems.

The paraphrase comes from Parnas's On ICSE's "Most Influential" Papers, which appears in the July 1995 issue of ACM SIGSOFT's Software Engineering Notes. He wrote that paper in conjunction with his acceptance speech on receiving the Most Influential Paper award at ICSE 17. It's a good read on how the software engineering research community's influence was, at least at that time, limited to other researchers. Parnas asserts that researchers should strive to influence practitioners, the people who are actually writing software.

Why doesn't software engineering research influence practitioners? It's simple:

Computer professionals do not read our literature because it does not help them to do their job better.

In a section called "We are talking to ourselves", Parnas explains why the software engineering literature fails to connect with people who write software:

Most of our papers are written in the tradition of science, not engineering. We write up the results of our research for readers who are also researchers, not for practitioners. We base our papers on previous work done by other researchers, and often ignore current practical problems. In many other engineering fields, the results of research are reported in "How to" papers. Our papers are "This is what I did" papers. We describe our work, how our work differs from other people's work, what we will do next, etc. This is quite appropriate for papers directed to other researchers, but it is not what is needed in papers that are intended to influence practitioners. Practitioners are far more interested in being told the basic assumptions behind the work, than in knowing how this work differs from the work by the author's cohorts in the research game.

Around the time Parnas wrote this article and gave his acceptance speech at ICSE 17, the Pattern Languages of Programs conferences were taking off, with a similar motivation: to create a literature by and for software practitioners. Patterns describe how to create programs in practical terms. They describe techniques, but also the context in which they work, the forces that make them more and less applicable, and the patterns you can use to address the issues that arise after you apply the technique. The community encouraged writing in a straightforward style, using the vocabulary of professional developers.

At the early PLoP conferences, you could feel the tension between practitioners and academics, some of which grew out of the academic style of writing and the traditions of the scientific literature. I had to learn a lot about how to write for an audience of software developers. Fortunately, the people in the PLoP community took the time to help me get better. I have fond memories of receiving feedback from Frank Buschmann, Peter Sommerlad, Ralph Johnson, Ward Cunningham, Kyle Brown, Ken Auer, and many others. The feedback wasn't always easy to internalize -- it's hard to change! -- but it was necessary.

I'm not sure that an appreciably larger number of academics in software engineering, and in computer science more broadly, write for the wider community of software practitioners these days than when Parnas made his remarks. There are certainly more venues available to us: patterns-related conferences, separate tracks at conferences like SPLASH, and blogs. Unfortunately, the academic reward structure isn't always friendly to this kind of writing, especially early in one's career. Some universities have begun to open up their definition of scholarship, though, which creates new opportunities.

At their best, software patterns are exactly what Parnas calls for: creating a how-to literature aimed at practitioners. Researchers and practitioners can both participate in this endeavor.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

July 28, 2016 2:37 PM

Functional Programming, Inlined Code, and a Programming Challenge

an example of the cover art for the Commander Keen series of games

I recently came across an old entry on Jonathan Blow's blog called John Carmack on Inlined Code. The bulk of the entry consists of an even older email message that Carmack, lead programmer on video games such as Doom and Quake, sent to a mailing list, encouraging developers to consider inlining function calls as a matter of style. This email message is the earliest explanation I've seen of Carmack's drift toward functional programming, seeking to gain as many of its benefits as possible even in the harshly real-time environment of game programming.

The article is a great read, with much advice borne in the trenches of writing and testing large programs whose run-time performance is key to their success. Some of the ideas involve programming language:

It would be kind of nice if C had a "functional" keyword to enforce no global references.

... while others are more about design style:

The function that is least likely to cause a problem is one that doesn't exist, which is the benefit of inlining it.

... and still others remind us to rely on good tools to help avoid inevitable human error:

I now strongly encourage explicit loops for everything, and hope the compiler unrolls it properly.

(This one may come in handy as I prepare to teach my compiler course again this fall.)

This message-within-a-blog-entry itself quotes another email message, by Henry Spencer, which contains the seeds of a programming challenge. Spencer described a piece of flight software written in a particularly limiting style:

It disallowed both subroutine calls and backward branches, except for the one at the bottom of the main loop. Control flow went forward only. Sometimes one piece of code had to leave a note for a later piece telling it what to do, but this worked out well for testing: all data was allocated statically, and monitoring those variables gave a clear picture of most everything the software was doing.

Wow: one big loop, within which all control flows forward. To me, this sounds like a devilish challenge to take on when writing even a moderately complex program like a scanner or parser, which generally contains many loops within loops. In this regard, it reminds me of the Polymorphism Challenge's prohibition of if-statements and other forms of selection in code. The goal of that challenge was to help programmers really grok how the use of substitutable objects can lead to an entirely different style of program than we tend to create with traditional procedural programming.
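To get a feel for the constraint, here is a toy sketch of my own devising, certainly not Spencer's flight software: all data is allocated statically up front, control flows forward only, and the lone backward branch sits at the bottom of the main loop:

    # one big loop: earlier steps leave notes in static state for later steps to read
    state = {"tick": 0, "alarm": False}

    while state["tick"] < 5:                   # the single backward branch
        if state["tick"] == 3:
            state["alarm"] = True              # leave a note for a later piece of code
        if state["alarm"]:
            print("handling alarm at tick", state["tick"])
        state["tick"] += 1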

Even though Carmack knew that "a great deal of stuff that goes on in the aerospace industry should not be emulated by anyone, and is often self destructive", he thought that this idea might have practical value, so he tried it out. The experience helped him evolve his programming style in a promising direction. This is a great example of the power of the pedagogical pattern known as Three Bears: take an idea to its extreme in order to learn the boundaries of its utility. Sometimes, you will find that those boundaries lie beyond what you originally thought.

Carmack's whole article is worth a read. Thanks to Jonathan Blow for preserving it for us.

~~~~

The image above is an example of the cover art for the "Commander Keen" series of video games, courtesy of Wikipedia. John Carmack was also the lead programmer for this series. What a remarkable oeuvre he has produced.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

June 13, 2016 2:47 PM

Patterns Are Obvious When You Know To Look For Them...

normalized counts of consecutive 8-digit primes (mod 31)

In Prime After Prime (great title!), Brian Hayes boils the fundamental challenge that faces people doing research down to two sentences:

What I find most surprising about the discovery is that no one noticed these patterns long ago. They are certainly conspicuous enough once you know how to look for them.

It would be so much easier to form hypotheses and run tests if interesting hypotheses were easier to find.

Once found, though, we can all see patterns. When they can be computed, we can all write programs to generate them! After reading a paper about the strong correlations among pairs of consecutive prime numbers, Hayes wrote a bunch of programs to visualize the patterns and to see what other patterns he might find. A lot of mathematicians did the same.

Evidently that was a common reaction. Evelyn Lamb, writing in Nature, quotes Soundararajan: "Every single person we've told this ends up writing their own computer program to check it for themselves."

Being able to program means being able to experiment with all kinds of phenomena, even those that seemingly took genius to discover in the first place.
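
Reproducing the basic observation takes only a few lines. Here is a minimal sketch in Python -- mine, not Hayes's code -- that tallies the final digits of consecutive primes:

    from collections import Counter

    def primes_up_to(n):
        """Generate the primes up to n with a simple sieve of Eratosthenes."""
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                for j in range(i * i, n + 1, i):
                    sieve[j] = False
        return [i for i, is_prime in enumerate(sieve) if is_prime]

    # Count how often a prime ending in digit d is followed by one ending in e.
    primes = primes_up_to(1_000_000)
    pairs = Counter((p % 10, q % 10) for p, q in zip(primes, primes[1:]))

    for (d, e), count in sorted(pairs.items()):
        print(f"{d} -> {e}: {count}")

The counts are strikingly non-uniform: for example, a prime ending in 1 is followed by another prime ending in 1 much less often than chance would suggest. From there, changing the modulus or rendering the counts as a heat map is a small step.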

Actually, though, Hayes's article gives a tour of the kind of thinking any of us can do to reach new insights. Once he had implemented some basic ideas from the research paper, he let his imagination roam. He tried different moduli. He visualized the data using heat maps. When he noticed some symmetries in his tables, he applied a cyclic shift to the data (which he termed a "twist") to see if some patterns were easier to identify in the new form.

Being curious and asking questions like these is one of the ways that researchers manage to stumble upon new patterns that no one has noticed before. Genius may be one way to make great discoveries, but it's not a reliable one for those of us who aren't geniuses. Exploring variations on a theme is a tactic we mortals can use.

Some of the heat maps that Hayes generates are quite beautiful. The image above is a heat map of the normalized counts of consecutive eight-digit primes, taken modulo 31. He has more fun making images of his twists and experimenting with other kinds of primes. I recommend reading the entire article, for its math, for its art, and as an implicit narration of how a computational scientist approaches a cool result.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

March 18, 2016 9:58 AM

Thoughts for Programmers from "Stay on the Bus"

Somehow, I recently came across a link to Stay on the Bus, an excerpt from a commencement speech Arno Rafael Minkkinen gave at the New England School of Photography in June 2004. It is also titled "The Helsinki Bus Station Theory: Finding Your Own Vision in Photography". I almost always enjoy reading the thoughts of an artist on his or her own art, and this was no exception. I also usually hear echoes of what I feel about my own arts and avocations. Here are three.

What happens inside your mind can happen inside a camera.

This is one of the great things about any art. What happens inside your mind can happen in a piano, on a canvas, or in a poem. When people find the art that channels their mind best, beautiful things -- and lives -- can happen.

One of the things I like about programming is that it is really a meta-art. Whatever happens in your mind can happen inside a camera, inside a piano, on a canvas, or in a poem. Yet whatever happens inside a camera, inside a piano, or on a canvas can happen inside a computer, in the form of a program. Computing is a new medium for experimenting, simulating, and creating.

Teachers who say, "Oh, it's just student work," should maybe think twice about teaching.

Amen. There is no such thing as student work. It's the work our students are ready to make at a particular moment in time. My students are thinking interesting thoughts and doing their best to make something that reflects those thoughts. Each program, each course in their study is a step along a path.

All work is student work. It's just that some of us students are at a different point along our paths.

Georges Braque has said that out of limited means, new forms emerge. I say, we find out what we will do by knowing what we will not do.

This made me think of an entry I wrote many years ago, Patterns as a Source of Freedom. Artists sometimes understand better than programmers that subordinating to a form does not limit creativity; it unleashes it. I love Minkkinen's way of saying this: we find out what we will do by knowing what we will not do. In programming as in art, it is important to ask oneself, "What will I not do?" This is how we discover what we will do, what we can do, and even what we must do.

Those are a few of the ideas that stood out to me as I read Minkkinen's address. The Helsinki Bus Station Theory is a useful story, too.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

January 28, 2016 2:56 PM

Remarkable Paragraphs: "Everything Is Computation"

Edge.org's 2016 question for sophisticated minds is, What do you consider the most interesting recent [scientific] news? What makes it important? Joscha Bach's answer is: Everything is computation. Read his essay, which contains some remarkable passages.

Computation changes our idea of knowledge: instead of treating it as justified true belief, knowledge describes a local minimum in capturing regularities between observables.

Epistemology was one of my two favorite courses in grad school (cognitive psych was the other), and "justified true belief" was the starting point for many interesting ideas of what constitutes knowledge. I don't see Bach's formulation as a replacement for justified true belief as a starting point, but rather as a specification of what beliefs are most justified in a given context. Still, Bach's way of using computation in such a concrete way to define "knowledge" is marvelous.

Knowledge is almost never static, but progressing on a gradient through a state space of possible world views. We will no longer aspire to teach our children the truth, because like us, they will never stop changing their minds. We will teach them how to productively change their minds, how to explore the never ending land of insight.

Knowledge is a never-ending process of refactoring. The phrase "how to productively change their minds" reminds me of Jon Udell's recent blog post on liminal thinking at scale. From the perspective that knowledge is a function, "changing one's mind intelligently" is the dynamic computational process that keeps the mind at a local minimum.

A growing number of physicists understand that the universe is not mathematical, but computational, and physics is in the business of finding an algorithm that can reproduce our observations. The switch from uncomputable, mathematical notions (such as continuous space) makes progress possible. Climate science, molecular genetics, and AI are computational sciences. Sociology, psychology, and neuroscience are not: they still seem to be confused by the apparent dichotomy between mechanism (rigid, moving parts) and the objects of their study. They are looking for social, behavioral, chemical, neural regularities, where they should be looking for computational ones.

This is a strong claim, and one I'm sympathetic with. However, I think that the apparent distinction between the computational sciences and the non-computational ones is a matter of time, not a difference in kind. It wasn't that long ago that most physicists thought of the universe in mathematical terms, not computational ones. I suspect that with a little more time, the orientation in other disciplines will begin to shift. Neuroscience and psychology are positioned well for such a phase shift.

In any case, Bach's response points our attention in a direction that has the potential to re-define every problem we try to solve. This may seem unthinkable to many, though perhaps not to computer scientists, especially those of us with an AI bent.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

September 22, 2015 2:57 PM

"Good Character" as an Instance of Postel's Law

Mike Feathers draws an analogy I'd never thought of before in The Universality of Postel's Law: what we think of as "good character" can be thought of as an application of Postel's Law to ordinary human relations.

Societies often have the notion of 'good character'. We can attempt all sorts of definitions but at its core, isn't good character just having tolerance for the foibles of others and being a person people can count on? Accepting wider variation at input and producing less variation at output? In systems terms that puts more work on the people who have that quality -- they have to have enough control to avoid 'going off' on people when others 'go off on them', but they get the benefit of being someone people want to connect with. I argue that those same dynamics occur in physical systems and software systems that have the Postel property.

These days, most people talk about Postel's Law as a social law, and criticisms of it even in software design refer to it as creating moral hazards for designers. But Postel coined this "principle of robustness" as a way to talk about implementing TCP, and most references I see to it now relate to HTML and web browsers. I think it's pretty cool when a software design principle applies more broadly in the design world, or can even be useful for understanding human behavior far removed from computing. That's the sign of a valuable pattern -- or anti-pattern.
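
In code, the principle might look something like this minimal sketch -- my example, not Feathers's: be liberal in the formats you accept, and conservative in the single format you emit.

    from datetime import datetime

    # Formats we are willing to accept (liberal in what we accept).
    ACCEPTED_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"]

    def normalize_date(text):
        """Accept several common date formats; always emit ISO 8601."""
        for fmt in ACCEPTED_FORMATS:
            try:
                # Emit one canonical form (conservative in what we send).
                return datetime.strptime(text.strip(), fmt).strftime("%Y-%m-%d")
            except ValueError:
                pass
        raise ValueError(f"unrecognized date: {text!r}")

    print(normalize_date("09/22/2015"))     # -> 2015-09-22
    print(normalize_date(" 22 Sep 2015 "))  # -> 2015-09-22

As with good character, the tolerant party does the extra work.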


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns, Software Development

September 19, 2015 11:56 AM

Software Gets Easier to Consume Faster Than It Gets Easier to Make

In What Is the Business of Literature?, Richard Nash tells a story about how the ideas underlying writing, books, and publishing have evolved over the centuries, shaped by the desires of both creators and merchants. One of the key points is that technological innovation has generally had a far greater effect on the ability to consume literature than on the ability to create it.

But books are just one example of this phenomenon. It is, in fact, a pattern:

For the most part, however, the technical and business-model innovations in literature were one-sided, far better at supplying the means to read a book than to write one. ...

... This was by no means unique to books. The world has also become better at allowing people to buy a desk than to make a desk. In fact, from medieval to modern times, it has become easier to buy food than to make it; to buy clothes than to make them; to obtain legal advice than to know the law; to receive medical care than to actually stitch a wound.

One of the neat things about the last twenty years has been the relatively rapid increase in the ability of ordinary people to write and disseminate creative works. But an imbalance remains.

Over a shorter time scale, this one-sidedness has been true of software as well. The fifty or sixty years of the Software Era have given us seismic changes in the availability, ubiquity, and backgrounding of software. People often overuse the word 'revolution', but these changes really have had an immense effect in how and when almost everyone uses software in their lives.

Yet creating software remains relatively difficult. The evolution of our tools for writing programs hasn't kept pace with the evolution in platforms for using them. Neither has the growth in our knowledge of how to make great software.

There is, of course, a movement these days to teach more people how to program and to support other people who want to learn on their own. I think it's wonderful to open doors so that more people have the opportunity to make things. I'm curious to see if the current momentum bears fruit or is merely a fad in a world that goes through fashions faster than we can comprehend them. It's easier still to toss out a fashion that turns out to require a fair bit of work.

Writing software is still a challenge. Our technologies have not changed that fact. But this is also true, as Nash reminds us, of writing books, making furniture, and a host of other creative activities. He also reminds us that there is hope:

What we see again and again in our society is that people do not need to be encouraged to create, only that businesses want methods by which they can minimize the risk of investing in the creation.

The urge to make things is there. Give people the resources they need -- tools, knowledge, and, most of all, time -- and they will create. Maybe one of the new programmers can help us make better tools for making software, or lead us to new knowledge.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns, Software Development

August 04, 2015 1:00 PM

Concrete, Then Abstract

One of the things that ten years teaching the same topic has taught Daniel Lemire is that students generally learn more effectively when they learn practical skills first and only then confront the underlying theory:

Though I am probably biased, I find that it is a lot harder to take students from a theoretical understanding to a practical one... than to take someone with practical skills and teach him the theory. My instinct is that most people can more easily acquire an in-depth practical knowledge through practice (since the content is relevant) and they then can build on this knowledge to acquire the theory.

He summarizes the lesson he learned as:

A good example, well understood, is worth a hundred theorems.

My years of teaching have taught me similar lessons. I described a related idea in Examples First, Names Last: showing students examples of an idea before giving it a name.

Lemire's experience teaching XML and my experience teaching a number of topics, including the object-oriented programming example in that blog post, are specific examples of a pattern I usually call Concrete, Then Abstract. I have found this to be an effective strategy in my teaching and writing. I may have picked up the name from Ralph Johnson at ChiliPLoP 2003, where we were part of a hot topic group sketching programming patterns for beginning programmers. Ralph is a big proponent of showing concrete examples before introducing abstract ideas. You can see that in just about every pattern, paper, and book he has written.

My favorite example of "Concrete, Then Abstract" this week is in an old blog entry by Scott Vokes, Making Diagrams with Graphviz. I recently came back to an idea I've had on hold for a while: using Graphviz to generate a diagram showing all of my department's courses and prerequisites. Whenever I return to Graphviz after time away, I bypass its documentation for a while and pull up instead a cached link to Scott's short introduction. I immediately scroll down to this sample program written in Graphviz's language, DOT:

an example program in Graphviz's DOT language

... and the corresponding diagram produced by Graphviz:

an example diagram produced by Graphviz

This example makes me happy, and productive quickly. It demonstrates an assortment of the possibilities available in DOT, including several specific attributes, and shows how they are rendered by Graphviz. With this example as a starting point, I can experiment with variations of my own. If I ever want or need more, I dig deeper and review the grammar of DOT in more detail. By that time, I have a pretty good practical understanding of how the language works, which makes remembering how the grammar works easier.
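
In that spirit, here is the sort of starting point I have in mind for my prerequisites diagram: a minimal sketch in Python, with hypothetical course names, that emits a DOT file for Graphviz to render:

    # Hypothetical (prerequisite, course) pairs -- not our actual catalog.
    PREREQS = [
        ("Intro to Computing", "Data Structures"),
        ("Data Structures", "Algorithms"),
        ("Data Structures", "Programming Languages"),
        ("Programming Languages", "Compilers"),
    ]

    # Write a DOT file, then render it with:  dot -Tpng prereqs.dot -o prereqs.png
    with open("prereqs.dot", "w") as dot:
        dot.write("digraph prereqs {\n")
        dot.write("  rankdir=LR;\n")        # lay out prerequisites left to right
        dot.write("  node [shape=box];\n")  # draw each course as a box
        for prereq, course in PREREQS:
            dot.write(f'  "{prereq}" -> "{course}";\n')
        dot.write("}\n")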

Sometimes, the abstract idea to learn, or re-learn, is a context-free grammar. Sometimes, it's a rule for solving a class of problems or a design pattern. And sometimes, it's a theorem or a theory. In all these cases, examples provide hooks that help us learn an abstract idea that is initially hard for us to hold in our heads.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

June 30, 2015 2:39 PM

Software Patterns Are Still For Humans

I recently found myself reading a few of Gregor Hohpe's blog posts and came across Design Patterns: More Than Meets The Eye. In it, Hohpe repeats a message that needs to be repeated every so often even now, twenty years after the publication of the GoF book: software patterns are fundamentally about human communication:

The primary purpose of patterns is to help humans understand design problems. Solving design problems is generally much more difficult than typing in some code, so patterns have enormous value in themselves. Patterns owe their popularity to this value. A better hammer can help speed up the construction of a bed, but a pattern helps us place the bed in a way that makes us feel comfortable.

The last sentence of that paragraph is marvelous.

Hohpe published that piece five and a half years ago. People who write or teach software patterns will find themselves telling a similar story and answering questions like the ones that motivated his post all the time. Earlier this year, Avdi Grimm wrote a like-minded piece, Patterns are for People, in which he took his shot at dispelling misunderstandings from colleagues and friends:

There's a meme, originating from certain corners of the Functional side of programming, that "patterns are a language smell". The implication being that "good" languages either already encode the patterns as language features, or they provide the tools to extend the language such that modeling the pattern explicitly isn't needed.

This misses the point on rather a lot of levels.

Design patterns that are akin to hammers for making better code are plentiful and often quite helpful. But we need more software patterns that help us place our beds in ways that increase human comfort.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

March 24, 2015 3:40 PM

Some Thoughts on How to Teach Programming Better

In How Stephen King Teaches Writing, Jessica Lahey asks Stephen King why we should teach grammar:

Lahey: You write, "One either absorbs the grammatical principles of one's native language in conversation and in reading or one does not." If this is true, why teach grammar in school at all? Why bother to name the parts?

King: When we name the parts, we take away the mystery and turn writing into a problem that can be solved. I used to tell them that if you could put together a model car or assemble a piece of furniture from directions, you could write a sentence. Reading is the key, though. A kid who grows up hearing "It don't matter to me" can only learn doesn't if he/she reads it over and over again.

There are at least three nice ideas in King's answer.

  • It is helpful to beginners when we can turn writing into a problem that can be solved. Making concrete things out of ideas in our heads is hard. When we give students tools and techniques that help them to create basic sentences, paragraphs, and stories, we make the process of creating a bit more concrete and a bit less scary.

  • A first step in this direction is to give names to the things and ideas students need to think about when writing. We don't want students to memorize the names for their own sake; that's a step in the wrong direction. We simply need to have words for talking about the things we need to talk about -- and think about.

  • Reading is, as the old slogan tells us, fundamental. It helps to build knowledge of vocabulary, grammar, usage, and style in a way that the brain absorbs naturally. It creates habits of thought that are hard to undo later.

All of these are true of teaching programmers, too, in their own way.

  • We need ways to demystify the process and give students concrete steps they can take when they encounter a new problem. The design recipe used in the How to Design Programs approach is a great example. Naming recipes and their steps makes them a part of the vocabulary teachers and students can use to make programming a repeatable, reliable process.

  • I've often had great success by giving names to design and implementation patterns, and having those patterns become part of the vocabulary we use to discuss problems and solutions. I have a category for posts about patterns, and a fair number of those relate to teaching beginners. I wish there were more.

  • Finally, while it may not be practical to have students read a hundred programs before writing their first, we should not underestimate the importance of students reading code in parallel with learning to write code. Reading lots of good examples is a great way for students to absorb ideas about how to write their own code. It also gives them the raw material they need to ask questions. I've long thought that Clancy's and Linn's work on case studies of programming deserves more attention.

Finding ways to integrate design recipes, patterns, and case studies is an area I'd like to explore more in my own teaching.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 11, 2015 4:15 PM

If Design is Important, Do It All The Time

And you don't have to be in software. In Jonathan Ive and the Future of Apple, Ian Parker describes how the process of developing products at Apple has changed during Ive's tenure.

... design had been "a vertical stripe in the chain of events" in a product's delivery; at Apple, it became "a long horizontal stripe, where design is part of every conversation." This cleared a path for other designers.

By the time the iPhone launched, Ive had become "the hub of the wheel".

The vertical stripe/horizontal stripe image brought to mind Kent Beck's reimagining of the software development cycle in XP. I was thinking the image in my head came from Extreme Programming Explained, but the closest thing to my memory I can find is in his IEEE Computer article, Embracing Change with Extreme Programming:

if it's important, do it all the time

My mental image has time on the x-axis, though, which meshes better with the vertical/horizontal metaphor of Robert Brunner, the designer quoted in the passage above.

design as a thin vertical stripe versus design as a long horizontal stripe

If analysis is important, do it all the time. If design is important, do it all the time. If implementation is important, do it all the time. If testing is important, do it all the time.

Ive and his team have shown that there is value in making design an ongoing part of the process for developing hardware products, too, where "design is part of every conversation". This kind of thinking is not just for software any more.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

February 27, 2015 3:37 PM

Bad Habits and Haphazard Design

With an expressive type system for its teaching
languages, HtDP could avoid this problem to some
extent, but adding such rich types would also take
the fun out of programming.

As we approach the midpoint of the semester, Matthias Felleisen's Turing Is Useless strikes a chord in me. My students have spent the last two months learning a little Racket, a little functional programming, and a little about how to write data-driven recursive programs. Yet bad habits learned in their previous courses, or at least unchecked by what they learned there, have made the task harder for many of them than it needed to be.

The essay's title plays off the Church-Turing thesis, which asserts that all programming languages have the same expressive power. This powerful claim is not good news for students who are learning to program, though:

Pragmatically speaking, the thesis is completely useless at best -- because it provides no guideline whatsoever as to how to construct programs -- and misleading at worst -- because it suggests any program is a good program.

With a Turing-universal language, a clever student can find a way to solve any problem with some program. Even uninspired but persistent students can tinker their way to a program that produces the right answers. Unfortunately, they don't understand that the right answers aren't the point; the right program is. Trolling StackOverflow will get them a program, but too often the students don't understand whether it is a good or bad program in their current situation. It just works.

I have not been as faithful to the HtDP approach this semester as I probably should have been, but I share its desire to help students to design programs systematically. We have looked at design patterns that implement specific strategies, not language features. Each strategy focuses on the definition of the data being processed and the definition of the value being produced. This has great value for me as the instructor, because I can usually see right away why a function isn't working for the student the way he or she intended: they have strayed from the data as defined by the problem.
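
To make that idea concrete, here is a minimal sketch -- in Python rather than Racket -- of what I mean. A list is either empty or an item in front of a smaller list, so a function over lists has one case for each:

    def total(numbers):
        """The shape of the code follows the shape of the data."""
        if not numbers:                         # case 1: the empty list
            return 0
        return numbers[0] + total(numbers[1:])  # case 2: first item plus the rest

    print(total([1, 2, 3, 4]))   # 10

When a student's function strays from this shape, it is usually because they have strayed from the definition of the data.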

This is also of great value to some of my students. They want to learn how to program in a reliable way, and having tools that guide their thinking is more important than finding yet another primitive Racket procedure to try. For others, though, "garage programming" is good enough; they just want to get the job done right now, regardless of which muscles they use. Design is not part of their attitude, and that's a hard habit to break. How use doth breed a habit in a student!

Last semester, I taught intro CS from what Felleisen calls a traditional text. Coupling that experience with my experience so far this semester, I'm thinking a lot these days about how we can help students develop a design-centered attitude at the outset of their undergrad courses. I have several blog entries in draft form about last semester, but one thing that stands out is the extent to which every step in the instruction is driven by the next cool programming construct. Put them all on the table, fiddle around for a while, and you'll make something that works. One conclusion we can draw from the Church-Turing thesis is that this isn't surprising. Unfortunately, odds are that any program created this way is not a very good program.

~~~~~

(The sentence near the end that sounds like Shakespeare is. It's from The Two Gentlemen of Verona, with a suitable change in noun.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

February 24, 2015 3:03 PM

Why Can't I Use flatten on This Assignment?

The team is in the weight room. Your coach wants you to work on your upper-body strength and so has assigned you a regimen that includes bench presses and exercises for your lats and delts. You are on the bench, struggling.

"Hey, coach, this is pretty hard. Can I use my legs to help lift this weight?"

The coach shakes his head and wonders what he is going to do with you.

Using your legs is precisely not the point. You need to make your other muscles stronger. Stick to the bench.

~~~~

If athletics isn't your thing, we can tell this story with a piano. Or a pen and paper. Or a chessboard.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

January 23, 2015 3:35 PM

Agile Design and Jazz

Anyone interested in thinking about how programmers can design software without mapping its structure out in advance should read Ted Gioia's Jazz: The Aesthetics of Imperfection, which appeared in the Winter 1987 issue of The Hudson Review (Volume 39, Number 4, pages 585-600). It explores in some depth the ways in which jazz, which relies heavily on spur-of-the-moment improvisation and thus embraces imperfection, can still produce musical structure worthy of the term "art".

Gioia's contrast of producing musical form via the blueprint method and the retrospective method will resonate with anyone who has grown a large software system via small additions to, and refactorings of, an evolving code base. This paragraph brings to mind the idea of selecting a first test to pass at random:

Some may feel that the blueprint method is the only method by which an artist can adhere to form. But I believe this judgment to be quite wrong. We can imagine the artist beginning his work with an almost random maneuver, and then adapting his later moves to this initial gambit. For example, the musical improviser may begin his solo with a descending five-note phrase and then see, as he proceeds, that he can use this same five-note phrase in other contexts in the course of his improvisation.

Software is different from jazz performance in at least one way that makes the notion of retrospective form even more compelling. In jazz, the most valuable currency is live performance, and each performance is a new creation. We see something similar in the code kata, where each implementation starts from scratch. But unlike jazz, software can evolve over time. When we nurture the same code base over time, we can both incorporate new features and eliminate imperfections from previous iterations. In this way, software developers can create retrospective designs that both benefit from improvisation and reach stable architectures.

Another interesting connection crossed my mind as I read about the role that recording technology played in the development of jazz. With the invention of the phonograph:

... for the first time sounds could be recorded with the same precision that books achieved in recording words. Few realize how important the existence of the phonograph was to the development of improvised music. Hitherto, the only method of preserving musical ideas was through notation, and here the cumbersome task of writing down parts made any significant preservation of improvisations unfeasible. But with the development of the phonograph, improvised music could take root and develop; improvising musicians who lived thousands of miles apart could keep track of each other's development, and even influence each other without ever having met.

Software has long been a written form, but in the last two decades we have seen an explosion of ways in which programs could be recorded and shared with others. The internet and the web enabled tools such as SourceForge and GitHub, which in turn enabled the growth of communities dedicated to the creation and nurturing of open source software. The software so nurtured has often been the product of many people, created through thousands of improvisations by programmers living in all corners of the world. New programmers come to these repositories, the record stores of our world, and are able to learn from masters by studying their moves and their creations. They are then able to make their own contributions to existing projects, and to create new projects of their own.

As Gioia says of jazz, this is not to make the absurd claim that agile software did not exist before it was recorded and shared in this way, but the web and the public repository have had profound impacts on the way software is created. The retrospective form espoused by agile software design methods, the jazz of our industry, has been one valuable result.

Check out Gioia's article. It repaid my investment with worthy connections. If nothing else, it taught me a lot about jazz and music criticism.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

September 19, 2014 3:34 PM

Ask Yourself, "What is the Pattern?"

I ran across this paragraph in an essay about things you really need to learn in college:

Indeed, you should view the study of mathematics, history, science, and mechanics as the study of archetypes, basic patterns that you will recognize over and over. But this means that, when you study these disciplines, you should be asking, "what is the pattern" (and not merely "what are the facts"). And asking this question will actually make these disciplines easier to learn.

Even in our intro course, I try to help students develop this habit. Rather than spending all of our time looking at syntax and a laundry list of language features, I am introducing them to some of the most basic code patterns, structures they will encounter repeatedly as they solve problems at this level. In week one came Input-Process-Output. Then after learning basic control structures, we encountered guarded actions, range tests, running totals, sentinel loops, and "loop and a half". We encounter these patterns in the process of solving problems.
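
To see how small these patterns are, consider a minimal sketch in Python -- my example here, not code from the course -- that composes three of them: a sentinel loop, a running total, and a guarded action:

    total = 0
    count = 0

    score = int(input("score (negative to quit): "))   # priming read
    while score >= 0:                                  # sentinel loop
        total += score                                 # running total
        count += 1
        score = int(input("score (negative to quit): "))

    if count > 0:                                      # guarded action
        print("average:", total / count)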

While they are quite low-level, they are not merely idioms. They are patterns every bit as much as patterns at the level of the Gang of Four or PoSA. They solve common problems, recur in many forms, and are subject to trade-offs that depend on the specific problem instance.

They compose nicely to create larger programs. One of my goals for next week is to have students solve new problems that allow them to assemble programs from ideas they have already seen. No new syntax or language features, just new problems.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 07, 2014 3:39 PM

Thinking in Types, and Good Design

Several people have recommended Pat Brisbin's Thinking in Types for programmers with experience in dynamically-typed languages who are looking to grok Haskell-style typing. He wrote it after helping one of his colleagues get unstuck with a program that "seemed conceptually simple but resulted in a type error" in Haskell when implemented in a way similar to a solution in a language such as Python or Ruby.

This topic is of current interest to me at a somewhat higher level. Few of our undergrads have a chance to program in Haskell as a part of their coursework, though a good number of them learn Scala while working at a local financial tech company. However, about two-thirds of our undergrads now start with one or two semesters of Python, and types are something of a mystery to them. This affects their learning of Java and colors how they think about types if they take my course on programming languages.

So I read this paper. I have two comments.

First, let me say that I agree with my friends and colleagues who are recommending this paper. It is a clear, concise, and well-written description of how to use Haskell's types to think about a problem. It uses examples concrete enough that even our undergrads could implement them with a little help. I may use this as a reading in my languages course next spring.

Second, I think this paper does more than simply teach people about types in a Haskell-like language. It also gives a great example of how thinking about types can help programmers create better designs for their programs, even if they are working in an object-oriented language! Further, it hits right at the heart of the problem we face these days, with students who are used to working in scripting languages that provide high-level but very generic data structures.

The problem that Brisbin addresses happens after he helps his buddy create a type class and two instances of it, and they reach this code:

    renderAll [ball, playerOne, playerTwo]

renderAll takes a list of values that are Render-able. Unfortunately, in this case, the values in the list come from two different types... and Haskell does not allow heterogeneous lists. We could try to work around this feature of Haskell and "make it fit", but as Brisbin points out, doing so would cause us to lose the advantages of using Haskell in the first place. The compiler wouldn't be able to find errors in the code.

The Haskell way to solve the problem is to replace the generic list of stuff we pass to renderAll with a new type. With a new Game type that composes a ball with two players, we are able to achieve several advantages at once:

  • create a polymorphic render method for Game that passes muster with the type checker
  • allow the type checker to ensure that this element of our program is correct
  • make the program easier to extend in a type-safe way
  • and, perhaps most importantly, express the intent of the program more clearly

It's this last win that jumped off the page for me. Creating a Game class would give us a better object-oriented design in his colleague's native language, too!

Students who learn to program in languages like Python and Ruby often become accustomed to using untyped lists, arrays, hashes, and tuples as their go-to collections. They are oh so handy, often the quickest route to a program that works on the small examples at hand. But those very handy data structures promote sloppy design, or at least enable it; they make it easy not to see very basic objects living in the code.

Who needs a Game class when a Python list or Ruby array works out of the box? I'll tell you: you do, as soon as you try to do almost anything else in your program. Otherwise, you begin working around the generality of the list or array, writing code to handle special cases that really aren't special cases at all. They are simply unbundled objects running wild in the program.
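
Here is a minimal sketch of the same move in Python -- my example, patterned after Brisbin's, not taken from his paper. The Game object was hiding in the generic list all along:

    class Ball:
        def render(self):
            print("rendering the ball")

    class Player:
        def __init__(self, name):
            self.name = name

        def render(self):
            print("rendering player", self.name)

    class Game:
        """The object that was hiding in [ball, player_one, player_two]."""
        def __init__(self, ball, player_one, player_two):
            self.parts = [ball, player_one, player_two]

        def render(self):
            for part in self.parts:
                part.render()

    Game(Ball(), Player("one"), Player("two")).render()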

Good design is good design. Most of the features of a good design transcend any particular programming style or language.

So: This paper is a great read! You can use it to learn better how to think like a Haskell programmer. And you can use it to learn even if thinking like a Haskell programmer is not your goal. I'm going to use it, or something like it, to help my students become better OO programmers.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

April 21, 2014 2:45 PM

The Special Case Object Pattern in "Confident Ruby"

I haven't had a chance to pick up a copy of Avdi Grimm's new book, Confident Ruby, yet. I did buzz by the book's Pragmatic Programmers page, where I was able to pick up a sample chapter or two for elliptical reading.

The chapter "Represent special cases as objects" was my first look. This chapter and the "Represent do-nothing cases as null objects" chapter that follows deal with situations in which our program is missing a kind of object. The result is code that has too many responsibilities because there is no object charged with handling them.

The chapter on do-nothing cases is @avdi's re-telling of the Null Object pattern. Bobby Woolf workshopped his seminal write-up of this pattern at PLoP 1996 (the first patterns conference I attended) and later published an improved version in the fourth Pattern Languages of Program Design book. I had the great pleasure to correspond with Bobby as he wrote his original paper and to share a few ideas about the pattern.

@avdi's special cases chapter is a great addition to the literature. It shows several different ways in which our code can call out for a special case object in place of a null reference. It then shows how creating a new kind of object can make our code better in each case, giving concrete examples written in Ruby, in the context of processing input to a web app.

I was discussing the pattern and the chapter with a student, who asked a question about this example:

    if current_user
      render_logout_button
    else
      render_login_button
    end

This is the only example in which the if check is not eliminated after introducing the special case object, an instance of the new class, GuestUser. Instead, @avdi adds an authenticated? method to the User and GuestUser classes, has them return true and false respectively, and then changes the original expression to:

    if current_user.authenticated?
      render_logout_button
    else
      render_login_button
    end

As the chapter tells us, using the authenticated? predicate makes the conditional statement express the programmer's intent more clearly. But it also says that "we can't get rid of the conditional". My student asked, "Why not?"

Of course we can. The question is whether we want to. (I have a hard time using words like "cannot", "never", and "always", because I can usually imagine an exception to the absolute...)

In this case, there is a lingering smell in the code that uses the special case object: authenticated? is a surrogate for a type check. Indeed, it behaves just like a query to find the object's class so that we can tailor our behavior to the receiver's type. That's just the sort of thing we don't have to do in an OO program.

The standard remedy for this code smell is to push the behavior into the classes and send the object, whatever its type, a message. Rather than ask a user if it is authenticated so that we can render the correct button, we might ask it to render the correct button itself:

    current_user.render_button

...

    class User
      def render_button
        render_logout_button
      end
    end

    class GuestUser
      def render_button
        render_login_button
      end
    end

Unfortunately, it's not quite this simple. The render_logXXX_button methods don't live in the user classes, so the render_button methods need to send those messages to some other object. If the user object already knows to whom to send them, great. If not, then the sender of the render_button message will need to pass itself along as an argument, so that the receiver can send the appropriate message back.

Either of these approaches requires us to let some knowledge from the original context leak into our User and GuestUser classes, and that creates a new form of coupling. Ideally, there will be a way to mitigate this coupling in the form of some other shared association. Ruby web developers know the answer to this better than I.

In any case, this may be what @avdi means when he says that we can't get rid of the if check. Doing so may create more downside than upside.

This turned into a great opportunity to discuss design with my student. Design is about trade-offs. Things never seem quite as simple in the trenches as they do when we learn the rules and heuristics of design. There is no perfect solution. Our goal as programmers should be to develop the ability to make informed decisions in these situations, taking into account the context in which we are working.

Patterns document design solutions and so must be used with care. One of the things I love about the pattern form is that it encourages the writer to make as explicit as possible the context in which the solution applies and the forces that make its use more or less applicable. This helps the reader to face the possible trade-offs with his or her eyes wide open.

So, one minor improvement @avdi might make in this chapter is to elaborate on the reason underlying the assertion that we can't eliminate this particular if check. Otherwise, students of OOP are likely to ask the same question my student asked.

Of course, the answer may be obvious to Ruby web developers. In the end, working with patterns is like all other design: the more experience we have, the better.

This is a relatively minor issue, though. From what I've seen, "Confident Ruby" will be a valuable addition to most Ruby programmers' bookshelves.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

April 17, 2014 3:30 PM

The "Subclass as Client" Pattern

A few weeks ago, Reginald Braithwaite wrote a short piece discouraging us from creating class hierarchies. His article uses Javascript examples, but I think he intends his advice to apply everywhere:

So if someone asks you to explain how to write a class hierarchy? Go ahead and tell them: "Don't do that!"

If you have done much object-oriented programming in a class-based language, you will recognize his concern with class hierarchies: A change to the implementation of a class high up in the hierarchy could break every class beneath it. This is often called the "fragile base class" problem. Fragile code can't be changed without a lot of pain, fixing all the code broken by the change.

I'm going to violate the premise of Braithwaite's advice and suggest a way that you can make your base classes less fragile and thus make small class hierarchies more attractive. If you would like to follow his advice, feel free to tell me "Don't do that!" and stop reading now.

The technique I suggest follows directly from a practice that OO programmers use to create good objects, one that Braithwaite advocates in his article: encapsulating data tightly within an object.

JavaScript does not enforce private state, but it's easy to write well-encapsulated programs: simply avoid having one object directly manipulate another object's properties. Forty years after Smalltalk was invented, this is a well-understood principle.

The article then shows a standard example of a bank account object written in this style, in which client code uses the object without depending on its implementation. So far, so good.

What about classes?

It turns out, the relationship between classes in a hierarchy is not encapsulated. This is because classes do not relate to each other through a well-defined interface of methods while "hiding" their internal state from each other.

Braithwaite then shows an example of a subclass method that illustrates the problem:

    ChequingAccount.prototype.process = function (cheque) {
      this._currentBalance = this._currentBalance - cheque.amount();
      return this;
    }

The ChequingAccount directly accesses its _currentBalance member, which it inherits from the Account prototype. If we now change the internal implementation of Account so that it does not provide a _currentBalance member, we will break ChequingAccount.

The problem, we are told, is that objects are encapsulated, but classes are not.

... this dependency is not limited in scope to a carefully curated interface of methods and behaviour. We have no encapsulation.

However, as the article pointed out earlier, JavaScript does not enforce private state for objects! Even so, it's easy to write well-encapsulated programs -- by not letting one object directly manipulate another object's properties. This is a design pattern that makes it possible to write OO programs even when the language does not enforce encapsulation.

The problem isn't that objects are encapsulated and classes are not. It's that we tend to treat superclasses differently than we treat other classes.

When we write code for two independent objects, we think of their classes as black boxes, sealed off from external inspection. The data and methods defined in the one class belong to it and its objects. Objects of one class must interact with objects of another via a carefully curated interface of methods and behavior.

But when we write code for a subclass, we tend to think of the data and methods defined in the superclass as somehow "belonging to" instances of the subclass. We take the notion of inheritance too literally.

My suggestion is that you treat your classes like you treat objects: Don't let one class look into another class and access its state directly. Adopt this practice even when the other class is a superclass, and the state is an inherited member.

Many OO programs have this pattern. I usually call it the "Subclass as Client" pattern. Instances of a subclass act as clients of their superclass, treating it -- as much as possible -- as an independent object providing a set of well-defined behaviors.

When code follows this pattern, it takes Braithwaite's advice for designing objects up a level and follows it more faithfully. Even instance variables inherited from the superclass are encapsulated, accessible only through the behaviors of the superclass.
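
The discipline is language-independent. Here is a minimal sketch in Python -- my example, echoing Braithwaite's accounts -- where privacy is enforced only by convention, as in JavaScript:

    class Cheque:
        def __init__(self, amount):
            self._amount = amount

        def amount(self):
            return self._amount

    class Account:
        def __init__(self, balance=0):
            self._balance = balance      # private by convention

        def balance(self):
            return self._balance

        def withdraw(self, amount):
            self._balance -= amount

    class ChequingAccount(Account):
        def process(self, cheque):
            # A client of its superclass: it calls withdraw() rather
            # than reaching into _balance directly.
            self.withdraw(cheque.amount())
            return self

    acct = ChequingAccount(100)
    acct.process(Cheque(25))
    print(acct.balance())   # 75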

I don't program in Javascript, but I've written a lot of Java over the years, and I think the lessons are compatible. Here's my story.

~~~~~

When I teach OOP, one of the first things my students learn about creating objects is this:

All instance variables are private.

Like Javascript, Java doesn't require this. We can tell the compiler to enforce it, though, through use of the private modifier. Now, only methods defined in the same class can access the variable.

For the most part, students are fine with this idea -- until we learn about subclasses. If one class extends another, it cannot access the inherited data members. The natural thing to do is what they see in too many Java examples in their texts and on the web: change private variables in the superclass to protected. Now, all is right with the world again.

Except that they have stepped directly into the path of the fragile base class problem. Almost any change to the superclass risks breaking all of its subclasses. Even in a sophomore OO course, we quickly encounter the problem of fragile base classes in our programs. But what other choice do we have?

Make each class a server to its subclasses. Keep the instance variables private, and (in Braithwaite's words) carefully curate an interface of methods for subclasses to use. The class may be willing to expose more of its identity to its subclasses than to arbitrary objects, so define protected methods that are accessible only to its subclasses.

This is an intentional extension of the class's interface for explicit interaction with subclasses. (Yes, I know that protected members in Java are accessible to every class in the package. Grrr.)

This is the same discipline we follow when we write well-behaved objects in any language: encapsulate data and define an interface for interaction. When applied to the class-subclass relationship, it helps us to avoid the dangers of fragile base classes.

Forty years after Smalltalk was invented, this principle should be better understood by more programmers. In Smalltalk, variables are encapsulated within their classes, which forces subclasses to access them through methods defined in the superclass. This language feature encourages the writer of the class to think explicitly about how instances of a subclass will interact with the class. (Unfortunately, those methods are public to the world, so programmers have to enforce their scope by convention.)

Of course, a lazy programmer can throw away this advantage. When I first learned OO in Smalltalk, I quickly figured out that I could simply define accessors with the same names as the instance variables. Hurray! My elation did not last long, though. Like my Java students, I quickly found myself with a maze of class-subclass entanglements that made programming unbearable. I had re-invented the Fragile Base Class problem.

Fortunately, I had the Smalltalk class library to study, as well as programs written by better programmers than I. Those programs taught me the Subclass as Client pattern, and I learned that it is possible to use subclasses well when classes are designed carefully. This is just one of the many ways that Smalltalk taught me OOP.

~~~~~

Yes, you should prefer composition to inheritance, and, yes, you should strive to keep your class hierarchies as small and shallow as possible. But if you apply basic principles of object design to your superclasses, you don't need to live in absolute fear of fragile base classes. You can "do that" if you are willing to carefully curate an interface of methods that define the behavior of a class as a superclass.

This advice works well only for the class hierarchies you build for yourself. If you need to work with a class from an external package you don't control, then you can't control the quality of that class's interfaces. Think carefully before you subclass an external class and depend on its implementation.

One technique I find helpful in this regard is to build a wrapper class around the external class, carefully define an interface for subclasses, and then extend the wrapper class. This at least isolates the risk of changes in the library class to a single class in my program.

Of course, if you are programming in Javascript, you might want to look to the Self community for more suitable OO advice than to Smalltalk!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

March 29, 2014 10:06 AM

Sooner

That is the advice I find myself giving to students again and again this semester: Sooner.

Review the material we cover in class sooner.

Ask questions sooner.

Think about the homework problems sooner.

Clarify the requirements sooner.

Write code sooner.

Test your code sooner.

Submit a working version of your homework sooner. You can submit a more complete version later.

A lot of this advice boils down to the more general Get feedback sooner. In many ways, it is a dual of the advice, Take small steps. If you take small steps, you can ask, clarify, write, and test sooner. One of the most reliable ways to do those things sooner is to take small steps.

If you are struggling to get things done, give sooner a try. Rather than fail in a familiar way, you might succeed in an unfamiliar way. When you do, you probably won't want to go back to the old way again.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 16, 2014 10:48 AM

Experience Happens When You Keep Showing Up

You know what they say about good design coming from experience, and experience coming from bad design? That phenomenon is true of most things non-trivial. Here's an example from men's college basketball.

The University of Florida has a veteran team. The University of Kentucky has a young team. Florida's players are very good, but not generally considered to be in the same class as Kentucky's highly-regarded players. Yesterday, the two teams played a close game on Kentucky's home floor.

Once they fell behind by five with less than two minutes remaining, Kentucky players panicked. Florida players didn't. Why not? "Well, we have a veteran group here that's panicked before -- that's been in this situation and not handled it well," [Patric] Young said.

How did Florida's players maintain their composure at the end of a tight game on the road against another good team? They had been in that same situation three times before, and failed. They didn't panic this time in large part because they had panicked before and learned from those experiences.

Kentucky's starters have played a total of 124 college games. Florida's four seniors have combined to play 491. That's a lot of experience -- a lot of opportunities to panic, to guess wrong, to underestimate a situation, or otherwise to come up short. And a lot of opportunities to learn.

The young players at Kentucky hurt today. As the author of the linked game report notes, Florida's players have hurt like that before, for coming up short in much the same way, "and they used that pain to get better".

It turns out that composure comes from experience, and experience comes from lack of composure.

As a teacher, I try to convince students not to shy away from the error messages their compiler gives them, or from the convoluted code they eventually sneak past it. Those are the experiences they'll eventually look back to when they are capable, confident programmers. They just need the opportunity to learn.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 25, 2013 2:56 PM

The Moment When Design Happens

Even when we plan ahead a bit, the design of a program tends to evolve. Gary Bernhardt gives an example in his essay on abstraction:

If I feel the need to violate the abstraction, I need to reconsider how to modify the boundaries to match that need, rather than violating the boundaries by crossing them.

This is the moment when design happens...

This is a hard design lesson to give students, because it is likely to click with them only after living with the consequences of violating the abstraction. This requires working with the same large program over time, preferably one they are building along the way.

This is one of the reasons I so like our senior project courses. My students are building a compiler this term, which gives them a chance to experience a moment when design happens. Their abstract syntax trees and symbol tables are just the sort of abstractions that invite violation -- and reward a little re-design.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

October 29, 2013 3:49 PM

My PLoP 2013 Retrospective

wrapper of the Plopp candy bar I received from Rebecca Rikner

PLoP 2013 was as much fun and as invigorating as I had hoped it would be. I hadn't attended in eight years, but it didn't take long to fall back into the rhythm of writers' workshops interspersed among invited talks, focus group sessions, BoFs, mingling, and (yes) games.

I was lead moderator for Workshop 010010, which consisted of pedagogical patterns papers. The focus of all of them was interactivity, whether among students building LEGO Mindstorms robots or among students and instructor on creative projects. The idea of the instructor as an active participant, even "generative" in the sense meant by Christopher Alexander, dominated our discussion. I look forward to seeing published versions of the papers we discussed.

The other featured events included invited talks by Jenny Quillien and Ward Cunningham and a 20-year retrospective panel featuring people who were present at the beginning of PLoP, the Hillside Group, and software patterns.

Quillien spent six years working with Alexander during the years he created The Nature of Order. Her talk shared some of the ways that Alexander was disappointed in the effect his seminal "A Pattern Language" had on the world, both as a result of people misunderstanding it and as a result of the book's inherent faults. Along the way, she tried to give pragmatic advice to people trying to document patterns of software. I may try to write up some of her thoughts, and some of my own in response, in the coming weeks.

Cunningham presented his latest work on federated wiki, the notion of multiple, individual wikis "federated" in relationships that share and present information for a common good. Unlike the original wiki, in which collaboration happened in a common document, federated wiki has a fork button on every page. Anyone can copy, modify, and share pages, which are then visible to everyone and available for merging back into the home wikis.

the favicon for my federated wiki on Ward's server

Ward set me up with a wiki in the federation on his server before I left on Saturday. I want to play with it a bit before I say much more than this: Federated wiki could change how communities share and collaborate in much the same way that wiki did.

I also had the pleasure of participating in one other structured activity while at PLoP. Takashi Iba and his students at Keio University in Japan are making a documentary about the history of the patterns community. Takashi invited me to sit for an interview about pedagogical patterns and their history within the development of software patterns. I was happy to help. It was a fun challenge to explain my understanding of what a pattern language is, and to think about what my colleagues and I struggled with in trying to create small pattern languages to guide instruction. Of course, I strayed off to the topic of elementary patterns as well, and that led to more interesting discussion with Takashi. I look forward to seeing their film in the coming years.

More so than even other conferences, unstructured activity plays a huge role in any PLoP conference. I skipped a few meals so that I could walk the extensive gardens and grounds of Allerton Park (and also so that I would not gain maximum pounds from the plentiful and tasty meals that were served). I caught up with old friends such as Ward, Kyle Brown, Bob Hanmer, Ralph Johnson, and made too many new friends to mention here. All the conversation had my mind swirling with new projects and old... Forefront in my mind is exploring again the idea of design and implementation patterns of functional programming. The time is still right, and I want to help.

Now, to write my last entry or two from StrangeLoop...

~~~~

Image 1. A photo of the wrapper of a Plopp candy bar, which I received as a gift from Rebecca Rikner. PLoP has a gifting tradition, and I received a box full of cool tools, toys, mementoes, and candy. Plopp is a Swedish candy bar, which made it a natural gift for Rebecca to share from her native land. (It was tasty, too!)

Image 2. The favicon for my federated wiki on Ward's server, eugene.fed.wiki.org. I like the color scheme that fed.wiki.org gave me -- and I'm glad to be early enough an adopter that I could claim my first name as the name of my wiki. The rest of the Eugenes in the world will have to settle for suffix numbers and all the other contortions that come with arriving late to the dance.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

October 07, 2013 12:07 PM

StrangeLoop: Exercises in Programming Style

[My notes on StrangeLoop 2013: Table of Contents]

Crista Lopes

I had been looking forward to Crista Lopes's StrangeLoop talk since May, so I made sure I was in the room well before the scheduled time. I even had a copy of the trigger book in my bag.

Crista opened with something that CS instructors have learned the hard way: Teaching programming style is difficult and takes a lot of time. As a result, it's often not done at all in our courses. But so many of our graduates go into software development for their careers, where they come into contact with many different styles. How can they understand them -- well, quickly, or at all?

To many people, style is merely the appearance of code on the screen or the printed page. But it's not. It's something more, and something entirely different. Style is a constraint. Lopes used images of a few stylistic paintings to illustrate the idea. If an artist limits herself to pointillism or cubism, how can she express important ideas? How does the style limit the message, or enhance it?

But we know this is true of programming as well. The idea has been a theme in my teaching for many years. I occasionally write about the role of constraints in programming here, including Patterns as a Source of Freedom, a few programming challenges, and a polymorphism challenge that I've run as a workshop.

Lopes pointed to a more universal example, though: the canonical The Elements of Programming Style. Drawing on this book and other work in software, she said that programming style ...

  • is a way to express tasks
  • exists at all scales
  • recurs at multiple scales
  • is codified in programming languages

For me, the last bullet ties back most directly to the idea of style as constraint. A language makes some things easier to express than others. It can also make some things harder to express. There is a spectrum, of course. For example, some OO languages make it easy to create and use objects; others make it hard to do anything else! But the language is an enabler and enforcer of style. It is a proxy for style as a constraint on the programmer.

Back to the talk. Lopes asked, Why is it so important that we understand programming style? First, a style provides the reader with a frame of reference and a vocabulary. Knowing different styles makes us more effective consumers of code. Second, one style can be more appropriate for a given problem or context than another style. So, knowing different styles makes us more effective producers of code. (Lopes did not use the producer-consumer distinction in the talk, but it seems to me a nice way to crystallize her idea.)

the cover of Raymond Queneau's Exercises in Style

Then, Lopes said, I came across Raymond Queneau's playful little book, "Exercises in Style". Queneau constrains himself in many interesting ways while telling essentially the same story. Hmm... We could apply the same idea to programming! Let's do it.

Lopes picked a well-known problem, the common word problem famously solved in a Programming Pearls column more than twenty-five years ago. This is a fitting choice, because Jon Bentley included in that column a critique of Knuth's program by Doug McIlroy, who considered both engineering concerns and program style in his critique.

The problem is straightforward: identify and print the k most common terms that occur in a given text document, in decreasing order. For the rest of the talk, Lopes presented several programs that solve the problem, each written in a different style, showing code and highlighting its shape and boundaries.

Python was her language of choice for the examples. She was looking for a language that many readers would be able to follow and understand, and Python has the feel of pseudo-code about it. (I tell my students that it is the Pascal of their time, though I may as well be speaking of hieroglyphics.) Of course, Python has strengths and weaknesses that affect its fit for some styles. This is an unavoidable complication for all communication...
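
Her examples were in Python, but the problem is small enough to sketch in Scheme, my usual language here. Here is a plain, monolithic rendering of my own, not one of her styles, using association lists and Racket's sort:

    (define (add-word w counts)           ; counts: a list of (word . n) pairs
      (cond ((null? counts) (list (cons w 1)))
            ((equal? w (car (car counts)))
             (cons (cons w (+ 1 (cdr (car counts)))) (cdr counts)))
            (else (cons (car counts) (add-word w (cdr counts))))))

    (define (count-words words)
      (if (null? words)
          '()
          (add-word (car words) (count-words (cdr words)))))

    (define (top-k k words)               ; sort is Racket's
      (let loop ((pairs (sort (count-words words)
                              (lambda (a b) (> (cdr a) (cdr b)))))
                 (k k))
        (if (or (null? pairs) (zero? k))
            '()
            (cons (car pairs) (loop (cdr pairs) (- k 1))))))

    (top-k 2 '(the cat and the hat and the bat))
    ; => ((the . 3) (and . 2))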

Also, Lopes did not give formal names to the styles she demonstrated. Apparently, in previous versions of this talk, audience members had wanted to argue over the names more than the styles themselves! Vowing not to make that mistake again, she numbered her examples for this talk.

That's what programmers do when they don't have good names.

In lieu of names, she asked the crowd to live-tweet to her what they thought each style is or should be called. She eventually did give each style a fun, informal name. (CS textbooks might be more evocative if we used her names instead of the formal ones.)

I noted eight examples shown by Lopes in the talk, though there may have been more:

  • monolithic procedural code -- "brain dump"
  • a Unix-style pipeline -- "code golf"
  • procedural decomposition with a sequential main -- "cook book"
  • the same, only with functions and composition -- "Willy Wonka"
  • functional decomposition, with a continuation parameter -- "crochet"
  • modules containing multiple functions -- "the kingdom of nouns"
  • relational style -- (didn't catch this one)
  • functional with decomposition and reduction -- "multiplexer"

Lopes said that she hopes to produce solutions using a total of thirty or so styles. She asked the audience for help with one in particular: logic programming. She said that she is not a native speaker of that style, and Python does not come with a built-in logic engine to make writing a solution straightforward.

Someone from the audience suggested she consider yet another style: using a domain-specific language. That would be fun, though perhaps tough to roll from scratch in Python. By that time, my own brain was spinning away, thinking about writing a solution to the problem in Joy, using a concatenative style.

Sometimes, it's surprising just how many programming styles and meaningful variations people have created. The human mind is an amazing thing.

The talk was, I think, a fun one for the audience. Lopes is writing a book based on the idea. I had a chance to review an early draft, and now I'm looking forward to the finished product. I'm sure I'll learn something new from it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 27, 2013 4:26 PM

StrangeLoop: Add All These Things

[My notes on StrangeLoop 2013: Table of Contents]

I took a refreshing walk in the rain over the lunch hour on Friday. I managed to return late and, as a result, missed the start of Avi Bryant's talk on algebra and analytics. Only a few minutes, though, which is good. I enjoyed this presentation.

Bryant didn't talk about the algebra we study in eighth or ninth grade, but the mathematical structure math students encounter in a course called "abstract" or "modern" algebra. A big chunk of the talk focused on an even narrower topic: why +, and operators like it, are cool.

One reason is that grouping doesn't matter. You can add 1 to 2, and then add 4 to the result, and have the same answer as if you added 4 to 1, and then added 2 to the result. This is, of course, the associative property.

Another is that order doesn't matter. 1 + 2 is the same as 2 + 1. That's the commutative property.

Yet another is that, if you have nothing to add, you can add nothing and have the same value you started with. 4 + 0 = 4. 0 is the identity element for addition.

Finally, when you add two numbers, you get a number back. This is not quite as true in computers as in math, because an operation can cause an overflow or underflow and create an error. But looked at through fuzzy lenses, this is true in our computers, too. This is the closure property for addition of integers and real numbers.

Addition isn't the only operation on numbers that has these properties. Finding the maximum value in a set of numbers does, too. The maximum of two numbers is a number. max(x,y) = max(y,x), and if we have three or more numbers, it doesn't matter how we group them; max will find the maximum among them. The identity value is tricky -- there is no smallest number... -- but in practice we can finesse this by using the smallest number of a given data type, or even allowing max to take "nothing" as a value and return its other argument.

When we see a pattern like this, Bryant said, we should generalize:

  • We have a function f that takes two values from a set and produces another member of the same set.
  • The order of f's arguments doesn't matter.
  • The grouping of f's arguments doesn't matter.
  • There is some identity value, a conceptual "zero", that doesn't matter, in the sense that f(i,zero) for any i is simply i.

There is a name for this pattern. When we have such a set and operation, we have a commutative monoid.

     ⊕ : S × S → S
     x ⊕ y = y ⊕ x
     x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z
     x ⊕ 0 = x
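
In code, the whole pattern comes down to a few lines. Here is a generic reduction over any such operator and identity, a sketch of my own in Scheme rather than Bryant's notation:

    (define (monoid-reduce op id items)
      (if (null? items)
          id                              ; nothing to combine: the identity
          (op (car items)
              (monoid-reduce op id (cdr items)))))

    (monoid-reduce + 0 '(1 2 3 4))        ; => 10
    (monoid-reduce max -inf.0 '(3 1 4 1 5))
    ; => 5.0, using Racket's -inf.0 as the "smallest number" finesse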

I learned about this and other such patterns in grad school when I took an abstract algebra course for kicks. No one told me at the time that I'd be seeing them again as soon as someone created the Internet and it unleashed a torrent of data on everyone.

Just why we are seeing the idea of a commutative monoid again was the heart of Bryant's talk. When we have data coming into our company from multiple network sources, at varying rates of usage and data flow, and we want to extract meaning from the data, it can be incredibly handy if the meaning we hope to extract -- the sum of all the values, or the largest -- can be computed using a commutative monoid. You can run multiple copies of your function at the entry point of each source, and combine the partial results later, in any order.
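
For example, three sources can sum their own values independently, and the partial sums can be combined later in any order or grouping. A tiny sketch:

    (define source-a '(5 1 9))            ; values arriving at one entry point
    (define source-b '(2 7))
    (define source-c '(4 4 6))

    (+ (apply + source-b)                 ; partials combined in any order...
       (apply + source-c)
       (apply + source-a))
    ; => 38, the same answer as summing the whole stream at once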

Bryant showed this much more attractively than that, using cute little pictures with boxes. But then, there should be an advantage to going to the actual talk... With pictures and fairly straightforward examples, he was able to demystify the abstract math and deliver on his talk's abstract:

A mathematician friend of mine tweeted that anyone who doesn't understand abelian groups shouldn't build analytics systems. I'd turn that around and say that anyone who builds analytics systems ends up understanding abelian groups, whether they know it or not.

That's an important point. Just because you haven't studied group theory or abstract algebra doesn't mean you shouldn't do analytics. You just need to be prepared to learn some new math when it's helpful. As programmers, we are all looking for opportunities to capitalize on patterns and to generalize code for use in a wider set of circumstances. When we do, we may re-invent the wheel a bit. That's okay. But also look for opportunities to capitalize on patterns recognized and codified by others already.

Unfortunately, not all data analysis is as simple as summing or maximizing. What if I need to find an average? The average operator doesn't form a commutative monoid with numbers. It falls short in almost every way. But, if you switch from the set of numbers to the set of pairs [n, c], where n is a number and c is a count of how many times you've seen n, then you are back in business. Counting is addition.

So, we save the average operation itself as a post-processing step on a set of number/count pairs. This turns out to be a useful lesson, as finding the average of a set is a lossy operation: it loses track of how many numbers you've seen. Lossy operations are often best saved for presenting data, rather than building them directly into the system's computation.
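
In code, the switch to pairs might look something like this sketch of mine (not Bryant's):

    (define (reduce op id lst)            ; generic reduction, as before
      (if (null? lst)
          id
          (op (car lst) (reduce op id (cdr lst)))))

    (define (pair-add p q)                ; the monoid operator on (sum . count)
      (cons (+ (car p) (car q))
            (+ (cdr p) (cdr q))))

    (define (average ns)
      (let ((total (reduce pair-add
                           (cons 0 0)     ; the identity pair
                           (map (lambda (n) (cons n 1)) ns))))
        (/ (car total) (cdr total))))     ; the lossy step, saved for the end

    (average '(4 8 6 2))                  ; => 5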

Likewise, finding the top k values in a set of numbers (a generalized form of maximum) can be handled just fine as long as we work on lists of numbers, rather than numbers themselves.
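
The operator that merges two candidate lists and keeps only the k largest values is associative and commutative, with the empty list as its identity. A quick sketch, again leaning on Racket's sort:

    (define (take-up-to lst k)
      (if (or (null? lst) (zero? k))
          '()
          (cons (car lst) (take-up-to (cdr lst) (- k 1)))))

    (define (top-k-merge k)               ; the monoid operator, for a fixed k
      (lambda (xs ys)
        (take-up-to (sort (append xs ys) >) k)))

    ((top-k-merge 2) '(9 4) '(7 1))       ; => (9 7)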

This is actually one of the Big Ideas of computer science. Sometimes, we can use a tool or technique to solve a problem if only we transform the problem into an equivalent one in a different space. CS theory courses hammer this home, with oodles of exercises in which students are asked to convert every problem under the sun into 3-SAT or the clique problem. I look for chances to introduce my students to this Big Idea when I teach AI or any programming course, but the lesson probably gets lost in the noise of regular classwork. Some students seem to figure it out by the time they graduate, though, and the ones who do are better at solving all kinds of problems (and not by converting them all to 3-SAT!).

Sorry for the digression. Bryant didn't talk about 3-SAT, but he did demonstrate several useful problem transformations. His goal was more practical: how can we use this idea of a commutative monoid to extract as many interesting results from the stream of data as possible?

This isn't just an academic exercise, either. When we can frame several problems in this way, we are able to use a common body of code for the processing. He called this body of code an aggregator, comprising three steps:

  • prepare the data by transforming it into the space of a commutative monoid
  • reduce the data to a single value in that space, using the appropriate operator
  • present the result by transforming it back into its original space

In practice, transforming the problem into the space of a monoid presents challenges in the implementation. For example, it is straightforward to compute the number of unique values in a collection of streams by transforming each item into a set of size one and then using set union as the operator. But union requires unbounded space, and this can be inconvenient when dealing with very large data sets.
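
Here is a sketch of the exact version, before we worry about space:

    (define (reduce op id lst)
      (if (null? lst)
          id
          (op (car lst) (reduce op id (cdr lst)))))

    (define (union s t)                   ; set union on lists
      (cond ((null? s) t)
            ((member (car s) t) (union (cdr s) t))
            (else (cons (car s) (union (cdr s) t)))))

    (define (count-uniques items)
      (length
        (reduce union '() (map list items))))  ; each item becomes a singleton

    (count-uniques '(a b a c b))          ; => 3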

One approach is to compute an estimated number of uniques using a hash function and some fancy arithmetic. We can make the expected error in estimate smaller and smaller by using more and more hash functions. (I hope to write this up in simple code and blog about it soon.)
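
Until then, here is the idea in miniature, with a single hash function, in the style of Flajolet and Martin. This is my sketch of the idea, not production code; real estimators use many hash functions and bias corrections, and equal-hash-code is Racket's:

    (define (trailing-zeros n)            ; zero bits at the low end of n
      (if (or (zero? n) (odd? n))
          0
          (+ 1 (trailing-zeros (quotient n 2)))))

    (define (estimate-uniques items)
      ; the more distinct values we hash, the longer the longest run of
      ; trailing zero bits we expect to see; 2^longest estimates the count
      (expt 2 (apply max 0
                     (map (lambda (x) (trailing-zeros (equal-hash-code x)))
                          items))))

Notice that the per-source state here is just a number combined with max, so even the estimate fits the commutative-monoid mold.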

Bryant looked at one more problem, computing frequencies, and then closed with a few more terms from group theory: semigroup, group, and abelian group. Knowing these terms -- actually, simply knowing that they exist -- can be useful even for the most practical of practitioners. They let us know that there is more out there, should our problems become harder or our needs become larger.

That's a valuable lesson to learn, too. You can learn all about abelian groups in the trenches, but sometimes it's good to know that there may be some help out there in the form of theory. Reinventing wheels can be cool, but solving the problems you need solved is even cooler.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

September 22, 2013 3:51 PM

StrangeLoop: Jenny Finkel on Machine Learning at Prismatic

[My notes on StrangeLoop 2013: Table of Contents]

The conference opened with a talk by Jenny Finkel on the role machine learning plays at Prismatic, the customized newsfeed service. It was a good way to start the conference, as it introduced a few themes that would recur throughout, had a little technical detail but not too much, and reported a few lessons from the trenches.

Prismatic is trying to solve the discovery problem: finding content that users would like to read but otherwise would not see. This is more than simply a customized newsfeed from a single journalistic source, because it draws from many sources, including other readers' links, and because it tries to surprise readers with articles that may not be explicitly indicated by their profiles.

The scale of the problem is large, but different from the scale of the raw data facing Twitter, Facebook, and the like. Finkel said that Prismatic is processing only about one million timely docs at a time, with the set of articles turning over roughly weekly. The company currently uses 5,000 categories to classify the articles, though that number will soon go up to the order of 250,000.

The complexity here comes from the cross product of readers, articles, and categories, along with all of the features used to try to tease out why readers like the things they do and don't like the others. On top of this are machine learning algorithms that are themselves exponentially expensive to run. And with articles turning over roughly weekly, they have to be amassing data, learning from it, and moving on constantly.

The main problem at the heart of a service like this is: What is relevant? Everywhere one turns in AI, one sees this question, or its more general cousin, Is this similar? In many ways, this is the problem at the heart of all intelligence, natural and artificial.

Prismatic's approach is straight from AI, too. They construct a feature vector for each user/article pair and then try to learn weights that, when applied to the values in a given vector, will rank desired articles high and undesired articles low. One of the key challenges when doing this kind of work is to choose the right features to use in the vector. Finkel mentioned a few used by Prismatic, including "Does the user follow this topic?", "How many times has the reader read an article from this publisher?", and "Does the article include a picture?"
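
In its simplest form, the ranking step is a dot product and a sort. A toy sketch of the shape of the computation, with made-up procedure names rather than Prismatic's code:

    (define (dot v w)                     ; weights . features
      (apply + (map * v w)))

    (define (rank-articles weights articles features-of)
      (sort articles                      ; sort is Racket's
            (lambda (a b)
              (> (dot weights (features-of a))
                 (dot weights (features-of b))))))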

With a complex algorithm, lots of data, and a need to constantly re-learn, Prismatic has to make adjustments and take shortcuts wherever possible in order to speed up the process. This is a common theme at a conference where many speakers are from industry. First, learn your theory and foundations; then learn the pragmatics and heuristics needed to turn basic techniques into the backbone of practical applications.

Finkel shared one pragmatic idea of this sort that Prismatic uses. They look for opportunities to fold user-specific feature weights into user-neutral features. This enables their program to compute many user-specific dot products using a static vector.

She closed the talk with five challenges that Prismatic has faced, which other teams might want to be on the lookout for:

Bugs in the data. In one case, one program was updating a data set before another program could take a snapshot of the original. With the old data replaced by the new, they thought their ranker was doing better than it actually was. As Finkel said, this is pretty typical for an error in machine learning. The program doesn't crash; it just gives the wrong answer. Worse, you don't even have reason to suspect something is wrong in the offending code.

Presentation bias. Readers tend to look at more of the articles at the top of a list of suggestions, even if they would have enjoyed something further down the list. This is a feature of the human brain, not of computer programs. Any time we write programs that interact with people, we have to be aware of human psychology and its effects.

Non-representative subsets. When you are creating a program that ranks things, its whole purpose is to skew a set of user/article data points toward the subset of articles that the reader most wants to read. But this subset probably doesn't have the same distribution as the full set, which hampers your ability to use statistical analysis to draw valid conclusions.

Statistical bleeding. Sometimes, one algorithm looks better than it is because it benefits from the performance of another. Consider two ranking algorithms, one an "explorer" that seeks out new content and one an "exploiter" that recommends articles that have already been found to be popular. If we compare their performances, the exploiter will tend to look better than it is, because it benefits from the successes of the explorer without being penalized for its failures. It is crucial to verify that the measures you compare do not depend on one another. (Thanks to Christian Murphy for the prompt!)

Simpson's Paradox. The iPhone and the web have different clickthrough rates. The team once found themselves in a situation where one recommendation algorithm performed worse than another on both platforms, yet better overall. This can really disorient teams who follow up experiments by assessing the results. The issue here is usually a hidden variable that is confounding the results.

(I remember discussing this classic statistical illusion with a student in my early years of teaching, when we encountered a similar illusion in his grade. I am pretty sure that I enjoyed our discussion of the paradox more than he did...)

This part of a talk is of great value to me. Hearing about another team's difficulties rarely helps me avoid the same problems in my own projects, but it often does help me recognize those problems when they occur and begin thinking about ways to work around them. This was a good way for me to start the conference.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

September 15, 2013 10:25 AM

I am Going to PLoP 2013. Yeah!

the PLoP 2013 logo

On Friday, an e-mail message was sent to a number of mailing lists inviting people to register for the 20th Conference on Pattern Languages of Programs, also known as PLoP 2013. In part, it said:

... this time a lot of old-timers will also participate, among them Ward Cunningham, Peter Sommerlad, Kyle Brown, Joshua Kerievsky, Eugene Wallingford, Jenny Quillien, Joe Yoder, Ralph Johnson, Richard Gabriel, Robert Hanmer, Rebecca Wirfs-Brock, Lise Hvatum, and Ademar Aguiar.

When I saw this, I felt as if I were a prop in a game of "One of these things is not like the others...". My name will surely raise a few "Huh?"s in a list that contains Ward Cunningham, Ralph Johnson, Richard Gabriel, Rebecca Wirfs-Brock, and a number of other well-known book authors. I am, however, willing to admit that I am now an old-timer!

Yes, I am going to PLoP again this year. I attended for ten straight years, 1996-2005, chaired the 2000 conference, served on a number of program committees, and chaired a few ChiliPLoPs, but then school duties pulled me away. Becoming department head ate a lot of my time, and I found myself writing less. Also, in 2006, PLoP left Allerton Park and collocated with OOPSLA/SPLASH for a few years. That made going to PLoP a bigger time commitment for me than driving a few hours to the southeast.

This year, PLoP returns to Allerton Park, which is an awesome site for a conference: remote, relaxing, unusual, and stimulating. It is also a great place to run and ride.

I am excited to be going back to Allerton and to PLoP this year. I look forward to seeing so many of the old gang. I also look forward to making new friends among the software practitioners who are working hard to document the deep design and implementation knowledge of how to write programs. So much of this knowledge is created in the trenches, after we graduate from school and start making things under varied constraints. Fortunately, there are developers out there who are trying to write that knowledge down -- and write it well.

Sadly, I don't have a paper in a workshop this year. Writers' workshops are one of the key innovations the patterns community brought to the software world, and I enjoy them.

I will be participating in some of the 20th anniversary events, I'm sure, as well as helping with a working group on pedagogical patterns. There has been a lot of interesting work going on in this space over the last few years, especially in Europe, under the leadership of Christian Köppe, Christian Kohls, and others. PLoP will give me a chance to meet the people doing this new work and to work with them.

So, count this patterns old-timer as excited about the chance to renew friendships with other old-timers and the chance to make new friendships with the new standard bearers. And about the chance to savor again the unique atmosphere of Allerton Park.

First, though, I head to StrangeLoop 2013 later this week. I am psyched.


Posted by Eugene Wallingford | Permalink | Categories: Patterns

June 13, 2013 3:01 PM

It's Okay to Talk About Currying!

James Hague offers some sound advice for writing functional programming tutorials. I agree with most of it, having learned the hard way by trying to teach functional style to university students for many years. But I disagree with one of his suggestions: I think it's okay to talk about currying.

Hague's concern with currying is practical:

Don't get so swept up in that theory that you forget the obvious: in any programming language ever invented, there's already a way to easily define functions of multiple arguments. That you can build this up from more primitive features is not useful or impressive to non-theoreticians.

Of course, my context is a little different. We teach functional programming in a course on programming languages, so a little theory is important. We want students not only to be able to write code in a functional style but also to understand some of the ideas at the foundation of the languages they use. We also want them to understand a bit about how different programming styles relate to one another.

But even in the context of teaching people to think functionally and to write code in that style, I think it's okay to talk about currying. Indeed, it is essential. Currying is not simply a theoretical topic. It is a valuable programming technique.

Here is an example. When we write a language interpreter, we often write a procedure named eval-exp. It takes two arguments: an expression to evaluate, and a list of variable/value bindings.

   (define eval-exp
     (lambda (exp env)
       ...))

The binding list, sometimes called an environment, is a map of names declared in the local block to their values, along with the bindings from the blocks that contain the local block. Each time the interpreter enters a new block, it pushes a new set of name/value pairs onto the binding list and recurses.

To evaluate a function call for which arguments are passed by value, the interpreter must first evaluate all of the function's arguments. As the arguments are all in the same block, they are evaluated using the same binding list. We could write a new procedure to evaluate the arguments recursively, but this seems like a great time to map a procedure over a list: (map eval-exp args), get a list of the results, and pass them to the code that applies the function to them.

We can't do that, though, because eval-exp is a two-argument procedure, and map works only with a one-argument procedure. But the same binding list is used to evaluate each of the expressions, so that argument to eval-exp is effectively a constant for the purposes of the mapping operation.

So we curry eval-exp:

   (define eval-exp-with
     (lambda (bindings)
       (lambda (exp)
         (eval-exp exp bindings))))

... to create the one-argument evaluator that we need, and we use it to evaluate the arguments with map:

   ; in eval-exp
   (map (eval-exp-with env) arguments)

In most functional languages, we can use a nameless lambda to curry eval-exp "in place" and avoid writing an explicit helper function:

   ; an alternative approach in eval-exp
   (map (lambda (exp)
          (eval-exp exp bindings))
        arguments)

This doesn't look much like currying because we never created the procedure that takes the bindings argument. But we can reach this same piece of code by writing eval-exp-with, calling it in eval-exp, and then using program derivation to substitute the value of the call for the call itself. This is actually a nice connection to be able to make in a course about programming languages!

When I deliver short tutorials on functional style, currying often does not make the cut, because there are so many cool and useful ideas to cover. But it doesn't take long writing functional code before currying becomes useful. As this example shows, currying is a practical tool for the functional programmer to have in his or her pocket. In FP, currying isn't just theory. It's part of the style.
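
The move is general enough to capture once and for all. A generic curry for two-argument procedures, plus a flip to fix the argument order, takes only a few lines. (A sketch; some languages and libraries provide the equivalent.)

    (define (curry2 f)                    ; two args => nested one-arg procedures
      (lambda (x) (lambda (y) (f x y))))

    (define (flip f)                      ; swap a two-argument procedure's args
      (lambda (x y) (f y x)))

    ; eval-exp-with, without writing it by hand:
    ; (map ((curry2 (flip eval-exp)) env) arguments)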


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

May 21, 2013 3:05 PM

Exercises in Exercises in Style

I just registered for Strange Loop 2013, which doesn't happen until this fall. This has become a popular conference, deservedly so, and it didn't seem like a good idea to wait to register and risk being shut out.

One of the talks I'm looking forward to is by Crista Lopes. I mentioned Crista in a blog entry from last year's Strange Loop, for a talk she gave at OOPSLA 2003 that made an analogy between programming language and natural language. This year, she will give a talk called Exercises in Style that draws inspiration from a literary exercise:

Back in the 1940s, a French writer called Raymond Queneau wrote an interesting book with the title Exercises in Style featuring 99 renditions of the exact same short story, each written in a different style. This talk will shamelessly do the same for a simple program. From monolithic to object-oriented to continuations to relational to publish/subscribe to monadic to aspect-oriented to map-reduce, and much more, you will get a tour through the richness of human computational thought by means of implementing one simple program in many different ways.

If you've been reading this blog for long, you can imagine how much I like this idea. I even checked Queneau's book out of the library and announced on Twitter my plan to read it before the conference. From the response I received, I gather a lot of conference attendees plan to do the same. You gotta love the audience Strange Loop cultivates.

I actually have a little experience with this idea of writing the same program in multiple styles, only on a much smaller scale. For most of the last twenty years, our students have learned traditional procedural programming in their first-year sequence and object-oriented programming in the third course. I taught the third course twice a year for many years. One of the things I often did early in the course was to look at the same program in two forms, one written in a procedural style and one written in OOP. I hoped that the contrast between the programs would help them see the contrast between how we think about programs in the two styles.

I've been teaching functional programming regularly for the last decade, after our students have seen procedural and OO styles in previous courses, but I've rarely done the "exercises in style" demo in this course. For one thing, it is a course on languages and interpreters, not a course on functional programming per se, so the focus is on getting to interpreters as soon as possible. We do talk about differences in the styles in terms of their concepts and the underlying differences (and similarities!) in their implementation. But I think about doing so every time I prep the next offering of the course.

Not doing "exercises in style" can be attractive, too. Small examples can mislead beginning students about what is important, or distract them with concepts they won't understand for a while. The wrong examples can damage their motivation to learn. In the procedural/object-oriented comparison, I have had reasonable success in our OOP course with a program for simple bank accounts and a small set of users. But I don't know how well this exercise would work for a larger and more diverse set of styles, at least not at a scale I could use in our courses.

I thought of this when @kaleidic tweeted, "I hope @cristalopes includes an array language among her variations." I do, too, but my next thought was, "Well, now Crista needs to use an example problem for which an array language is reasonably well-suited." If the problem is not well suited to array languages, the solution might look awkward, or verbose, or convoluted. A newcomer to array languages is left to wonder, "Is this a problem with array languages, or with the example?" Human nature being what it is, too many of us are prone to protect our own knowledge and assume that something is wrong with the new style.

An alternative approach is to get learners to suspend their disbelief for a while, learn some nuts and bolts, and then help them to solve bigger problems using the new style. My students usually struggle with this at first, but many of them eventually reach a point where they "get" the style. Solving a larger problem gives them a chance to learn the advantages and disadvantages of their new style, and retroactively learn more about the advantages and disadvantages of the styles they already know well. These trade-offs are the foundation of a really solid understanding of style.

I'm really intrigued by Queneau's idea. It seems that he uses a small example not to teach about each style in depth but rather to give us a taste. What does each style feel like in isolation? It is up to the aspiring writer to use this taste as a starting point, to figure out where each style might take you when used for a story of the writer's choosing.

That's a promising approach for programming styles, too, which is one of the reasons I am so looking forward to Crista's talk. As a teacher, I am a shameless thief of good ideas, so I am looking forward to seeing the example she uses, the way she solves it in the different styles, and the way she presents them to the crowd.

Another reason I'm looking forward to the talk is that I love programs, and this should be just plain fun.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 08, 2013 12:11 PM

Not So Random Sentences

I start with a seemingly random set of sentences to blog about and, in the process of writing about them, find that perhaps they aren't so random after all.

An Era of Sharing Our Stuff

Property isn't theft; property is an inefficient distribution of resources.

This assertion comes from an interesting article on "economies of scale as a service", in reaction to a Paul Graham tweet:

Will ownership turn out to be largely a hack people resorted to before they had the infrastructure to manage sharing properly?

Open-source software, the Creative Commons, crowdsourcing. The times they are a-changin'.

An Era of Observing Ourselves

If the last century was marked by the ability to observe the interactions of physical matter -- think of technologies like x-ray and radar -- this century is going to be defined by the ability to observe people through the data they share.

... from The Data Made Me Do It.

I'm not too keen on being "observed" via data by every company in the world, even as I understand the value it can bring the company and even me. But I like very much the idea that I can observe myself more easily and more productively. For years, I collected and studied data about my running and used what I learned to train and race better. Programmers are able to do this better now than ever before. You can learn a lot just by watching.

An Era of Thinking Like a Scientist

... which leads to this line attributed to John C. Reynolds, an influential computer scientist who passed away recently:

Well, we know less than we did before, but more of what we know is actually true.

It's surprising how easy it is to know stuff when we don't have any evidence at all. Observing the world methodically, building models, and comparing them to what we observe in the future helps us know less of the wrong stuff and more of the right stuff.

Not everyone need be a scientist, but we'd all be better off if more of us thought like a scientist more often.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Personal

April 07, 2013 2:23 PM

To Solve Or Not To Solve

This week I bumped into a familiar tension that arises whenever I assign a new homework problem to students.

Path 1. I assign a new problem before I solve it myself. The problem turns out to be too difficult, or includes a complication or two that I didn't anticipate. Students become frustrated, especially the weaker ones. Because of the distraction, most everyone misses the goal of the assignment.

Path 2. I solve a new problem before I assign it. I run into a couple of unexpected wrinkles, things that make the problem less straightforward than I had planned. "This will distract my students," I think, so I iron out a wrinkle here and eliminate a distraction there. Then I assign the problem to my students. The result feels antiseptic, unchallenging. Some students are bored with the problem, especially the stronger ones.

In this week's case, I followed the first path. I assigned a new problem in my Programming Languages course before solving it myself. When I sat down to whip up my solutions, I realized the problem held a couple of surprises for me. I had a lot of fun with those surprises and was happy with the code that resulted. But I also realized that my students will have to do more fancy thinking than I had planned on. Nothing too tricky, just a couple of non-obvious steps along the way to an obvious solution.

Will that defeat the point of assigning the problem? Will my stronger students be happy for the challenge? Will my weaker students be frustrated, or angry at their inability to solve a problem that looks so much like another they have already solved? Will the problem help students learn something new about the topic at hand?

I ultimately believe that students benefit strongly from the challenge of problems that have not been pre-digested for them by the instructor or textbook. When we take too much care in preparing assignments ahead of time, we rob students of the joy of solving a problem that has rough edges.

Joy, and the sense of accomplishment that comes from taking on a real challenge. Problems in the world typically have rough edges, or throw wrinkles at us when we aren't looking for them.

Affording students the joy of solving a real problem is perhaps especially important for the stronger students, who often have to live with a course aimed at the middle of the curve. But it's just as important for the rest of the class. Skill and confidence grow out of doing something worth doing, even if it takes a little help from the professor.

I continue to use both approaches when I create assignments, sometimes solving a new problem first and sometimes trusting my instinct. The blend keeps assignments from veering too far in one direction or the other, I think, which gives students some balance.

However, I am usually most happy when I let a new problem surprise us all. I try to keep these problems on track by paying closer attention to students as they begin to work on them. When we run into an unexpected rough edge, I try to intervene with just enough assistance to get them over the complexity, but not so much as to sterilize the problem.

Finding the right balance between too clean and too rough is a tough problem for a teacher to solve. It is a problem worth solving, and a source of both disappointment and joy.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

February 28, 2013 2:52 PM

The Power of a Good Abstract

Someone tweeted a link to Philip Greenspun's M.S. thesis yesterday. This is how you grab your reader's attention:

A revolution in earthmoving, a $100 billion industry, can be achieved with three components: the GPS location system, sensors and computers in earthmoving vehicles, and SITE CONTROLLER, a central computer system that maintains design data and directs operations. The first two components are widely available; I built SITE CONTROLLER to complete the triangle and describe it here.

Now I have to read the rest of the thesis.

You could do worse than use Greenspun's first two sentences as a template for your next abstract:

A revolution in <major industry or research area> can be achieved with <n> components: <component-1>, <component-2>, ... and <component-n>. The first <n-1> components are widely available. I built <program name> to meet the final need and describe it here.

I am adding this template to my toolbox of writing patterns, alongside Kent Beck's four-sentence abstract (scroll down to Kent's name), which generalizes the idea of one startling sentence that arrests the reader. I also like good advice on how to write concise, incisive thesis statements, such as that in Matt Might's Advice for PhD Thesis Proposals and Olin Shivers's classic Dissertation Advice.

As with any template or pattern, overuse can turn a good idea into a cliché. If readers repeatedly see the same cookie-cutter format, it begins to look stale, and they begin to lose interest. So play with variations on the essential theme: I have solved an important problem. This is my solution.

If you don't have a great abstract, try again. Think hard about your own work. Why is this problem important? What is the big win from my solution? That's a key piece of Might's advice for graduate students: state clearly and unambiguously what you intend to achieve.

Indeed, approaching your research in a "test-driven" way makes a lot of sense. Before embarking on a project, try to write the startling abstract that will open the paper or dissertation you write when you have succeeded. If you can't identify the problem as truly important, then why start at all? Maybe you should pick something more valuable to work on, something that matters enough that you can write a startling abstract for the result. That's a key piece of advice shared by Richard Hamming in his You and Your Research.

And whatever you do, don't oversell a minor problem or a weak solution with an abstract that promises too much. Readers will be disappointed at best and angry at worst. If you oversell even a little bit too many times, you will become like the boy who cried wolf. No one will believe your startling claim even when it's on the mark.

Greenspun's startling abstract ends as strongly as it begins. Of course, it helps if you can close with a legitimate appeal to ameliorating poverty around the world:

This area is exciting because so much of the infrastructure is in place. A small effort by computer scientists could cut the cost of earthmoving in half, enabling poor countries to build roads and rich countries to clean up hazardous waste.

I'm not sure adding another automated refactoring to Eclipse or creating another database library can quite rise to the level of empowering the world's poor. But then, you may have a different audience.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

February 18, 2013 12:59 PM

Code Duplication as a Hint to Think Differently

Last week, one of my Programming Languages students sent me a note saying that his homework solution worked correctly but that he was bothered by some duplicated code.

I was so happy.

Any student who has me for class for very long hears a lot about the dangers of duplication for maintaining code, and also that duplication is often a sign of poor design. Whenever I teach OOP or functional programming, we learn ways to design code that satisfies the DRY principle and ways to eliminate duplication via refactoring when it does sneak in.

I sent the student an answer, along with hearty congratulations for recognizing the duplication and wanting to eliminate it. My advice appears below.

When I sat down to blog the solution, I had a sense of deja vu... Hadn't I written this up before? Indeed I had, a couple of years ago: Increasing Duplication to Eliminate Duplication. Even in the small world of my own teaching, it seems there is nothing new under the sun.

Still, there was a slightly different feel to the way I talked about this in class later that day. The question had come earlier in the semester this time, so the code involved was even simpler. Instead of processing a vector or a nested list of symbols, we were processing a flat list of symbols. And, instead of applying an arbitrary test to the list items, we were simply counting occurrences of a particular symbol, s.

The duplication occurred in the recursive case, where the procedure handles a pair:

    (if (eq? s (car los))
        (+ 1 (count s (cdr los)))      ; <---
        (count s (cdr los)))           ; <---

Then we make the two sub-cases more parallel:

    (if (eq? s (car los))
        (+ 1 (count s (cdr los)))      ; <---
        (+ 0 (count s (cdr los))))     ; <---

And then use distributivity to push the choice down a level:

    (+ (if (eq? s (car los)) 1 0)
       (count s (cdr los)))            ; <--- just once!

This time, I made a point of showing the students that not only does this solution eliminate the duplication, it more closely follows the command to follow the shape of the data:

When defining a program to process an inductively-defined data type, the structure of the program should follow the structure of the data.

This guideline helps many programmers begin to write recursive programs in a functional style, rather than an imperative style.

Note that in the first code snippet above, the if expression is choosing between two different solutions, depending on whether we see the symbol s in the first part of the pair or not. That's imperative thinking.

But look at the list-of-symbols data type:

    <list-of-symbols> ::= ()
                        | (<symbol> . <list-of-symbols>)

How many occurrences of s are in a pair? Obviously, the number of s's found in the car of the list plus the number of s's found in the cdr of the list. If we design our solution to match the code to the data type, then the addition operation should be at the top from the start:

    (+ ; number of s's found in the car
       ; number of s's found in the cdr )

If we define the answer for the problem in terms of the data type, we never create the duplication-by-if in the first place. We think about solving the subproblems for the car and the cdr, fill in the blanks, and arrive immediately at the refactored code snippet above.
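
Putting the pieces together with the base case for the empty list, the finished procedure follows the data type line by line:

    (define count
      (lambda (s los)
        (if (null? los)
            0                                ; no s's in the empty list
            (+ (if (eq? s (car los)) 1 0)    ; s's found in the car
               (count s (cdr los))))))       ; s's found in the cdr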

I have been trying to help my students begin to "think functionally" sooner this semester. There is a lot of room for improvement yet in my approach. I'm glad this student asked his question so early in the semester, as it gave me another chance to model "follow the data" thinking. In any case, his thinking was on the right track.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

February 12, 2013 2:53 PM

Student Wisdom on Monad Tutorials

After class today, a few of us were discussing the market for functional programmers. Talk turned to Clojure and Scala. A student who claims to understand monads said:

To understand monad tutorials, you really have to understand monads first.

Priceless. The topic of today's class was mutual recursion. I think we are missing a base case here.

I don't know whether this is a problem with monads, a problem with the writers of monad tutorials, or a problem with the rest of us. If it is true, then it seems a lot of people are unclear on the purpose of a tutorial.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

December 31, 2012 8:22 AM

Building Things and Breaking Things Down

As I look toward 2013, I've been thinking about Alan Kay's view of CS as science [ link ]:

I believe that the only kind of science computing can be is like the science of bridge building. Somebody has to build the bridges and other people have to tear them down and make better theories, and you have to keep on building bridges.

In 2013, what will I build? What will I break down, understand, and help others to understand better?

One building project I have in mind is an interactive text. One analysis project I have in mind involves functional design patterns.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns

December 29, 2012 8:47 AM

Beautiful Sentences

Matthew Ward, in the Translator's Note to "The Stranger" (Vintage Books, 1988):

I have also attempted to venture further into the letter of Camus's novel, to capture what he said and how he said it, not what he meant. In theory, the latter should take care of itself.

This approach works pretty well for most authors and most books, I imagine.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

December 14, 2012 3:50 PM

A Short Introduction to the Law of Demeter

Preface

My students spent the last three weeks of the semester implementing some of the infrastructure for a Twitter-like messaging app. They grew the app in three iterations, with new and changed features at each step. In the readme file for one of the versions, a student commented:

I wound up having a lot of code of the sort
  thisCollection.thisInsideCollection.thisAttribute.thisMethod()

I smiled and remembered again that even students who would never write such code in a smaller, more focused setting can be lulled into writing it in the process of growing a big, complicated program. I made a mental note to pay a little extra attention to the Law of Demeter the next time I teach the course.

I actually don't talk much about the Law of Demeter in this sophomore-level course, because it's a name we don't need. But occasionally I'd like to point a student to a discussion of it, and there don't seem to be a lot of resources at the right level for these students. So I decided to draft the beginnings of a simple reference. I welcome your suggestions on how to make it better.

You might also check out The Paperboy, The Wallet, and The Law Of Demeter, a nice tutorial I recently came across.

What is the Law of Demeter?

The Law of Demeter isn't a law so much as a general principle for software design. It is often referred to in the context of object-oriented programming, so you may see it phrased in terms of objects:

Objects should have short reach.

Or:

An object should not try to know too much, or need to.

The law's Wikipedia entry has a nice object-free formulation:

Only talk to your immediate friends.

If those are too squishy for you, the Wikipedia entry also has a more formal summary:

The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents).

Framed this way, the Law of Demeter is just a restatement of OOP 101. Objects are independent. They encapsulate their state and behavior. Instance variables are private, and we should be suspicious of getter methods.

As a matter of programming style, I often introduce this principle in a pragmatic way:

An operation should live with the data it uses.

Those are all general statements of the Law of Demeter. You will sometimes see a much more specific, formal statement of this sort:

A method m of an object obj may send messages only to these objects:
  • obj itself
  • obj's instance variables
  • m's formal parameters
  • any objects created within m

This version of the Law is actually an enumeration of specific ways that we can obey the more general principle. In the case of existing code, this version can help us recognize that the general principle has been violated.

Why is the Law of Demeter important?

This principle is often pitched as being about loose coupling: we should minimize the amount of knowledge that any component has about the implementation of any other component.

Another way to think about this principle is from the perspective of the receiver. Some other object wants access to the receiver's parts in order to do its job. From this angle, the Law of Demeter is fundamentally about encapsulation. The receiver's implementation should be private, and chained messages tend to leak implementation detail. When we hide those details from the object's collaborators, we shield other parts of the system from changes to the implementation.

How can we follow the Law of Demeter?

Consider my student's example from above:

  thisCollection.thisInsideCollection.thisAttribute.thisMethod()

The simplest way to eliminate the sender's need to know about thisInsideCollection and thisAttribute is to

  • add a method to thisCollection's class that accomplishes thisInsideCollection.thisAttribute.thisMethod(), and
  • send the new message to thisCollection:
      thisCollection.doSomething()
    

Coming up with a good name for the new message doSomething forces us to think about what this behavior means in our program. Many times, finding a good name helps us to clarify how we think about the objects that make up our program.

Notice that the new method in thisCollection might still violate our design principle, because thisCollection needs to know that thisInsideCollection is implemented in terms of thisAttribute and that thisAttribute responds to thisMethod(). That's okay. You can apply the same process again, looking for a way to push the behavior that operates on thisAttribute into the object that knows about it.

More generally, when you notice yourself violating the Law of Demeter, think of the violation as an opportunity to rethink the objects in your program and how they relate to one another. Why is this object performing this task? Why doesn't its collaborator provide that service? Perhaps we can give this responsibility to another object. Who knows how to help this object with this task?

Sometimes, we even find that we can move thisMethod() into thisCollection and take our object out of the loop entirely, letting the object that sent us the message communicate directly with thisCollection. That is a great way to make our code less tightly coupled.

A Potential Wrinkle

There is a potential problem when we program in a language like Java, though. What if thisCollection is a primitive Java class, say, a Vector or a HashMap? If so, you cannot add a new method to the class. This is a sign of another problem lurking in your program: this object depends on your choice of data structure for thisCollection. What role does thisCollection play in your domain? Maybe it's a catalog of users, or the set of users that follow another user.

Make a new class that represents this domain object, and make your data structure an instance variable in that class. Now you can write your program in terms of the domain object, not the Java primitive. This solves several problems for you (see the sketch after the list):

  • Your code reflects the problem domain more faithfully.
  • Your code is easier to modify. You can now change the data structure used to implement the domain object without affecting the rest of the program.
  • Now you can solve the Law of Demeter violation by adding a method to the new class.
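The sketch, in Ruby for brevity, with a hypothetical Followers domain object wrapping a primitive Hash. (The user objects are assumed to respond to id and notify.)

    class Followers
      def initialize
        @users_by_id = {}          # the data structure is now a private detail
      end

      def add(user)
        @users_by_id[user.id] = user
      end

      def notify_all(message)
        @users_by_id.each_value { |user| user.notify(message) }
      end
    end

The rest of the program sends notify_all to a Followers object and never learns that a hash table sits underneath.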

This new wrinkle is sometimes called Primitive Obsession. OO masters know to beware its temptations. I sometimes like to play a little game in which every base type and every primitive class has been pushed down to the bottom layer of my program and wrapped in a domain object. This is often overkill -- taking a good thing too far -- but such an exercise can help you see just how often it really is a good thing to hide implementation detail and program in terms of the objects in your domain.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

November 26, 2012 3:24 PM

Quotes of the Day: Constraints on Programming

Obligation as constraint

Edward Yang has discovered the Three Bears pattern. He calls it "extremist programming".

When learning a new principle, try to apply it everywhere. That way, you'll learn more quickly where it does and doesn't work well, even if your initial intuitions about it are wrong.

Actually, you don't learn in spite of your initial intuitions being wrong. You learn because your initial intuitions were wrong. That's when learning happens best.

(I mention Three Bears every so often, such as Bright Lines in Learning and Doing, and whenever I discuss limiting usage of language features or primitive data values.)

Blindness as constraint

In an interview I linked to in my previous entry, Brian Eno and Ha-Joon Chang talk about the illusion of freedom. Whenever you talk about freedom, as in a "free market" or "free jazz",

... what you really mean is "constrained by rules that we've stopped thinking about".

Free jazz isn't entirely free, because you are constrained by what your muscles can do. Free markets aren't entirely free, because there are limits we simply choose not to talk about. Perhaps we once did talk about them and have chosen not to any more. Perhaps we never talked about them and don't even recognize that they are present.

I can't help but think of computer science faculty who claim we shouldn't be teaching OO programming in the first course, or any other "paradigm"; we should just teach basic programming first. They may be right about not teaching OOP first, but not because their approach is paradigm-free. It isn't.

(I mention constraints as a source of freedom every so often, including the ways in which patterns free students to create and the way creativity needs to be developed.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 16, 2012 2:50 PM

Preferring Objects over Class Methods in OOP

Bryan Helmkamp recently posted Why Ruby Class Methods Resist Refactoring on the Code Climate blog. It explains, with a nice example, how using class methods makes it harder to refactor in a way that makes you feel like you have actually improved the code.

This is a hard lesson for beginners to learn, but even experienced Ruby and Java programmers sometimes succumb to the urge for simplicity that "just" writing a method offers. It doesn't take long for a simple class method to grow into something more complex, which gets in the way of growing the program.

As Helmkamp points out, class methods have the insidious feature of coupling your code to a particular class. Even when we program with instances rather than classes, we like to avoid that sort of coupling. Thus was born the factory method.

Deep in the process of teaching OO to undergraduates, I was drawn to an important truth found deep in the post, just after mention of the class-name coupling:

You can't easily swap in a new class, but you can easily swap in a new instance.

One of the great advantages of OOP is being able to plug a different object into a program and thus change or extend the program's behavior without modifying its code. Dynamic polymorphism via substitutable objects is in many ways the first principle of object-oriented programming.

That's why the first refactoring I usually apply whenever I encounter a class method is Introduce Object.
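Here is a minimal Ruby sketch of that move, using a hypothetical formatter class:

    # Before: a class method welds every caller to the class name.
    class StarFormatter
      def self.format(text)
        "** #{text} **"
      end
    end
    StarFormatter.format("daily totals")   # hard to swap in a new class

    # After: an instance method lets a caller hold any duck-typed substitute.
    class StarFormatter
      def format(text)
        "** #{text} **"
      end
    end
    formatter = StarFormatter.new          # swap in a different object at will
    formatter.format("daily totals")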


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

November 06, 2012 3:34 PM

A Good Name Is About An Idea, Not An Implementation

In The Poetry of Function Naming, Stephen Wolfram captures something that all programmers eventually learn:

[Naming functions is] an unforgiving and humbling activity. And the issue is almost always the same. The reason you can't find a good name is because you don't really understand with complete and ultimate clarity what the function does.

Sometimes we can't come up with the perfect name for a function or a variable until after we have written code that uses it. The act of writing the program helps us to learn about the program's content.

Later in the same blog entry, Wolfram says something that made me think of my previous blog, on how some questions presuppose how they are to be answered:

But in a computer language, each function name ultimately refers to a particular piece of functionality that is defined in an absolute way, and can be implemented by a specific precise program.

When we write OO programs, a name doesn't always refer to a specific function. With polymorphic variables, we don't usually know which method will be executed when we use a name. Any object that provides the protocol required by the variable's type, or implements the interface so named, may be stored in the variable. It may even be an instance of a class I know nothing about.

For this reason, when I teach OO programming, I am careful to talk about sending a message rather than "invoking a method" or "calling a function". The receiver of the message interprets the message, not the sender.

This doesn't invalidate what Wolfram says, though it does point to a way in which we might rephrase it more generally. The name of a good method isn't about specific functionality so much as about expectation. It's about the core idea associated with an interaction, not any particular implementation of that idea.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

October 30, 2012 4:22 PM

Mathematical Formulas, The Great Gatsby, and Small Programs

... or: Why Less Code is Better

In Don't Kill Math, Evan Miller defends analytic methods in the sciences against Bret Victor's "visions for computer-assisted creativity and scientific understanding". (You can see some of my reactions to Victor's vision in a piece I wrote about his StrangeLoop talk.)

Miller writes:

For the practicing scientist, the chief virtue of analytic methods can be summed up in a single word: clarity. An equation describing a quantity of interest conveys what is important in determining that quantity and what is not at all important.

He goes on to look at examples such as the universal law of gravitation and shows that a single formula gives even a person with "minimal education in physics" an economical distillation of what matters. The clarity provided by a good analytic solution affords the reader two more crucial benefits: confident understanding and memorable insights.
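(For reference, the formula in question is F = G m1 m2 / r^2: the gravitational force depends on the two masses and the distance between them -- and on nothing else. That economy is Miller's point.)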

Poet Peter Turchi describes a related phenomenon in fiction writing, in his essay You and I Know, Order is Everything. A story can pull us forward by omitting details and thus creating in the reader a desire to learn more. Referring to a particularly strategic paragraph, he writes:

That first sentence created a desire to know certain information: What is this significant event? ... We still don't have an answer, but the context for the question is becoming increasingly clear -- so while we're eager to have those initial questions answered, we're content to wait a little longer, because we're getting what seems to be important information. By the third paragraph, ... we think we've got a clue; but by that time the focus of the narrative is no longer the simple fact of what's going on, but the [larger setting of the story]. The story has shifted our attention from a minor mystery to a more significant one. On some level or another nearly every successful story works this way, leading us from one mystery to another, like stepping stones across a river.

In a good story, eventually...

... we recognize that the narrator was telling us more than we could understand, withholding information but also drawing our attention to the very thing we should be looking at.

In two very different contexts, we see the same forces at play. The quality of a description follows from the balance it strikes between what is in the description and what is left out.

To me, this is another example of how a liberal education can benefit students majoring in both the sciences and the humanities [ 1 | 2 ]. We can learn about many common themes and patterns of life from both traditions. Neither is primary. A student can encounter the idea first in the sciences, or first in the humanities, whichever interests the student more. But apprehending a beautiful pattern in multiple domains of discourse can reinforce the idea and make it more salient in the preferred domain. This also broadens our imaginations, allowing us to see more patterns and more subtlety in the patterns we already know.

So: a good description, a good story, depends in some part on the clarity attendant in how it conveys what is important and what is not important. What are the implications of this pattern for programming? A computer program is, after all, a description: an executable description of a domain or phenomenon.

I think this pattern gives us insight into why less code is usually better than more code. Given two functionally equivalent programs of different lengths, we generally prefer the shorter program because it contains only what matters. The excess code found in the longer program obscures from the reader what is essential. Furthermore, as with Miller's concise formulas, a short program offers its reader the gift of more confident understanding and the opportunity for memorable insights.

What is not in a program can tell us a lot, too. One of the hallmarks of expert programmers is their ability to see the negative space in a design or program and learn from it. My students, who are generally novice programmers, struggle with this. They are still learning what matters and how to write descriptions at all, let alone concise ones. They are still learning how to focus their programming tools, and on what to focus them.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

October 07, 2012 2:50 PM

Equality Check Patterns for Recursive Structures

I first encountered this trio of programming patterns when writing Smalltalk programs to manipulate graphs back in the late 1980s. These days I see them most often when comparing recursive data types in language processors. Andrew Black wrote about these patterns in the context of Smalltalk on the Squeak mailing list in the mid-2000s.

~~~~

Recursive Equality Check

Problem

You are working with a recursive data structure. In the simplest form, you might have a list or a record that can contain lists or records as parts. In a language application, you might have a structured type that can have the same type as one of its parts. For instance, a function type might allow a function as an argument or a function as a return value.

You have two instances of the structure and need to compare them, say, for equality or for dominance. In the language app, for example, you will need to verify that the type of an argument matches the type of the formal parameter on a procedure.

Solution

Standard structural recursion works. Walk the two structures in parallel, checking to see that they have the same values in the same positions. When corresponding positions hold values of the same recursive structure, make a recursive call to compare them.
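A minimal sketch in Ruby, using a hypothetical Pair node to stand in for the recursive structure:

    Pair = Struct.new(:left, :right)

    def tree_equal?(a, b)
      unless a.is_a?(Pair) && b.is_a?(Pair)
        return a == b                      # atoms: compare the values directly
      end
      tree_equal?(a.left, b.left) && tree_equal?(a.right, b.right)
    end

    tree_equal?(Pair.new(1, Pair.new(2, 3)),
                Pair.new(1, Pair.new(2, 3)))    # => true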

But what if ...

~~~~

Recursive Equality Check with Identity Check

Problem

An instance of the recursive structure can contain a reference to itself as a value, either directly or through mutual recursion. In the simplest form, this might be a dictionary that contains itself as a value in a key/value pair, as in Smalltalk, where the global variable Smalltalk is the dictionary of all global variables, including Smalltalk.

Comparing two instances now raises concerns. Two instances may be identical, or contain identical components. In such cases, the standard recursive comparison will never terminate.

Solution

Check for identity first. Recurse only if the two values are distinct.
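In the sketch above, that is one added guard:

    def tree_equal?(a, b)
      return true if a.equal?(b)           # identical objects: stop recursing
      unless a.is_a?(Pair) && b.is_a?(Pair)
        return a == b
      end
      tree_equal?(a.left, b.left) && tree_equal?(a.right, b.right)
    end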

But what if...

~~~~

Recursive Equality Check with Cache

Problem

You have two structures that do not share any elements, but they are structurally isomorphic. For example, this can occur in the simplest of structures, two one-element maps:

    a = { :self => a }
    b = { :self => b }

Now, even with an identity test up front, the recursive comparison will never terminate.

Solution

Maintain a cache of compared pairs. Before you begin to compare two objects, check to see if the pair is in the cache. If yes, return true. Otherwise, add the pair to the cache and proceed.

This approach works even though the function has not finished comparing the two objects yet. If there turns out to be a difference between the two, the check currently in progress will find it elsewhere and answer false. There is no need to enter a recursive check.
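Extending the earlier sketch, with the cache keyed by object identity:

    def tree_equal?(a, b, cache = {})
      return true if a.equal?(b)
      unless a.is_a?(Pair) && b.is_a?(Pair)
        return a == b
      end
      key = [a.object_id, b.object_id]
      return true if cache[key]            # already comparing this pair
      cache[key] = true
      tree_equal?(a.left, b.left, cache) &&
        tree_equal?(a.right, b.right, cache)
    end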

~~~~

A variation of this caching technique can also be used in other situations, such as computing a hash value for a recursive structure. If in the course of computing the hash you encounter the same structure again, assume that the value is a suitable constant, such as 0 or 1. Hashes are only approximations anyway, so making only one pass over the structure is usually enough. If you really need a fixpoint, then you can't take this shortcut.
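Sketched with the same hypothetical Pair node as above:

    def tree_hash(node, seen = {})
      return 0 if seen[node.object_id]     # seen it before: use a constant
      return node.hash unless node.is_a?(Pair)
      seen[node.object_id] = true
      [tree_hash(node.left, seen), tree_hash(node.right, seen)].hash
    end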

Ruby hashes handle all three of these problems correctly:

    a = { :first  => :int,
          :second => { :first  => :int, :second => :int } }
    b = { :second => { :second => :int, :first  => :int },
          :first  => :int, }

    a == b    # --> true

    c = {}
    c[:first] = c    # c contains itself
    a = { :first => :int, :second => { :first => c, :second => :int } }
    b = { :first => :int, :second => { :first => c, :second => :int } }

    a == b    # --> true

    a = {}
    a[:self] = a     # a contains itself ...
    b = {}
    b[:self] = b     # ... and so does b

    a == b    # --> true

I don't know if either MRI Ruby or JRuby uses these patterns to implement their solutions, or if they use some other technique.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

September 30, 2012 12:45 PM

StrangeLoop 8: Reactions to Bret Victor's Visible Programming

The last talk I attended at StrangeLoop 2012 was Bret Victor's Visible Programming. He has since posted an extended version of his presentation, as a multimedia essay titled Learnable Programming. You really should read his essay and play the video in which he demonstrates the implementation of his ideas. It is quite impressive, and worthy of the discussion his ideas have engendered over the last few months.

In this entry, I give only a high-level summary of the idea, react to only one of his claims, and discuss only one of his design principles in any detail. This entry grew much longer than I originally intended. If you would like to skip most of my reaction, jump to the mini-essay that is the heart of this entry, Programming By Reacting, in the REPL.

~~~~

Programmers often discuss their productivity as at least a partial result of the programming environments they use. Victor thinks this is dangerously wrong. It implies, he says, that the difficulty with programming is that we aren't doing it fast enough.

But speed is not the problem. The problem is that our programming environments don't help us to think. We do all of our programming in our minds, then we dump our ideas into code via the editor.

Our environments should do more. They should be our external imagination. They should help us see how our programs work as we are writing them.

This is an attractive guiding principle for designing tools to help programmers. Victor elaborates this principle into a set of five design principles for an environment:

  • read the vocabulary -- what do these words mean?
  • follow the flow -- what happens when?
  • see the state -- what is the computer thinking?
  • create by reacting -- start somewhere, then sculpt
  • create by abstracting -- start concrete, then generalize

Victor's talk then discussed each design principle in detail and showed how one might implement the idea using JavaScript and Processing.js in a web browser. The demo was cool enough that the StrangeLoop crowd broke into applause at least twice during the talk. Read the essay.

~~~~

As I watched the talk, I found myself reacting in a way I had not expected. So many people have spoken so highly of this work. The crowd was applauding! Why was I not as enamored? I was impressed, for sure, and I was thinking about ways to use these ideas to improve my teaching. But I wasn't falling head over heels in love.

A Strong Claim

First, I was taken aback by a particular claim that Victor made at the beginning of his talk as one of the justifications for this work:

If a programmer cannot see what a program is doing, she can't understand it.

Unless he means this metaphorically, seeing "in the mind's eye", then it is simply wrong. We do understand things we don't see in physical form. We learn many things without seeing them in physical form. During my doctoral study, I took several courses in philosophy, and only rarely did we have recourse to images of the ideas we were studying. We held ideas in our head, expressed in words, and manipulated them there.

We did externalize ideas, both as a way to learn them and think about them. But we tended to use stories, not pictures. By speaking an idea, or writing it down, and sharing it with others, we could work with them.

So, my discomfort with one of Victor's axioms accounted for some of my unexpected reaction. Professional programmers can and do manipulate ideas abstractly. Visualization can help, but when is it necessary, or even most helpful?

Learning Versus Doing

This leads to a second element of my concern. I think I had a misconception about Victor's work. His talk and its title, "Visible Programming", led me to think his ideas are aimed primarily at working programmers, that we need to make programs visible for all programmers.

The title of his essay, "Learnable Programming", puts his claims into a different context. We need to make programs visible for people who are learning to program. This seems a much more reasonable position on its face. It also lets me see the axiom that bothered me so much in a more sympathetic light: If a novice programmer cannot see what a program is doing, then she may not be able to understand it.

Seeing how a program works is a big part of learning to program. A few years ago, I wrote about "biction" and the power of drawing a picture of what code does. I often find that if I require a student to draw a picture of what his code is doing before he can ask me for debugging help, he will answer his own question before getting to me.

The first time a student experiences this can be a powerful experience. Many students begin to think of programming in a different way when they realize the power of thinking about their programs using tools other than code. Visible programming environments can play a role in helping students think about their programs, outside their code and outside their heads.

I am left puzzling over two thoughts:

  • How much of the value my students see in pictures comes not from seeing the program work but from drawing the picture themselves -- the act of reflecting about the program? If our tools visualize the code for them, will we see the same learning effect that we see when they draw their own pictures?

  • Certainly Victor's visible programming tools can help learners. How much will they help programmers once they become experts? Ben Shneiderman's Designing the User Interface taught me that novices and experts have different needs, and that it's often difficult to know what works well for experts until we run experiments.

Mark Guzdial has written a more detailed analysis of Victor's essay from the perspective of a computer science educator. As always, Mark's ideas are worth reading.

Programming By Reacting, in the REPL

My favorite parts of this talk were the sections on creating by reacting and abstracting. Programmers, Victor says, don't work like other creators. Painters don't stare at a blank canvas, think hard, create a painting in their minds, and then start painting the picture they know they want to create. Sculptors don't stare at a block of stone, envision in their mind's eye the statue they intend to make, and then reproduce that vision in stone. They start creating, and react, both to the work of art they are creating and to the materials they are using.

Programmers, Victor says, should be able to do the same thing -- if only our programming environments helped us.

As a teacher, I think this is an area ripe for improvement in how we help students learn to program. Students open up their text editor or IDE, stare at that blank screen, and are terrified. What do I do now? A lot of my work over the last fifteen to twenty years has been in trying to find ways to help students get started, to help them to overcome the fear of the blank screen.

My approaches haven't been through visualization, but through other ways to think about programs and how we grow them. Elementary patterns can give students tools for thinking about problems and growing their code at a scale larger than characters or language keywords. An agile approach can help them start small, add one feature at a time, proceed in confidence with working tests, and refactor to make their code better as they go along. Adding Victor-style environment support for the code students write in CS1 and CS2 would surely help as well.

However, as I listened to Victor describe support for creating by reacting, and then abstracting variables and functions out of concrete examples, I realized something. Programmers don't typically write code in an environment with data visualizations of the sort Victor proposes, but we do program in the style that such visualizations enable.

We do it in the REPL!

A simple, interactive computer programming environment enables programmers to create by reacting.

  • They write short snippets of code that describe how a new feature will work.
  • They test the code immediately, seeing concrete results from concrete examples.
  • They react to the results, shaping their code in response to what the code and its output tell them.
  • They then abstract working behaviors into functions that can be used to implement another level of functionality.

Programmers from the Lisp and Smalltalk communities, and from the rest of the dynamic programming world, will recognize this style of programming. It's what we do, a form of creating by reacting, from concrete examples in the interaction pane to code in the definitions pane.

In the agile software development world, test-first development encourages a similar style of programming, from concrete examples in the test case to minimal code in the application class. Test-driven design stimulates an even more consciously reactive style of programming, in which the programmer reacts both to the evolving program and to the programmer's evolving understanding of it.

The result is something similar to Victor's goal for programmers as they create abstractions:

The learner always gets the experience of interactively controlling the lower-level details, understanding them, developing trust in them, before handing off that control to an abstraction and moving to a higher level of control.

It seems that Victor would like to provide even more support for novices than these tools do, down to visualizing what the program does as they type each line of code. IDEs with autocomplete are perhaps the closest analog in our current arsenal. Perhaps we can do more, not only for novices but also for professionals.

~~~~

I love the idea that our environments could do more for us, to be our external imaginations.

Like many programmers, though, as I watched this talk, I occasionally wondered, "Sure, this works great if you're creating art in Processing. What about when I'm writing a compiler? What should my editor do then?"

Victor anticipated this question and pre-emptively answered it. Rather than asking, How does this scale to what I do?, we should turn the question inside out and ask, These are the design requirements for a good environment. How do we change programming to fit?

I doubt such a dogmatic turn will convince skeptics with serious doubts about this approach.

I do think, though, that we can reformulate the original question in a way that focuses on helping "real" programmers. What does a non-graphical programmer need in an external imagination? What kind of feedback -- frequent, even in-the-moment -- would be most helpful to, say, a compiler writer? How could our REPLs provide even more support for creating, reacting, and abstracting?

These questions are worth asking, whatever one thinks of Victor's particular proposal. Programmers should be grateful for his causing us to ask them.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 29, 2012 3:40 PM

StrangeLoop 6: Y Y

I don't know if it was coincidence or by design of the conference organizers, but Wednesday morning was a topical repeat of Tuesday morning for me: two highly engaging talks on functional programming. I had originally intended to write them up in a single entry, but that write-up grew so long that I decided to give them their own entries.

Y Not?

Watching talks and reading papers about the Y combinator are something of a spectator code kata for me. I love to see new treatments, and enjoy seeing even standard treatments every now and then. Jim Weirich presented it at StrangeLoop with a twist I hadn't seen before.

Weirich opened, as speakers often do, with a disclaimer. This is a motivational talk, so it should be...

  • non-technical. But it's not. It is highly technical.
  • relevant. But it's not. It is extremely pointless.
  • good code. But it's not. It shows the worst Clojure code ever.

But it will be, he promises, fun!

Before diving in, he had one more joke, or at least the first half of one. He asked for audience participation, then asked his volunteer to calculate cos(n) for some value of n I missed. Then he asked the person to keep hitting the cosine button repeatedly until he told him to stop.

At the dawn of computing, two different approaches were taken in an effort to answer the question, What is effectively computable?

Alan Turing devised what we now call a universal Turing machine to embody the idea. Weirich showed a video demonstration of a physical Turing machine to give his audience a sense of what a TM is like.

(If you'd like to read more about Turing and the implication of his universal machine, check out this reflection I wrote earlier this year after a visit by Doug Hofstadter to my campus. Let's just say that the universal TM means more than just an answer to what functions are effectively computable.)

A bit ahead of Turing, Alonzo Church devised an answer to the same question in the form of the lambda calculus, a formal logical system. As with the universal TM, the lambda calculus can be used to compute everything, for a particular value of everything. These days, nearly every programming language has lambdas of some form.

... now came the second half of the joke running in the background. Weirich asked his audience collaborator what was in his calculator's display. The assistant called out some number, 0.7... Then Weirich showed his next slide -- the same number, taken out to many more digits. How was he able to do this? There is a number n such that cos(n) = n. By repeatedly pressing his cosine button, Weirich's assistant eventually reached it. That number n is called the fixed point of the cosine function. Other functions have fixed points too, and they can be a source of great fun.

Then Weirich opened up his editor and wrote some code from the ground up to teach some important concepts of functional programming, using the innocuous function 3(n+1). With this short demo, Weirich demonstrated the idea of a higher-order function, including function factories, and a set of useful functional refactorings (two of which are sketched after the list):

  • Introduce Binding
    -- where the new binding is unused in the body
  • Inline Definition
    -- where a call to a function is replaced by the function body, suitably parameterized
  • Wrap Function
    -- where an expression is replaced by a function call that computes the expression
  • Tennent Correspondence Principle
    -- where an expression is turned into a thunk
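Weirich worked in Clojure; here is a rough Ruby rendering of the last two moves, applied to a similar innocuous function (the names are mine, not his):

    f = ->(n) { 3 * (n + 1) }

    # Wrap Function: replace an expression with a call that computes it
    f = ->(n) { ->(m) { 3 * (m + 1) }.call(n) }

    # Tennent Correspondence Principle: turn an expression into a thunk
    f = ->(n) { -> { 3 * (n + 1) }.call }

    f.call(2)    # => 9 after every step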

At the end of his exercise, Weirich had created a big function call that contained no named function definitions yet computed the same answer.

He asks the crowd for applause, then demurs. This is 80-year-old technology. Now you know, he says, what a "chief scientist" at New Context does. (Looks a lot like what an academic might do...)

Weirich began a second coding exercise, the point behind all his exposition to this point: He wrote the factorial function, and began to factor and refactor it just as he had the simpler 3(n+1). But now inlining the function breaks the code! There is a recursive call, and the name is now out of scope. What to do?

He refactors, and refactors some more, until the body of factorial is an argument to a big melange of lambdas and applications of lambdas. The result is a function that computes the fixed point of any function passed it.

That is Y. The Y combinator.

Weirich talked a bit about Y and related ideas, and why it matters. He closed with a quote from Wittgenstein, from Philosophical Investigations:

The aspects of things that are most important for us are hidden because of their simplicity and familiarity. (One is unable to notice something -- because it is always before one's eyes.) The real foundations of his enquiry do not strike a man at all. Unless that fact has at some time struck him. -- And this means: we fail to be struck by what, once seen, is most striking and most powerful.

The thing that sets Weirich's presentation of Y apart from the many others I've seen is its explicit use of refactoring to derive Y. He created Y from a sequence of working pieces of code, each the result of a refactoring we can all understand. I love to do this sort of thing when teaching programming ideas, and I was pleased to see it used to such good effect on such a challenging idea.

The title of this talk -- Y Not? -- plays on Y's interrogative homonym. Another classic in this genre echoes the homonym in its title, then goes on to explain Y in four pages of English and Scheme. I suggest that you study @rpg's essay while waiting for Weirich's talk to hit InfoQ. Then watch Weirich's talk. You'll like it.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

September 26, 2012 8:12 PM

StrangeLoop 3: Functional Programming 1 -- Monads and Patterns

The StrangeLoop program had a fair number of functional programming talks, and I availed myself of two to complete the first morning of the conference.

Monad Examples for Normal People

The web is full of tutorials claiming to explain monads in a way that anyone can understand. If any of them had succeeded, we wouldn't need another. How could I not attend a talk claiming to slay this dragon?

Dustin Getz started out with a traditional starting point: a sequence of operations that can be viewed as a composition of functions. That works great for standard business logic. But consider a common exceptional case: given a name, looking up an account number fails. This requires us to break the symmetry of the code with guards. These break composition, because now the return type of the function doesn't fit.

The Maybe monad factors these guards out of the business logic. If we further need to record and capture error codes, we can use the Error monad, which factors the same sort of plumbing out of the business logic and also serves as a facade for a tuple of value and error code.
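Here is a tiny Ruby approximation of the Maybe idea; the lookup tables and the bind helper are mine, not Getz's:

    ACCOUNTS = { "alice" => 42 }
    BALANCES = { 42 => 100.0 }

    # bind short-circuits on nil, standing in for Maybe's plumbing
    def bind(value)
      value.nil? ? nil : yield(value)
    end

    def balance_report(name)
      bind(ACCOUNTS[name]) { |acct| bind(BALANCES[acct]) { |b| "$%.2f" % b } }
    end

    balance_report("alice")    # => "$100.00"
    balance_report("bob")      # => nil, with no guards in the business logic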

After these simple examples, the speaker dove into the sort of exercise that tends to lose the interest of programmers in the trenches building apps: building a Lisp interpreter in Python, using monads to compose the elements of the interpreter. The environment consists of a combination of the reader monad and the writer monad; the interpreter consists of a combination of the environment and the error monad. Several other monads play a role in representing values, built-in procedures, state, and non-standard control operators. An interpreter is, he said, "monad heaven".

The best part of this talk's message was in viewing a monad as a design pattern that abstracts repetitive plumbing out of applications in a way that preserves function composition.

After the talk, someone asked a question to the effect, "I get by fine with macros and higher-order functions. When do I need monads?" Getz answered from his personal experience: monads enable him to write more elegant code, by factoring repetition that other tools could not reach as nicely.

This wasn't the monad explanation to end the need for new monad explanations, but it was a decent effort. With Getz's focus on factoring code and the question mentioning macros, I could not help but think of this essay that presents monads in the light of code transformation, and Brian Marick's approach treating a monad as a macro. Perhaps we are getting near a metaphor for monads that will help "normal people" grok them without resorting to abstract math.

Functional Design Patterns

From the moment I saw the StrangeLoop line-up, I was excited to see Stuart Sierra speak on functional design patterns. Sierra is one of the few prominent people in the Lisp and Clojure worlds to acknowledge the value of design patterns in functional style -- heck, even to acknowledge they exist.

He opened his talk in a way that confronted the view commonly held among Lispers. He conceded that, for many, "design pattern" is a loaded term, bringing to mind an OO cult and the ominous voice of the Gang of Four. The thing is, Sierra said, Design Patterns is a pretty good book, given the time it was written and the programming language community to which it speaks. However, in the functional language community, the design patterns in that book are best known for being debunked by Peter Norvig in a 1998 tutorial.

Sierra reminded this audience that patterns can occur at all levels of a program. He pointed to a lower-profile patterns book of the mid-1990s, Pattern-Oriented Software Architecture (now a five-volume series), which organized patterns at multiple levels:

  • architectural -- across components
  • design -- within components
  • idiom -- within a particular language

Sierra then went on to list, and describe briefly, several common patterns he has noticed in functional programs and used himself in Clojure. Like POSA, he organized them into categories. Before proceeding, he admitted to any Haskell programmers in the room that, yes, many of these patterns are monadic in nature.

I'd very much like to write about some of Sierra's patterns in greater detail than a single entry permits, including providing links to blog entries he and others have written about them. For now, let me list the ones I jotted down, in Sierra's categories:

  • state patterns
    • State/Event, aka Event Sourcing
    • Consequences

  • data-building patterns
    • Accumulator
    • Reduce/Combine
    • Recursive Expansion

  • control flow patterns
    • Pipeline
    • Wrapper
    • Token
    • Observer
    • Strategy

Before describing Reduce/Combine, Sierra took a short digression to talk about MapReduce, a pattern accepted by many in the world of big data. He reminded us that this pattern is predicated on the spinning disk being the bottleneck of our systems. In the future, this pattern will become less useful as other forces come to dominate our system architectures.

Two of the control flow patterns, Observer and Strategy, are held in common with the GoF catalog, though in the context of functional programming a few new variations become more obvious. It also seemed to me that Sierra's Wrapper is a lot like the GoF's Decorator, though he did not make an explicit connection.

As I wrote a couple of years ago, the time is right for functional design patterns. I'm so glad that Sierra has been documenting patterns of this sort and articulating the value of thinking in terms of patterns. The key is not to look to OO programs for patterns of value to functional programmers, but to look at functional programs for recurring choices among design alternatives. (It's not too surprising that many OO design patterns don't mean much to functional programmers, just as it's not surprising that FP patterns dealing with, say, state are afterthoughts in the OO world.)

I look forward to reviewing Sierra's slides, available in the StrangeLoop 2012 GitHub repo, and coming back to the topic of functional design patterns soon.

The first day of StrangeLoop was off to a great start.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

September 05, 2012 5:24 PM

Living with the Masters

I sometimes feel guilty that most of what I write here describes connections between teaching or software development and what I see in other parts of the world. These connections are valuable to me, though, and writing them down is valuable in another way.

I'm certainly not alone. In Why Read?, Mark Edmundson argues for the value of reading great literature and trying on the authors' views of the world. Doing so enables us to better understand our own view of the world. It also gives us the raw material out of which to change our worldview, or build a new one, when we encounter better ideas. In the chapter "Discipline", Edmundson writes:

The kind of reading that I have been describing here -- the individual quest for what truth a work reveals -- is fit for virtually all significant forms of creation. We can seek vital options in any number of places. They may be found for this or that individual in painting, in music, in sculpture, in the arts of furniture making or gardening. Thoreau felt he could derive a substantial wisdom by tending his bean field. He aspired to "know beans". He hoed for sustenance, as he tells us, but he also hoed in search of tropes, comparisons between what happened in the garden and what happened elsewhere in the world. In his bean field, Thoreau sought ways to turn language -- and life -- away from old stabilities.

I hope that some of my tropes are valuable to you.

The way Edmundson writes of literature and the liberal arts applies to the world of software in much more direct ways, too. First, there is the research literature of computing and software development. One can seek truth in the work of Alan Kay, David Ungar, Ward Cunningham, or Kent Beck. One can find vital options in the life's work of Robert Floyd, Peter Landin, or Alan Turing; Herbert Simon, Marvin Minsky, or John McCarthy. I spent much of my time in grad school immersed in the writings and work of B. Chandrasekaran, which affected my view of intelligence in both humans and machines.

Each of these people offers a particular view into a particular part of the computing world. Trying out their worldviews can help us articulate our own worldviews better, and in the process of living their truths we sometimes find important new truths for ourselves.

We in computing need not limit ourselves to the study of research papers and books. As Edmundson says, the individual quest for the truth revealed in a work "is fit for virtually all significant forms of creation". Software is a significant form of creation, one not available to our ancestors even sixty years ago. Live inside any non-trivial piece of software for a while, especially one that has withstood the buffets of human desire over a period of time, and you will encounter truth -- truths you find there, and truths you create for yourself. A few months trying on Smalltalk and its peculiar view of the world taught me OOP and a whole lot more.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

August 08, 2012 1:50 PM

Examples First, Names Last

Earlier this week, I reviewed a draft chapter from a book a friend is writing, which included a short section on aspect-oriented programming. The section used common AOP jargon: "cross cutting", "advice", and "point cut". I know enough about AOP to follow his text, but I figured that many of his readers -- young software developers from a variety of backgrounds -- would not. On his use of "cross cutting", I commented:

Your ... example helps to make this section concrete, but I bet you could come up with a way of explaining the idea behind AOP in a few sentences that would be (1) clear to readers and (2) not use "cross cutting". Then you could introduce the term as the name of something they already understand.

This may remind you of the famous passage from Richard Feynman about learning names and understanding things. (It is also available on a popular video clip.) Given that I was reviewing a chapter for a book of software patterns, it also brought back memories of advice that Ralph Johnson gave many years ago on the patterns discussion list. Most people, he said, learn best from concrete examples. As a result, we should write software patterns in such a way that we lead with a good example or two and only then talk about the general case. In pattern style, he called this idea "Concrete Before Abstract".

I try to follow this advice in my teaching, though I am not dogmatic about it. There is a lot of value in mixing up how we organize class sessions and lectures. First, different students connect better with some approaches than others, so variety increases the chances of connecting with everyone a few times each semester. Second, variety helps to keep students interested, and being interested is a key ingredient in learning.

Still, I have a preference for approaches that get students thinking about real code as early as possible. Starting off by talking about polymorphism and its theoretical forms is a lot less effective at getting the idea across to undergrads than showing students a well-chosen example or two of how plugging a new object into an application makes it easier to extend and modify programs.

So, right now, I have "Concrete Before Abstract" firmly in mind as I prepare to teach object-oriented programming to our sophomores this fall.

Classes start in twelve days. I figured I'd be blogging more by now about my preparations, but I have been rethinking nearly everything about the way I teach the course. That has left my mind more muddled than settled for long stretches. Still, my blog is my outboard brain, so I should be rethinking more in writing.

I did have one crazy idea last night. My wife learned Scratch at a workshop this summer and was talking about her plans to use it as a teaching tool in class this fall. It occurred to me that implementing Scratch would be a fun exercise for my class. We'll be learning Java and a little graphics programming as a part of the course, and conceptually Scratch is not too many steps from the pinball game construction kit in Budd's Understanding Object-Oriented Programming with Java, the textbook I have used many times in the course. I'm guessing that Budd's example was inspired by Bill Budge's game for Electronic Arts, Pinball Construction Set. (Unfortunately, Budd's text is now almost as out of date as that 1983 game.)

Here is an image of a game constructed using the pinball kit and Java's AWT graphics framework:

a pinball game constructed using a simple game kit

The graphical ideas needed to implement Scratch are a bit more complex, including at least:

  • The items on the canvas must be clickable and respond to messages.
  • Items must be able to "snap" together to create units of program. This could happen when a container item such as a choice or loop comes into contact with an item it is to contain.

The latter is an extension of collision-detecting behavior that students would be familiar with from earlier "ball world" examples. The former is something we occasionally do in class anyway; it's awfully handy to be able to reconfigure the playing field after seeing how the game behaves with the ball in play. The biggest change would be that the game items are little pieces of program that know how to "interpret" themselves.

As always, the utility of a possible teaching idea lies in the details of implementing it. I'll give it a quick go over the next week to see if it's something I think students would be able to handle, either as a programming assignment or as an example we build and discuss in class.

I'm pretty excited by the prospect, though. If this works out, it will give me a nice way to sneak basic language processing into the course in a fun way. CS students should see and think about languages and how programs are processed throughout their undergrad years, not only in theory courses and programming languages courses.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

July 11, 2012 2:45 PM

A Few Comments on the Alan Kay Interview, and Especially Patterns

Alan Kay

Many of my friends and colleagues on Twitter today are discussing the Interview with Alan Kay posted by Dr. Dobb's yesterday. I read the piece this morning while riding the exercise bike and could not contain my desire to underline passages, star paragraphs, and mark it up with my own comments. That's hard to do while riding hard, hurting a little, and perspiring a lot. My desire propelled me forward in the face of all these obstacles.

Kay is always provocative, and in this interview he leaves no oxen ungored. Like most people reading outrageous and provocative claims, I cheered when Kay said something I agreed with and hissed -- or blushed -- when he said something that gored me or one of my pet oxen. Twitter is a natural place to share one's cheers and boos for an article with or by Alan Kay, given the amazing density of soundbites one finds in his comments about the world of computing.

(One might say the same thing about Brian Foote, the source of both soundbites in that paragraph.)

I won't air all my cheers and hisses here. Read the article, if you haven't already, and enjoy your own. I will comment on one paragraph that didn't quite make me blush:

The most disastrous thing about programming -- to pick one of the 10 most disastrous things about programming -- there's a very popular movement based on pattern languages. When Christopher Alexander first did that in architecture, he was looking at 2,000 years of ways that humans have made themselves comfortable. So there was actually something to it, because he was dealing with a genome that hasn't changed that much. I think he got a few hundred valuable patterns out of it. But the bug in trying to do that in computing is the assumption that we know anything at all about programming. So extracting patterns from today's programming practices ennobles them in a way they don't deserve. It actually gives them more cachet.

Long-time Knowing and Doing readers know that patterns are one of my pet oxen, so it would have been natural for me to react somewhat as Keith Ray did and chide Kay for what appears to be a typical "Hey, kids, get off my lawn!" attitude. But that's not my style, and I'm such a big fan of Kay's larger vision for computing that my first reaction was to feel a little sheepish. Have I been wasting my time on a bad idea, distracting myself from something more important? I puzzled over this all morning, and especially as I read other people's reactions to the interview.

Ultimately, I think that Kay is too pessimistic when he says we hardly know anything at all about programming. We may well be closer to the level of the Egyptians who built the pyramids than we are to the engineers who built the Empire State Building. But I simply don't believe that people such as Ward Cunningham, Ralph Johnson, and Martin Fowler don't have a lot to teach most of us about how to make better software.

Wherever we are, I think it's useful to identify, describe, and catalog the patterns we see in our software. Doing so enables us to talk about our code at a level higher than parentheses and semicolons. It helps us bring other programmers up to speed more quickly, so that we don't all have to struggle through all the same detours and tar pits our forebears struggled through. It also makes it possible for us to talk about the strengths and weaknesses of our current patterns and to seek out better ideas and to adopt -- or design -- more powerful languages. These are themes Kay himself expresses in this very same interview: the importance of knowing our history, of making more powerful languages, and of education.

Kay says something about education in this interview that is relevant to the conversation on patterns:

Education is a double-edged sword. You have to start where people are, but if you stay there, you're not educating.

The real bug in what he says about patterns lies at one edge of the sword. We may not know very much about how to make software yet, but if we want to remedy that, we need to start where people are. Most software patterns are an effort to reach programmers who work in the trenches, to teach them a little of what we do know about how to make software. I can yammer on all I want about functional programming. If a Java practitioner doesn't appreciate the idea of a Value Object yet, then my words are likely wasted.

Ward Cunningham

Ironically, many argue that the biggest disappointment of the software patterns effort lies at the other edge of education's sword: an inability to move the programming world quickly enough from where it was in the mid-1990s to a better place. In his own Dr. Dobb's interview, Ward Cunningham observed with a hint of sadness that an unexpected effect of the Gang of Four Design Patterns book was to extend the life of C++ by a decade, rather than reinvigorating Smalltalk (or turning people on to Lisp). Changing the mindset of a large community takes time. Many in the software patterns community tried to move people past a static view of OO design embodied in the GoF book, but the vocabulary calcified more quickly than they could respond.

Perhaps that is all Kay meant by his criticism that patterns "ennoble" practices in a way they don't deserve. But if so, it hardly qualifies in my mind as "one of the 10 most disastrous things about programming". I can think of a lot worse.

Kurt Vonnegut

To all this, I can only echo the Bokononists in Kurt Vonnegut's novel Cat's Cradle: "Busy, busy, busy." The machinery of life is usually more complicated and unpredictable than we expect or prefer. As a result, reasonable efforts don't always turn out as we intend them to. So it goes. I don't think that means we should stop trying.

Don't let my hissing about one paragraph in the interview dissuade you from reading the Dr. Dobb's interview. As usual, Kay stimulates our minds and encourages us to do better.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

July 05, 2012 2:48 PM

The Value of a Liberal Education, Advertising Edition

A few days ago, I mentioned James Webb Young's A Technique for Producing Ideas. It turns out that Young was in advertising. He writes:

The construction of an advertisement is the construction of a new pattern in this kaleidoscopic world in which we live. The more of the elements of that world which are stored away in that pattern-making machine, the mind, the more the chances are increased for the production of new and striking combinations, or ideas. Advertising students who get restless about the "practical" value of general college subjects might consider this.

Computer Science students, too.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

June 13, 2012 2:19 PM

The First Rule of Programming in Ruby

When I was thinking about implementing the cool programming idiom I blogged about yesterday, I forgot the first rule of programming in Ruby:

It's already in there.

It didn't take long after my post went live before readers began to point out that Ruby already offers the functionality of my sub and J's { operator, via Array#values_at:

     >> [10, 5, 9, 6, 20, 17, 1].values_at(6, 1, 3, 2, 0, 5, 4)
     => [1, 5, 6, 9, 10, 17, 20]

I'm not surprised. I should probably spend a few minutes every day browsing the documentation for a randomly-chosen Ruby class. There's so much to find! There's also too much to remember out of context, but I never know when I'll come across a need and have a vague recollection that Ruby already does what I need.

Reader Gary Wright pointed out that I can get closer to R's syntax by aliasing values_at with an unused array operator, say:

     class Array
       def %(*args)
         values_at(*args.first)
       end
     end

Now my use of % is as idiomatic in Ruby as { is in J:

     >> [10, 5, 9, 6, 20, 17, 1] % [6, 1, 3, 2, 0, 5, 4]
     => [1, 5, 6, 9, 10, 17, 20]

I am happy to learn that Ruby already has my method and am just as happy to have spent time thinking about and implementing it on my own. I like to play with the ideas as much as I like knowing the vast class library of a modern scripting language.

(If I were writing Ruby code for a living, my boss might not look upon my sense of exploration so favorably... There are advantages to being an academic!)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

June 12, 2012 3:44 PM

Faking a Cool Programming Idiom in Ruby

Last week, James Hague blogged about a programming idiom you've never heard of: fetching multiple items from an array with a single operation.

Let's say the initial array is this:
     10 5 9 6 20 17 1

Fetching the values at indices 0, 1, 3, and 6, gives:

     10 5 6 1

You can do this directly in APL-style languages such as J and R. In J, for example, you use the { operator:

     0 1 3 6 { 10 5 9 6 20 17 1

Such an operator enables you to do some crazy things, like producing a sorted array by accessing it with a permuted set of indices. This:

     6 1 3 2 0 5 4 { 10 5 9 6 20 17 1

produces this:

     1 5 6 9 10 17 20

When I saw this, my first thought was, "Very cool!" It's been a long time since I programmed in APL, and if this is even possible in APL, I'd forgotten.

One of my next thoughts was, "I bet I can fake that in Ruby...".

I just need a way to pass multiple indices to the array, invoking a method that fetches one value at a time and returns them all. So I created an Array method named sub that takes the indices as an array.

     class Array
       def sub slots
         slots.map {|i| self[i] }
       end
     end

Now I can say:

     [10, 5, 9, 6, 20, 17, 1].sub([6, 1, 3, 2, 0, 5, 4])

and produce [1, 5, 6, 9, 10, 17, 20].

The J solution is a little cleaner, because my method requires extra syntax to create an array of indices. We can do better by using Ruby's splat operator, *. splat gathers up loose arguments into a single collection.

     class Array
       def sub(*slots)             # splat the parameter
         slots.map {|i| self[i] }
       end
     end

This definition allows us to send sub any number of integer arguments, all of which will be captured into the parameter slots.

Now I can produce the sorted array by saying:

     [10, 5, 9, 6, 20, 17, 1].sub(6, 1, 3, 2, 0, 5, 4)

Of course, Ruby allows us to omit the parentheses when we send a message as long as the result is unambiguous. So we can go one step further:

     [10, 5, 9, 6, 20, 17, 1].sub 6, 1, 3, 2, 0, 5, 4

Not bad. We are pretty close to the APL-style solution now. Instead of {, we have .sub. And Ruby requires comma-separated argument lists, so we have to use commas when we invoke the method. These are syntactic limitations placed on us by Ruby.

Still, with relatively simple code we are able to fake Hague's idiom quite nicely. With a touch more complexity, we could write sub to allow either the unbundled indices or a single array containing all the indices. This would make the code fit nicely with other Ruby idioms that produce array values.
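
Here is one way to write that more flexible version, as a minimal sketch; flatten does the work of accepting either calling convention:

     class Array
       def sub(*slots)
         slots = slots.flatten      # accept loose indices or one array of indices
         slots.map {|i| self[i] }
       end
     end

     [10, 5, 9, 6, 20, 17, 1].sub(6, 1, 3, 2, 0, 5, 4)     # => [1, 5, 6, 9, 10, 17, 20]
     [10, 5, 9, 6, 20, 17, 1].sub([6, 1, 3, 2, 0, 5, 4])   # => [1, 5, 6, 9, 10, 17, 20]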

If you have stronger Ruby-fu than I and can suggest a more elegant implementation, please share. I'd love to learn something new!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

May 10, 2012 12:18 PM

Code Signatures in Lisp

Recently, @fogus tweeted:

I wonder if McCarthy had to deal with complaints of parentheses count in the earliest Lisps?

For me, this tweet immediately brought to mind Ward Cunningham's experiment with file signatures as an aid in browsing unfamiliar code, which he presented at a workshop on "software archeology" at OOPSLA 2001. In his experiment, Ward collapsed each file in the Java 1.3 source code distribution into a single line consisting of only braces, quotes, and semicolons. For example, the AWT class java.awt.peer.ComponentPeer looked like this:

    ;;;;;;;;{;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;} 

while java.awt.print.PageFormat looked like this:

    {;{;;}{;;};}{;;{;}{;};}{;;{;}{;};}{;{;;;;;;"";};}{;{;;;;;;"";};}{;{;}{;};}{;{;}{;};}{}{}{;}{;}{{;}{;}}{;}{;;;;;;}{}{;{;;;;;;;;;;;;;;;;;;;;;;};}}

As Ward said, it takes some time to get used to the "radical summarization" of files into such punctuation signatures. He was curious about how such a high-level view of a code base might help a programmer understand the regularities and irregularities in the code, via an interactive process of inspection and projection.

Perhaps this came to mind as a result of experiences I had when I was first learning to program in Scheme. Coming from other languages with more syntax, I developed a bad habit of writing code like this:

    (define factorial
      (lambda (n)
        (if (zero? n)
            1
            (* n (factorial (- n 1)))
        )))

When real Scheme and Lisp programmers saw my code, they suggested that I put those closing parens at the end of the multiplication line. They were even more insistent when I dropped closing parens onto separate lines in the middle of a larger piece of code, say, with a let expression of several complex values. I objected that the line breaks helped me to see the structure of my code better. They told me to trust them; after I had more experience, I wouldn't need the help, and my code would be cleaner and more idiomatic.

They were right. Eventually, I learned to read Scheme code more like real Schemers do. I found myself drawn to the densest parts of the code, in which those closing parens often played a role, and learned to see that that's where the action usually was.

I think it was the connection between counting parentheses and the structure of code that brought to mind Ward's work. And then I wondered: what would it be like to take the signature of Lisp or Scheme code in terms of its maligned primary punctuation, the parentheses?

In a few spare minutes, I fiddled with the idea. As an example, consider the following Lisp function, which is part of an implementation of CLOS written by Patrick Henry Winston and Berthold Horn to support their AI and Lisp textbooks:

    (defun call-next-method ()
      (if *around-methods*
          (apply (pop *around-methods*) *args*)
        (progn
          (do () ((endp *before-methods*))
            (apply (pop *before-methods*) *args*))
          (multiple-value-prog1
              (if *primary-methods*
                  (apply (pop *primary-methods*) *args*)
                (error "Oops, no applicable primary method!")) 
            (do () ((endp *after-methods*))
              (apply (pop *after-methods*) *args*))))))

Collapsing this function into a single line of parentheses results in:

    (()((())((()(())(()))(((())())(()(())(()))))))

The semicolons in Java code give the reader a sense of the code's length; collapsing Lisp in this way loses the line breaks. So I wrote another function to insert a | where the line breaks had been, which results in:

    (()|(|(())|(|(()(())|(()))|(|(|(())|())|(()(())|(()))))))

This gives a better idea of how the code plays out on the page, but it loses all sense of the code's structure, which is so important when reading Lisp. So I implemented a third signature, one that surrenders the benefit of a single line in exchange for a better sense of structure. This signature preserves leading white space and line breaks but otherwise gives us just the parentheses:

    (()
      (
          (())
        (
          (()(())
          (()))
          (
            (
           (())
               ())
        (()(())
          (()))))))

Interesting. It's almost art.
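
The flat signatures, at least, take only a few lines to produce. Here is a minimal sketch in Ruby -- not the code I used, just enough to play along at home:

     # keep only the parentheses in a source string
     def flat_signature(source)
       source.delete("^()")
     end

     # the same, but mark where the line breaks had been
     def piped_signature(source)
       source.lines.map {|line| line.delete("^()") }.join("|")
     end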

I think there is a lot of room left to explore here in terms of punctuation. To capture the nature of Scheme and Lisp programs, we would probably want to include other characters, such as the hash, the comma, quotes, and backquotes. These would expose macro-related expressions to the human reader. To expand the experiment to include Clojure, we would of course want to include [] and {} in the signatures.

I'm not an every-day Schemer, so I am not sure how much either the flat signatures or the structured signatures would help seasoned Lisp or Scheme programmers develop an intuitive sense of a function's size, complexity, and patterns. As Ward's experiment showed, the real value comes when signing entire files, and for that task flat signatures may have more appeal. It would be neat to apply this idea to a Lisp distribution of non-trivial size -- say, the full distribution of Racket or Clojure -- and see what might be learned.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

April 03, 2012 4:00 PM

Intermediate Representations and Life Beyond the Compiler

In the simplest cases, a compiler can generate target code directly from the abstract syntax tree:

[diagram: generating target code directly from the abstract syntax tree]

In many cases, though, there are good reasons why we don't want to generate code for the target machine immediately. One is modularity. A big part of code generation for a particular target machine is machine-dependent. If we write a monolithic code generator, then we will have to reimplement the machine-independent parts every time we want to target a new machine.

Even if we stick with one back-end architecture, modularity helps us. Not all of the machine-dependent elements of code generation depend in the same way on the machine. If we write a monolithic code generator, then any small change in the target machine -- perhaps even a small upgrade in the processor -- can cause changes throughout our program. If instead we write a modular code generator, with modules that reflect particular shear layers in the generation process, à la How Buildings Learn, then we may be able to contain changes in target machine specification to an easily identified subset of modules.

So, more generally we think of code generation in two parts:

  • one or more machine-independent transformations from an abstract syntax tree to intermediate representations of the program, followed by

  • one or more machine-dependent transformations from the final intermediate representation to machine code.

[diagram: generating target code from the abstract syntax tree via intermediate representations]

Intermediate representations between the abstract syntax tree and assembly code have other advantages, too. In particular, they enable us to optimize code in machine-independent ways, without having to manipulate a complex target language.
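
As a minimal sketch of what a machine-independent optimization looks like, here is constant folding over a hypothetical tuple-based IR. The instruction format and names are invented for illustration:

     # IR instructions are [op, dest, arg1, arg2] tuples
     def fold_constants(ir)
       ir.map do |op, dest, a, b|
         if op == :add && a.is_a?(Integer) && b.is_a?(Integer)
           [:load, dest, a + b]    # replace the addition with a constant load
         else
           [op, dest, a, b]
         end
       end
     end

     fold_constants [[:add, :t1, 2, 3], [:add, :t2, :t1, :x]]
     # => [[:load, :t1, 5], [:add, :t2, :t1, :x]]

No part of this transformation knows anything about the target machine, so it can sit between any front end and any back end.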

In practice, an intermediate representation sometimes outlives the compiler for which it was created. Chris Clark described an example of this phenomenon in Build a Tree--Save a Parse:

Sometimes the intermediate language (IL) takes on a life of its own. Several systems that have multiple parsers, sophisticated optimizers, and multiple code generators have been developed and marketed commercially. Each of these systems has its own common virtual assembly language used by the various parsers and code generators. These intermediate languages all began connecting just one parser to one code generator.

P-code is an example IL that took on a life of its own. It was invented by Niklaus Wirth as the IL for the ETH Pascal compiler. Many variants of that compiler arose [Nel79], including the UCSD Pascal compiler that was used at Stanford to define an optimizer [Cho83]. Chow's compiler evolved into the MIPS compiler suite, which was the basis for one of the DEC C compilers -- acc. That compiler did not parse the same language nor use any code from the ETH compiler, but the IL survived.

Good language design usually pays off, sometimes in unexpected ways.

(If you like creating languages and writing language processors, Clark's paper is worth a read!)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

March 18, 2012 5:20 PM

Thinking Out Loud about the Compiler in a Pure OO World

John Cook pokes fun at OO practice in his blog post today. The "Obviously a vast improvement." comment deftly ignores the potential source of OOP's benefits, but then that's the key to the joke.

A commenter points to a blog entry by Smalltalk veteran Travis Griggs. I agree with Griggs's recommendation to avoid using verbs-turned-into-nouns as objects, especially lame placeholder words such as "manager" and "loader". As he says, they usually denote objects that fiddle with the private parts of other objects. Those other objects should be providing the services we need.

Griggs allows reasonable linguistic exceptions to the advice. But he also acknowledges the pull of pop culture which, given my teaching assignment this semester, jumped out at me:

There are many 'er' words that despite their focus on what they do, have become so commonplace, that we're best to just stick with them, at least in part. Parser. Compiler. Browser.

I've thought about this break in my own OO discipline before, and now I'm thinking about it again. What would it be like to write compilers without creating parsers and code generators -- and compilers themselves -- as objects?

We could ask a program to compile itself:

     program.compileTo( targetMachine )

But is the program a program, or does it start life as a text file? If the program starts as a text file, perhaps we say

     program.parse()

to create an abstract syntax tree, which we then could ask

     ast.compileTo( targetMachine )

(Instead of sending a parse() message, we might send an asAbstractSyntax() message. There may be no functional difference, but I think the two names indicate subtle differences in mindset.)

When my students write their compilers in Java or another OO language, we discuss in class whether abstract syntax trees ought to be able to generate target code for themselves. The danger lies in binding the AST class to the details of a specific target machine. We can separate the details of the target machine from the AST by passing an argument with the compileTo() message, but what should we pass?

Given all the other things we have to learn in the course, my students usually end up following Griggs's advice and doing the traditional thing: pass the AST as an argument to a CodeGenerator object. If we had more time, or a more intensive OO design course prior to the compilers course, we could look at techniques that enable a more OO approach without making things worse in the process.
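
For what it's worth, the usual middle ground is double dispatch: the AST keeps control of the traversal but delegates every machine-dependent detail to the generator object it is handed. A minimal sketch in Ruby, with invented class names:

     class LiteralNode
       def initialize(value)
         @value = value
       end

       def compile_to(generator)
         generator.emit_push(@value)
       end
     end

     class SumNode
       def initialize(left, right)
         @left, @right = left, right
       end

       def compile_to(generator)
         @left.compile_to(generator)    # the tree drives the traversal...
         @right.compile_to(generator)
         generator.emit_add             # ...the generator knows the target
       end
     end

     class StackCodeGenerator           # one possible target among many
       def emit_push(value)
         puts "PUSH #{value}"
       end

       def emit_add
         puts "ADD"
       end
     end

     SumNode.new(LiteralNode.new(2), LiteralNode.new(3)).compile_to(StackCodeGenerator.new)
     # prints PUSH 2, PUSH 3, ADD

The AST classes never name a machine; swapping in a different generator changes the target without touching the tree.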

Looking back farther to the parse behavior, would it ever make sense to send an argument with the parse() message? Perhaps a parse table for an LL(1) or LR(1) algorithm? Or the parsing algorithm itself, as a strategy object? We quickly run the risk of taking steps in the direction that Cook joshes about in his post.

Or perhaps parsing is a natural result of creating a Program object from a piece of text. In that approach, when we say

     Program source = new Program( textfile );

the internal state of source is an abstract syntax tree. This may sound strange at first, but a program isn't really a piece of text. It's just that we are conditioned to think that way by the languages and tools most of us learn first and use most heavily. Smalltalk taught us long ago that this viewpoint is not necessary. (Lisp, too, though in a different way.)

These are just some disconnected thoughts on a Sunday afternoon. There is plenty more to say, and plenty of prior art. I think I'll fire up a Squeak image sometime soon and spend some time reacquainting myself with its take on parsing and code generation, in particular the way it compiles its core out to C.

I like doing this kind of "what if?" thinking. It's fun to explore along the boundary between the land where OOP works naturally and the murky territory where it doesn't seem to fit so well. That's a great place to learn new stuff.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 09, 2012 3:33 PM

This and That from Douglas Hofstadter's Visit

Update: In the original, I conflated two quotes in
"Food and Hygiene". I have un-conflated them.

In addition to his lecture on Gödel's incompleteness theorem, Douglas Hofstadter spent a second day on campus, leading a seminar and giving another public talk. I'll blog on those soon. In the meantime, here are a few random stories I heard and impressions I formed over the two days.

The Value of Good Names.   Hofstadter told a story about his "favorite chapter on Galois theory" (don't we all have one?), from a classic book that all the mathematicians in the room recognized. The only thing Hofstadter didn't like about this chapter was that it referred to theorems by number, and he could never remember which theorem was which. That made an otherwise good text harder to follow than it needed to be.

In contrast, he said, was a book by Gallian that gave each theorem a name, a short phrase evocative of what the theorem meant. So much better for the reader! So one semester he gave his students an exercise to make his favorite chapter better: they were to give each of the numbered theorems in the chapter an evocative name.

This story made me think of my favorite AI textbook, Patrick Henry Winston's Artificial Intelligence. Winston's book stands out from the other AI books as quirky. He uses his own vocabulary and teaches topics very much in the MIT AI fashion. But he also gives evocative names to many of the big ideas he wants us to learn, among them the representation principle, the principle of least commitment, the diversity principle, and the eponymous "Winston's principle of parallel evolution". My favorite of all is the convergent intelligence principle:

The world manifests constraints and regularities. If a computer is to exhibit intelligence, it must exploit those constraints and regularities, no matter of what the computer happens to be made.

To me, that is AI.

Food and Hygiene.   The propensity of mathematicians to make their work harder for other people to understand, even other mathematicians, reminded Doug Shaw of two passages, from famed mathematicians Gian-Carlo Rota and André Weil. Rota said that we must "guard ... against confusing the presentation of mathematics with the content of mathematics". More colorfully, Weil cautioned, "[If] logic is the hygiene of the mathematician, it is not his source of food." Theorems, proofs, and Greek symbols are mathematical hygiene. Pictures, problems, and understanding are food.

A Good Gig, If You Can Get It.   Hofstadter holds a university-level appointment at Indiana, and his research on human thought and the fluidity of concepts is wide enough to include everything under the sun. Last semester, he taught a course on The Catcher in the Rye. He and his students read the book aloud and discussed what makes it great. Very cool.

If You Need a Research Project...   At some time in the past, Hofstadter read, in a book or article about translating natural language into formal logic, that 'but' is simply a trivial alternative to 'and' and so can be represented as such. "Nonsense!", he said. 'But' embodies all the complexity of human thought. "If we could write a program that could use 'but' correctly, we would have accomplished something impressive."

Dissatisfied.   Hofstadter uses that word a lot in conversation, or words like it, such as 'unsatisfying'. He does not express the sentiment in a whiny way. He says it in a curious way. His tone always indicates a desire to understand something better, to go deeper to the core of the question. That's a sign of a good researcher and a deep thinker.

~~~~

Let's just say that this was a great treat. Thanks to Dr. Hofstadter for sharing so much time with us here.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

February 19, 2012 12:17 PM

The Polymorphism Challenge

Back at SIGCSE 2005, Joe Bergin and I ran a workshop called The Polymorphism Challenge that I mentioned at the time but never elaborated on. It's been on my mind again for the last week. First I saw a link to an OOP challenge aimed at helping programmers move toward OO's ideal of small classes and short methods. Then Kent Beck tweeted about the Anti-IF Campaign, which, as its name suggests, wants to help programmers "avoid dangerous ifs and use objects to build a code that is flexible, changeable, and easily testable".

That was the goal of The Polymorphism Challenge. I decided it was time to write up the challenge and make our workshop materials available to everyone.

Background

Beginning in the mid-1990s, Joe and I have been part of a cabal of CS educators trying to teach object-oriented programming style better. We saw dynamic polymorphism as one of the key advantages to be found in OOP. Getting procedural programmers to embrace it, including many university instructors, was a big challenge.

At ChiliPLoP 2003, our group was riffing on the idea of extreme refactoring, during which Joe and I created a couple of contrived examples eliminating if statements from a specific part of Karel the Robot that seemed to require them.

This led Joe to propose a programming exercise he called an étude, similar to what these days are called katas, which I summarized in Practice for Practice's Sake:

Write a particular program with a budget of n if-statements or fewer, for some small value of n. Forcing oneself to not use an if statement wherever it feels comfortable forces the programmer to confront how choices can be made at run-time, and how polymorphism in the program can do the job. The goal isn't necessarily to create an application to keep and use. Indeed, if n is small enough and the task challenging enough, the resulting program may well be stilted beyond all maintainability. But in writing it the programmer may learn something about polymorphism and when it should be used.

Motivated by the Three Bears pattern, Joe and I went a step further. Perhaps the best way to know that you don't need if-statements everywhere is not to use them anywhere. Turn the dials to 11 and make 'em all go away! Thus was born the challenge, as a workshop for CS educators at SIGCSE 2005. We think it is useful for all programmers. Below are the materials we used to run the workshop, with only light editing.

Task

Working in pairs, you will write (or re-write) simple but complete programs that normally use if statements, removing all selection structures in favor of polymorphism.

Objective

The purpose of this exercise is not to demonstrate that if statements are bad, but that they aren't necessary. Once you can program effectively this way, you have a better perspective from which to choose the right tool. It is directed at skilled procedural programmers, not at novices.

Rules

You should attempt to build the solutions to one of the challenge problems without using if statements or the equivalent.

You may use the libraries arbitrarily, even when you are pretty sure that they are implemented with if statements.

You may use exceptions only when really needed and not as a substitute for if statements. Similarly, while loops are not to be used to simulate if statements. Your problems should be solved with polymorphism: dynamic and parameterized.

Note that if you use (for example) a hash map and the program cannot control what is used as a key in a get (user input, for example), then it might return null. You are allowed to use an exception to handle this situation, or even an if. If you can't get all if statements out of your code, then see how few you really need to live with.

Challenges

Participants worked in pairs. They had a choice of programming scenarios, some of which were adapted from work by others.

This pdf file contains the full descriptions given to participants, including some scenarios we did not try during the workshop. If you come up with a good scenario for this challenge, or variations on ours, please let me know.

Hints

When participants hit a block and asked for pointers, we offered hints of various kinds, such as:

•  When you have two behaviors, put them into different objects. The objects can be created from the same class or from related classes. If they are from the same class, you may want to use parameterized polymorphism. When the classes are different, you can use dynamic polymorphism. This is the easy step. Java interfaces are your friend here.

•  When you have the behaviors in different objects, find a way to bring the right object into play at the right time. That way, you don't need to use ad hoc methods to distinguish among them. This is the hard part. Sometimes you can develop a state change diagram that makes it easier. Then you can replace one object with another at the well-defined state change points.

•  Note that you can eliminate a lot of if statements by capturing early decisions -- perhaps made using if statements -- in different objects. These objects can then act as "flags with behavior" when they are passed around the program. The flag object then encapsulates the earlier decision. Now try to capture those decisions without if statements.

(Note that this technique alone is a big win in improving the maintainability of code. You replace many if statements spread throughout a program with one, giving you a single point of change in future.)

•  Delegation from one object to another is a real help in this exercise. This leads you to the Strategy design pattern. An object M can carry with it another, S, that encapsulates the strategy M has for solving a problem. To perform the associated behavior, M delegates to S. By changing the strategy object, you can change the behavior of the object that carries it. M seems to behave polymorphically, but it is really S that does the work. (See the sketch after these hints.)

•  You can modify or enhance strategies using the Decorator design pattern. A decorator D implements the same interface as the thing it decorates, M. When sent a message, the decorator performs some action and also sends the same message to the object it decorates. Thus the behavior of M is executed, but also more. Note that D can provide additional functionality both before and after sending the message to M. A functional method can return quite a different result when sent through a decorated object.

•  You can often choose strategies or other objects that encapsulate decisions using the Factory design pattern. A hash map can be a simple factory.

•  You can sometimes use max and min to map a range of values onto a smaller set and then use an index into a collection to choose an object. max and min are library functions so we don't care here how they might be implemented.
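
To make the delegation hint concrete, here is a minimal sketch in Ruby -- the names are invented, and the workshop itself used Java -- of an object whose behavior changes when its strategy object is replaced, with no if statement in sight:

     class Printer                      # M: carries a strategy object, S
       attr_writer :strategy            # swap S to change M's behavior

       def initialize(strategy)
         @strategy = strategy
       end

       def print(text)
         puts @strategy.format(text)    # M delegates; S does the work
       end
     end

     class ShoutingStyle
       def format(text)
         text.upcase
       end
     end

     class QuietStyle
       def format(text)
         text.downcase
       end
     end

     printer = Printer.new(ShoutingStyle.new)
     printer.print("Hello, world")      # HELLO, WORLD
     printer.strategy = QuietStyle.new
     printer.print("Hello, world")      # hello, world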

At the end of the workshop, we gave one piece of general advice: Doing this exercise once is not enough. Like an étude, it can be practiced often, in order to develop and internalize the necessary skills and thought patterns.

Conclusion

I'll not give any answers or examples here, so that readers can take the challenge themselves and try their hand at writing code. In future posts, I'll write up examples of the techniques described in the hints, and perhaps a few others.

Joe wrote up his étude in Variations on a Polymorphic Theme. In it, he gives some advice and a short example.

Serge Demeyer, Stéphane Ducasse, and Oscar Nierstrasz wrote a wonderful paper that places if statements in the larger context of an OO system, Transform Conditionals: a Reengineering Pattern Language.

If you like the Polymorphism Challenge, you might want to try some other challenges that ask you to do without features of your preferred language or programming style that you consider essential. Check out these Programming Challenges.

Remember, constraints help us grow. Constraints are where creative solutions happen.

I'll close with the same advice we gave at the end of the workshop: Doing this exercise once is not enough, especially for OO novices. Practice it often, like an étude or kata. Repetition can help you develop the thought patterns of an OO programmer, internalize them, and build the skills you need to write good OO programs.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 15, 2012 4:41 PM

Architecture Without Architects

David Byrne's essay on collective-creation introduced me to a dandy little picture book by Bernard Rudofsky called Architecture Without Architects. I love a slim book, and I love architecture, so I didn't need much of a push to pick it up at the library.

For an agile software developer, the book's title evokes something visceral. I think software architecture often happens best when it happens organically, emerging as the programmer grows the program piecemeal, feature by feature. This is a pragmatic view, as expressed succinctly by Brian Marick in his recent The Aim of Architecture:

Architecture isn't true, it's useful.

In software, as Marick says, knowing a program's architecture helps us to navigate our way around the program and add new code. An agile developer is willing to let architecture describe the existing program, rather than prescribe its shape. A descriptive, emergent architecture will be more helpful in what we need it for than a prescribed, often inaccurate architecture created ahead of time.

That's the mindset I brought to Architecture Without Architects. I found, though, that it is about more than piecemeal growth and emergence; it talks about buildings and spaces created by regular people. Some people call this "vernacular" architecture, but Rudofsky uses a number of terms aimed at elevating the idea beyond the vulgar, among them "indigenous", "spontaneous", and "non-pedigreed". I think Rudofsky likes "non-pedigreed" best because it most accurately expresses the distinction between the creations he studies and "real" architecture: the only difference is the credential held by the builder.

He lays responsibility for this harmful distinction at the feet of historians, who emphasize "the parts played by architects and their patrons" at the expense of "the communal enterprise" of the built environment. But all of us share in the blame:

Part of our trouble results from the tendency to ascribe to architects -- or, for that matter, to all specialists -- exceptional insights into problems of living when, in truth, most of them are concerned with problems of business and prestige.

One of the goals of this book is to encourage the study of non-pedigreed architecture, to describe a typology and to document important examples. "There is much to learn," says Rudofsky, "from architecture before it became an expert's art".

So, it turns out that Architecture Without Architects is not about the same sense of "without architect" that we in the software world usually mean. Agile developers are, for the most part, professionals, not hobbyists or regular Joes cobbling together programs on the side. Part of that is cultural. People who would never think of writing a program for themselves think nothing about diddling around their houses. Part of it is technological. It's pretty easy to go to the nearest home improvement center and buy modular components that non-professionals can use to change the shape and decoration of their houses. Programming, not so much.

There are, of course, a few hobbyists tinkering around with programs in their spare time. More important, there are plenty of people with few or no credentials in computing or software engineering making a living by writing programs. Sometimes, they have switched careers out of necessity or choice. Other times, they have slowly drifted into software development over the course of a career.

In yet other cases, they retain their professional identity in another discipline and write code to help them do their jobs. Greg Wilson's Software Carpentry project is aimed at one such group of people: professional scientists who find themselves writing and maintaining software as an essential part of doing their science. Rudofsky may be right when he chides us for attributing exceptional insight to professional architects, and if so we are certainly right not to attribute exceptional insight to pedigreed software developers. But Wilson is building a brand by reminding us that, while it may not take exceptional insight to write programs, doing it well does require some skill and knowledge.

I think that Rudofsky's interest in vernacular architecture has other parallels in the software world. For example, technologies such as SourceForge and now GitHub have enabled developers to showcase, celebrate, and benefit from the communal enterprise of writing programs. Programmers may be sitting home working on their own, but they aren't really alone. They are sharing what they write, sharing in what others write, and otherwise participating in vibrant communities of developers.

Then there is the idea of credentials. While many programmers do have degrees in computer science or engineering, most professionals don't have advanced academic degrees or an academic bent. They write code in a world shaped by forces beyond those usually talked about in algorithms and data structures textbooks. The software patterns movement that grew up in the 1990s aimed to document valuable lessons learned programming "in the wild". Like Rudofsky's typology of indigenous architecture, catalogs of design patterns collected vernacular wisdom. As Rudofsky said about the creations of the anonymous builders of most of the world we actually live in,

The beauty of this architecture has long been dismissed as accidental, but today we should be able to recognize it as the result of rare good sense in the handling of practical problems.

Say whatever else you want about the Gang of Four book, it captured a lot of OO wisdom learned in the trenches, often from working with unforgiving building materials like C++.

I enjoyed Architecture Without Architects greatly. After an eight-page preface in which Rudofsky lays down most of the ideas I've summarized here, the book consists of 150 or so pages of pictures accompanied by explanatory paragraphs. There was a lot of interesting detail and even a little wisdom in those paragraphs. If you like architecture, whether housing or programming, you might enjoy spending a couple of hours with this book.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

February 13, 2012 4:13 PM

The Patterns, They Are A-Changin'

Seth Godin's recent blog entry at the Domino Project talks about how changes in the book are changing the publishing industry. He doesn't use the word 'pattern', in the sense of an Alexandrian pattern, but that's how I read his discussion. The forces at play in our world are changing, which leads to changes in the forms that find equilibrium. In particular, Godin mentions:

  • Patterns of length. There are tradeoffs involving the cost of binding and the minimum viable selling price for publishers versus the technology of binding and a maximum viable purchase price for consumers. These have favored certain sizes in print.
  • Patterns of self-sufficiency. "Electronic forms link." Print forms must stand on their own.
  • Patterns of consumption. These are driven even more by economic forces than the other two types are, not by technical forces. Consuming e-books is, he says, "more like browsing than buying."

Godin looks mostly at the forward implications of changes in the patterns of self-sufficiency, but I've been thinking about the backward implications of print publications having to stand on their own. As noted in a recent entry, I have begun to adapt a couple of my blog entries into articles for print media, such as newspaper and magazine opinion pieces. My blog entries link generously and regularly to my earlier writings, because much of what I write is part of an ongoing process of thinking out loud. I also link wherever I can to other people's works, whether blogs, journal articles, code, or other forms. That works reasonably well in a blog, because readers can see and follow the context in which the current piece is written. It also means that I don't have to re-explain every idea that a given entry deals with; if it's been handled well in a previous entry, I link to it.

As I try to adapt individual blog entries, I find that they are missing so much context when we strip the links out. In some places, I can replace the link with a few sentences of summary. But how much should I explain? It's easy to find myself turning a one- or two-page blog entry into four pages, or ten. The result is that the process of "converting an entry into an article" may become more like writing a new piece than like modifying an existing piece. That's okay, of course, but it's a different task and requires a different mindset.

For someone steeped in Alexander's patterns and the software patterns community, this sentence by Godin signals a shift in the writing and publishing patterns we are all used to:

As soon as paper goes away, so do the chokepoints that created scarcity.

Now, the force of abundance begins to dominate scarcity, and the forces of bits and links begin to dominate paper and bindings and the bookshelves of the neighborhood store.

It turns out that the world has for the last several hundred years been operating within a small portion of the pattern language of writing and publishing books. As technology and people change, the equilibrium points in the publishing space have changed... and so we need to adopt a new set of patterns elsewhere in the pattern language. At first blush, this seems like unexplored territory, but in fact it is more than that: this part of the pattern language is, for the most part, unwritten. We have to discover these patterns for ourselves.

Thus, we have the Domino Project and others like it.

The same shifting of patterns is happening in my line of work, too. A lot of folks are beginning to explore the undiscovered country of university education in the age of the internet. This is an even greater challenge, I think, because people and how they learn are even more dominant factors in the education pattern language than in the publishing game.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

February 01, 2012 5:00 PM

"You Cannot Trust Your Creativity Yet"

You've got to learn your instrument.
Then, you practice, practice, practice.
And then, when you finally get up there on the bandstand,
forget all that and just wail. -- Charlie Parker

I signed up for an opportunity to read early releases of a book in progress, Bootstrapping Design. Chapter 4 contains a short passage that applies to beginning programmers, too:

Getting good at design means cultivating your taste. Right now, you don't have it. Eventually you will, but until then you cannot trust your creativity. Instead, focus on simplicity, clarity, and the cold, hard science of what works.

[image: M.C. Escher, 'Hands']

This is hard advice for people to follow. We like to use our brains, to create, to try out what we know. I see this desire in many beginning programming students. The danger grows as our skills grow. One of my greatest frustrations comes in my Programming Languages course. Many students in the course are reluctant to use straightforward design patterns such as mutual recursion.
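
For readers who have not seen the pattern, mutual recursion is a pair of procedures defined in terms of one another, mirroring a two-part data definition. A minimal sketch in Ruby -- the idea transfers directly to the Scheme-style code we write in the course:

     # a tree is either a leaf value or an array of subtrees
     def count_leaves(tree)
       tree.is_a?(Array) ? count_in_forest(tree) : 1
     end

     def count_in_forest(forest)
       forest.sum {|tree| count_leaves(tree) }
     end

     count_leaves([1, [2, 3], 4])    # => 4

Each procedure handles one part of the data definition and trusts the other to handle the rest. That trust is precisely what reluctant students find hard to extend.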

At one level, I understand their mindset. They have started to become accomplished programmers in other languages, and they want to think, design, and program for themselves. Oftentimes, their ill-formed approaches work okay in the small, even if the code makes the prof cringe. As our programs grow, though, the disorder snowballs. Pretty soon, the code is out of the student's control. The prof's, too.

A good example of this phenomenon, in both its positive and negative forms, happened toward the end of last semester's course. A series of homework assignments had the students growing an interpreter for a small, Scheme-like language. It eventually became the largest functional program they had ever written. In the end, there was a big difference between code written by students who relied on "the cold, hard science" we covered in class and code written by students who had wandered off into the wilderness of their own creativity. Filled with common patterns, the former was relatively easy to read, search, and grade. The latter... not so much. Even some very strong students began to struggle with their own code. They had relied too much on their own approaches for decomposing the problem and organizing their programs, but those ideas weren't scaling well.

I think what happens is that, over time, small errors, missteps, and complexities accumulate. It's almost like the effect of rounding error when working with floating point numbers. I vividly remember experiencing that in my undergrad Numerical Analysis courses. Sadly, few of our CS students these days take Numerical Analysis, so their understanding of the danger is mostly theoretical.

Perhaps the most interesting embodiment of trusting one's own creativity too much occurred on the final assignment of the term. After several weeks and several assignments, we had a decent-sized program. Before assigning the last set of requirements, I gave everyone in the class a working solution that I had written, for reference. One student was having so much trouble getting his own program to work correctly, even with reference to my code, that he decided to use my code as the basis for his assignment.

Imagine my surprise when I saw his submission. He used my code, but he did not follow the example. The code he added to handle the new requirements didn't look anything like mine, or like what we had studied in class. It repeated many of the choices that had gotten him into hot water over the course of the earlier assignments. I could not help but chuckle. At least he is persistent.

It can be hard to trust new ideas, especially when we don't understand them fully yet. I know that. I do the same thing sometimes. We feel constrained by someone else's programming patterns and want to find our own way. But those patterns aren't just constraints; they are also a source of freedom. I try to let my students grow in freedom as they progress through the curriculum, but sometimes we encounter something new like functional programming and have to step back into the role of uncultivated programmer and grow again.

There is great value in learning the rules first and letting our tastes and design skill evolve slowly. Seniors taking project courses are ready, so we turn them loose to apply their own taste and creativity on Big Problems. Freshmen usually are not yet able to trust their own creativity. They need to take it slow.

To "think outside the box", you you have to start with a box. That is true of taste and creativity as much as it is of knowledge and skill.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

December 30, 2011 11:05 AM

Pretending

Kurt Vonnegut never hesitated to tell his readers the morals of his stories. The frontispiece of his novel Mother Night states its moral upfront:

We are what we pretend to be, so we must be careful about what we pretend to be.

Pretending is a core thread that runs through all of Vonnegut's work. I recognized this as a teenager, and perhaps it is what drew me to his books and stories. As a junior in high school, I wrote my major research paper in English class on the role fantasy played in the lives of Vonnegut's characters. (My teachers usually resisted my efforts to write about authors such as Kafka, Vonnegut, and Asimov, because they weren't part of "the canon". They always relented, eventually, and I got to spend more time thinking about works I loved.)

I first used this sentence about pretending in my eulogy for Vonnegut, which includes a couple of other passages on similar themes. Several of those are from Bokononism, the religion created in his novel Cat's Cradle as a way to help the natives of the island of San Lorenzo endure their otherwise unbearable lives. Bokononism had such an effect on me that I spent part of one summer many years ago transcribing The Books of Bokonon onto the web. (In these more modern times, I share Bokonon's wisdom via Twitter.)

Pretending is not just a way to overcome pain and suffering. Even for Vonnegut, play and pretense are the ways we construct the sane, moral, kind world in which we want to live. Pretending is, at its root, a necessary component in how we train our minds and bodies to think and act as we want them to. Over the years, I've written many times on this blog about the formation of habits of mind and body, whether as a computer scientist, a student, or a distance runner.

Many people quote Aristotle as the paradigm of this truth:

We are what we repeatedly do. Excellence, then, is not an act, but a habit.

I like this passage but prefer another of his, which I once quoted in a short piece, What Remains Is What Will Matter:

Excellence is an art won by training and habituation. We do not act rightly because we have virtue or excellence, but rather we have those because we have acted rightly.

This idea came charging into my mind this morning as I read an interview with Seth Godin. He and his interviewers are discussing the steady stream of rejection that most entrepreneurs face, and how some people seem able to fight through it to succeed. What if a person's natural "thermostat" predisposes them to fold in the face of rejection? Godin says:

I think we can reset our inclinations. I'm certain that pretending we can is better than admitting we can't.

Vonnegut and Aristotle would be proud. We are what we pretend to be. If we wish to be virtuous, then we must act rightly. If we wish to be the sort of person who responds to rejection by working harder and succeeding, then we must work harder. We become the person we pretend to be.

As children, we think pretending is about being someone we aren't. And there is great fun in that. As teenagers, sometimes we feel a need to pretend, because we have so little control over our world and even over our changing selves. As adults, we tend to want to put pretending away as child's play. But this obscures a truth that Vonnegut and Aristotle are trying to teach us:

Pretending is just as much about being who we are as about being who we aren't.

As you and I consider the coming of a new year and what we might resolve to do better or differently in the coming twelve months that will make a real difference in our lives, I suggest we take a page out of Vonnegut's playbook.

Think about the kind of world you want to live in, then live as if it exists.

Think about the kind of person you want to be, then live as if you are that person.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

November 16, 2011 2:38 PM

Memories of a Programming Assignment

Last Friday afternoon, the CS faculty met with the department's advisory board. The most recent addition to the board is an alumnus who graduated a decade ago or so. At one point in the meeting, he said that a particular programming project from his undergraduate days stands out in his mind after all these years. It was a series of assignments from my Object-Oriented Programming course called Cat and Mouse.

I can't take credit for the basic assignment. I got the idea from Mike Clancy in the mid 1990s. (He later presented his version of the assignment on the original Nifty Assignments panel at SIGCSE 1999.) The problem includes so many cool features, from coordinate systems to stopping conditions. Whatever one's programming style or language, it is a fun challenge. When done OO in Java, with great support for graphics, it is even more fun.

But those properties aren't what he remembers best about the project. He recalls that the project took place over several weeks and that each week, I changed the requirements of the assignment. Sometimes, I added a new feature. Other times, I generalized an existing feature.

What stands out in his mind after all these years is getting the new assignment each week, going home, reading it, and thinking,

@#$%#^. I have to rewrite my entire program.

You see, he had hard-coded assumptions throughout his code. Concerns were commingled, not separated. Objects were buried inside larger pieces of code. Model was tangled up with view.

So, he started from scratch. Over the course of several weeks, he built an object-oriented system. He came to understand dynamic polymorphism and patterns such as MVC and decorator, and found ways to use them effectively in his code.

He remembers the dread, but also that this experience helped him learn how to write software.

I never know exactly what part of what I'm doing in class will stick most with students. From semester to semester and student to student, it probably varies quite a bit. But the experience of growing a piece of software over time in the face of growing and changing requirements is almost always a powerful teacher.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 14, 2011 3:43 PM

Using Null Object to Get Around Truthiness

Avdi Grimm recently posted a short but thorough introduction to a common problem encountered by Ruby programmers: nullifying an action when the required object is not present. In a mixed-paradigm language, this becomes an issue when we use if-statements to guard the action. Ruby, like many languages, treats a few distinguished values -- in Ruby's case, nil and false -- as false and all other values as true. Thus, unless an object is "falsey", it will be seen as true by the if-statement. This requires us to do extra tests or wrap our objects to ensure that tests pass and fail at the appropriate times. Grimm explains the problem and walks us through several approaches to solving it in Ruby. I recommend you read it.

Some of my functional programming friends teased me about this article, along the lines of a pithy tweet:

Rubyists have discovered Maybe/Option.

Maybe is a wrapper type in Haskell that allows programmers to indicate that an expected value may not be available; Option is the Scala analog. Wrapper types are the natural way to handle the problem of falsey-ness in functional programming, especially in statically-typed languages. (Clojure, a popular dynamically-typed functional language in the Lisp tradition, doesn't make use of this idea.)

When I write Haskell code, as rare as that is, I use Maybe as a natural part of typing my programs. As a functional programmer more generally, I see its great value in writing compact code. The alternative in Scheme is to use if expressions to switch on value, separating null (usually) base cases from inductive cases.

However, when I work as an OO programmer, I don't miss Maybe or even falsey objects more generally. Indeed, I think the best advice in Grimm's article comes in his conclusion:

If we're trying to coerce a homemade object into acting falsey, we may be chasing a vain ideal. With a little thought, it is almost always possible to transform code from typecasing conditionals to duck-typed polymorphic method calls. All we have to do is remember to represent the special cases as objects in their own right, whether that be a Null Object or something else.

My pithy tweet-like conclusion is this: If you are checking for truthiness, you're doing OO wrong.

To do it right, use the Null Object pattern.

Ruby is an OO language, but it gives us great freedom to write code in other styles as well. Many programmers in the Ruby community, mostly those without prior OO experience, miss out on the beauty and compactness one can achieve using standard OO patterns.

I see a lot of articles on the web about the Null Object pattern, but most of them involve extending the class NilClass or its lone instance, nil. That is fine if you are trying to add generic null object behavior to a system, often in service of truthiness and falsey-ness. A better approach in most contexts is to implement an object that behaves like a "nothing" in your application. If you are writing a program that works with users, create an unassigned user. If you are creating a logging facility and need the ability for a service not to use a log, create a null log. If you are creating an MVC application and need a read-only controller, create a null controller.
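
As a minimal sketch of the idea, consider a hypothetical logging example in Ruby. With a null log standing in for "no log", the service never has to test before logging:

     class NullLog
       def info(message)
         # deliberately do nothing
       end
     end

     class Service
       def initialize(log = NullLog.new)    # "no log" is an object, not nil
         @log = log
       end

       def run
         @log.info("service started")       # no if, no nil check
       end
     end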

In some applications, the idea of the null object disappears into another primitive object. When we implement a binary tree, we create an object that has references to two other tree objects. If all tree nodes behave similarly except that some do not have children, then we can create a NullTree object to serve as the values of the instance variables in the actual leaves. If leaf nodes behave differently than interior nodes, then we can create a Leaf object to serve as the values of the interior nodes' instance variables. Leaf subsumes any need for a null object.
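
Continuing the tree example, here is a sketch of the leaf-as-null-object version, with invented names. Counting nodes requires no test for missing children:

     class Interior
       def initialize(left, right)
         @left, @right = left, right
       end

       def count
         1 + @left.count + @right.count
       end
     end

     class Leaf
       def count
         1
       end
     end

     Interior.new(Leaf.new, Interior.new(Leaf.new, Leaf.new)).count    # => 5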

One of the consequences of using the Null Object pattern is the elimination of if statements that switch on the type of object present. Such if statements are a form of ad hoc polymorphism. Programmers using if statements while trying to write OO code should not be surprised that their lives are more difficult than they need be. The problem isn't with OOP; it's with not taking advantage of dynamic polymorphism, one of the essential benefits of OOP.

If you would like to learn more about the Null Object pattern, I suggest you read its two canonical write-ups.

If you are struggling with making the jump to OOP, away from typecasing switch statements and explicit loops over data structures, take up the programming challenge of writing increasingly large programs with no if statements and no for statements. Sometimes, you have to go to extremes before you feel comfortable in the middle.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

October 17, 2011 4:46 PM

Computational Thinking Everywhere: Experiments in Education

I recently ran across Why Education Startups Do Not Succeed, based on the author's experience working as an entrepreneur in the education sector. He admits upfront that he isn't offering objective data to support his conclusions, so we should take them with a grain of salt. Still, I found his ideas interesting. Here is the take-home point, in two sentences:

Most entrepreneurs in education build the wrong type of business, because entrepreneurs think of education as a quality problem. The average person thinks of it as a cost problem.

That difference creates a disconnect between the expectations of sellers and buyers, which ends up hurting, even killing, most education start-ups.

The old AI guy in me latched on to this paragraph:

Interestingly, in the US, the people who are most willing to try new things are the poor and uneducated because they have a similar incentive structure to a person in rural India. Their default state is "screwed." If a poor person doesn't do something dramatic, they are going to stay screwed. Many parents and teachers in these communities understand this. So the communities are often willing to try new, experimental things -- online education, charter schools, longer school days, no summer vacation, co-op programs -- even if they may not work. Why? Because their students' default state is "screwed", and they need something dramatically better. Doing something significantly higher quality is the only way to overcome the inertia of already being screwed. The affordable, but poor quality approaches just aren't good enough. These communities are on the hunt for dramatically better approaches and willing to try new things.

[figure: local and global maxima in hill climbing]

I've seen other discussions of the economic behavior of people in the lowest socioeconomic categories that fit this model. Among them were the consumption of lottery tickets in lieu of saving, and more generally the trade-off between savings and consumption. If a small improvement won't help people much, then it seems they are more willing to gamble on big improvements or to simply enjoy the short-term rewards of spending.

This mindset immediately brought to mind the AI search technique known as hill climbing. When you know you are on a local maximum that is significantly lower than the global maximum, you are willing to take big steps in search of a better hill to climb, even if that weakens your position in the short-term. Baby steps won't get you there.
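
Here is a minimal sketch of the technique on an invented one-dimensional landscape. Baby steps climb until no neighbor is higher, which finds only the nearest peak:

     def hill_climb(f, x, step = 1.0)
       loop do
         better = [x - step, x + step].max_by {|n| f.call(n) }
         return x if f.call(better) <= f.call(x)    # stuck on a local maximum
         x = better
       end
     end

     f = ->(x) { -(x - 3)**2 + 10 }    # a single hill with its peak at x = 3
     hill_climb(f, 0.0)                # => 3.0

On a landscape with many hills, this procedure stops on whichever hill it starts climbing; finding a better one takes a big jump, such as restarting from a random x.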

This is a small example of unexpected computational thinking in the real world. Psychologically, it seems, we are often hill climbers.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

September 08, 2011 8:19 PM

The Summer Smalltalk Taught Me OOP

Or, Throw 'em Away Until You Get It Right

Just before classes started, one of my undergrad research students stopped by to talk about a choice he was weighing. He is writing a tool that takes as input a Photoshop document and produces as output a faithful, understandable HTML rendition of the design and content. He has written a lot of code in Python over the last few months, using an existing library of tools for working with Photoshop docs. Now that he understands the problem well and has figured out what a good solution looks like, he is dissatisfied with his existing code. He thinks he can create a better, simpler solution by writing his own Photoshop-processing tools.

The choice is: refactor the code incrementally until he has replaced all the library code, or start from scratch?

My student's dilemma took me back over twenty years, to a time when I faced the same choice, when I too was first learning my craft.

I was one of my doctoral advisor's first graduate students. In AI, this means that we had a lot of infrastructure to build in support of the research we all wanted to do. We worked in knowledge-based systems, and in addition to doing research in the lab we also wanted to deliver working systems to our collaborators on and off campus. Building tools for regular people meant having reasonable interfaces, preferably with GUI capabilities. This created an extra problem, because our collaborators used PCs, Macs, and Unix workstations. My advisor was a Mac guy. At the time, I was a Unix man in the process of coming to love Macs. In that era, there was no Java and precious few tools for writing cross-platform GUIs.

Our lab spent a couple of years chasing the ideal solution: a way to write a program and run it on any platform, giving users the same experience on all three. The lab had settled more or less on PCL, a portable Common Lisp. It wasn't ideal, but we grad students -- who were spending a lot of time implementing libraries and frameworks -- were ready to stop building infrastructure and start working full-time on our own code.

the Smalltalk balloon from Byte magazine

Then my advisor discovered Smalltalk.

The language included graphics classes in its base image, which offered the promise of write-once-run-anywhere apps for clients. And it was object-oriented, which matched the conceptual model of the software we wanted to build to a T. I had just spent several months trying to master CLOS, Common Lisp's powerful object system, but Smalltalk looked like just what we wanted. So we made the move -- and told our advisor that this would be the last move. Smalltalk would be our home.

I learned the basics of the language by working through every tutorial I could get my hands on, first for Digitalk Smalltalk and then for ObjectWorks. They were pretty good. Then I wrote some toy programs of my own, to show I was ready to create on my own.

So I started writing my first Smalltalk system: a parser and interpreter for a domain-specific AI language with a built-in inference engine, a simple graphical and table-driven tool for writing programs in that language, and a graphical front end for running systems.

There was a lot going on there, at least for a green graduate student only recently called up to the big leagues. But I had done my homework. I was ready. I was sure of it. I was cocky.

I crashed.

It turned out that I didn't really understand the role that data should play in an OO program. My program soon became a tangle of data dependencies, designed before I understood my solution all that well, and the tangle made the code increasingly turgid.

So I threw it away, rm -r'ed it into nothingness. I started from scratch, sure that Version 2 would be perfect.

I crashed again.

The program was better this time around, but it turned out that I didn't really understand how objects should interact in a large OO program. My program soon became a tangle of objects and wrappers and adapters, whose behavior I could not follow in even the simplest scenarios. The tangle made the code increasingly murky.

So I threw it away -- rm -r again -- and started from scratch. Surely Version 3, based on several weeks of coding and learning, would be just what we wanted.

I crashed yet again. This time, the landing was more gentle, because I really was making progress. But as I coded my third system, I began to see ways to structure the program that would make the code easier to grow as I added features, and easier to change as I got better at design. I was just beginning to glimpse the Land of Design Patterns. But I always seemed to learn each lesson one day too late to use it.

My program was moving forward, creakily, but I just knew it could be better. I did not like the idea of maintaining this code for several years, as we modified apps fielded with our collaborators and as we used it as part of the foundation for the lab's vision.

So I threw it away and wrote my system from scratch one last time. The result was not a perfect program, but one that I could live with and be proud of. It only took me four major iterations and several months of programming.

Looking back, I faced the same decision my student faced recently with his system. Refactor or start over? He has the advantage of having written a better first program than I had, yet he made the sound decision to rewrite.

Sometimes, refactoring really is the better approach. You can keep the system running while slowly corralling data dependencies, spurious object interactions, and suboptimal design. Had I been a more experienced programmer, I might well have chosen to refactor from Version 3 to Version 4 of my program. But I wasn't. Besides, I had neither a suite of unit tests nor access to automated refactoring tools. Refactoring without either of these makes the process scarier and more dangerous than it needs to be.

Maybe refactoring is the better approach most or all of the time. I've read all about how the Great Rewrite is one of those Things You Should Never Do.

Fred Brooks

But then, there is an axiom from Turing Award winner Fred Brooks that applies better to my circumstance of writing the initial implementation of a program: "... plan to throw one away; you will, anyhow". I find Brooks's advice most useful when I am learning a lot while I program. For me, that is one context, at least, in which starting from scratch is a big win: when my understanding is changing rapidly, whether of domain, problem, or tools. In those cases, I am usually incapable of refactoring fast enough to keep up with my learning. Starting over offers a faster path to a better program than refactoring.

On that first big Smalltalk project of mine, I was learning so much, so fast. Smalltalk was teaching me object-oriented programming, through my trial and error and through my experience with the rest of the Smalltalk class hierarchy. I had never written a language interpreter or other system of such scale before, and I was learning lessons about modularity and language processing. I was eager to build a new and improved system as quickly as I could.

In such cases, there is nothing like the sound of OS X's shredder. Freedom. No artificial constraints from what has suddenly become legacy code. No limits from my past ignorance. A fresh start. New energy!

This is something we programmers can learn from the experience of other writers, if we are willing. In Unless It Moves the Human Heart: The Craft and Art of Writing, Roger Rosenblatt tells us that Edgar Doctorow ...

... had written 150 pages of The Book of Daniel before he'd realized he had chosen the wrong way to tell the story. ... So one morning Edgar tossed out the 150 pages and started all over.... I wanted the class to understand that Edgar was happy to start from scratch, because he had picked the wrong door the first time.

Sometimes, the best thing a programmer can admit is that he or she has picked the wrong door and walked down the wrong path.

But Brooks warns of a danger programmers face on second efforts: the Second System Effect. As Scott Rosenberg writes in Code Reads #1: The Mythical Man-Month:

Brooks noted that an engineer or team will often make all the compromises necessary to ship their first product, then founder on the second. Throughout project number one, they stored up pet features and extras that they couldn't squeeze into the original product, and told themselves, "We'll do it right next time." This "second system" is "the most dangerous a man ever designs."

I never shipped the first version of my program, so perhaps I eluded this pitfall out of luck. Still, I was cocky when I wrote Version 1, and then I was cocky when I wrote Version 2. But both versions humbled me, humbled me hard. I was a good programmer, maybe, but I wasn't good enough. I had a lot to learn, and I wanted to learn it all.

So it was relatively easy to start over on #3 and #4. I was learning, and I had the luxury of time. Ah, to be a grad student again!

In the end, I wrote a program that I could release to users and colleagues with pride. Along the way, Smalltalk taught me a lot about OOP, and writing the program taught me a lot about expert system shells. It was time well spent.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

August 11, 2011 7:59 PM

Methods, Names, and Assumptions in Adding New Code to a Program

Since the mid-1990s, there has been a healthy conversation around refactoring, the restructuring of code to improve its internal structure without changing its external behavior. Thanks to Martin Fowler, we have a catalog of techniques for refactoring that help us restructure code safely and reliably. It is a wonderful tool for learners and practitioners alike.

When it comes to writing new code, we are not so lucky. Most of us learn to program by learning to write new code, yet we rarely learn techniques for adding code to a program that are as safe, reliable, and effective as the refactorings we know and love.

You might think that adding code would be relatively simple, at least compared to restructuring a large, interconnected web of components. But how can we move with the same confidence when adding code as we do when we follow a meticulous refactoring recipe under the protection of good unit tests? Test-driven design is a help, but I have never felt like I had the same sort of support writing new code as when I refactor.

So I was quite happy a couple of months ago to run across J.B. Rainsberger's Adding Behavior with Confidence. Very, very nice! I only wish I had read it a couple of months ago when I first saw the link. Don't make the same mistake; read it now.

Rainsberger gives a four-step process that works well for him:

  1. Identify an assumption that the new behavior needs to break.
  2. Find the code that implements that assumption.
  3. Extract that code into a method whose name represents the generalisation you're about to make.
  4. Enhance the extracted method to include the generalisation.
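
To make the steps concrete, consider a toy illustration in Scheme -- my example, not Rainsberger's, and all the names are invented. Suppose a report procedure assumes every amount is in dollars, and the new behavior needs to break that assumption:

    ; Steps 1 and 2: the assumption -- amounts are always in dollars --
    ; is implemented by the "$" buried inside this procedure.
    (define (report-line amount)
      (string-append "Total: $" (number->string amount)))

    ; Step 3: extract that code into a procedure whose name represents
    ; the generalization we are about to make.
    (define (currency-string amount)
      (string-append "$" (number->string amount)))

    (define (report-line amount)
      (string-append "Total: " (currency-string amount)))

    ; Step 4: enhance the extracted procedure to include the generalization.
    (define (currency-string amount . symbol)
      (string-append (if (null? symbol) "$" (car symbol))
                     (number->string amount)))

    (report-line 42)    ; => "Total: $42" -- old behavior intact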

I was first drawn to the idea that a key step in adding new behavior is to make a new method, procedure, or function. This is one of the basic skills of computer programming. It is one of the earliest topics covered in many CS1 courses, and it should be taught sooner in many others.

Even still, most beginners seem to fear creating new methods. Even more advanced students will regress a bit when learning a new language, especially one that works differently than the languages they know well. A function call introduces a new point of failure: parameter passing. When worried about succeeding, students generally try to minimize the number of potential points of failure.

Notice, though, that Rainsberger starts not with a brand new method, empty except for the new code to be written. This technique asks us first to factor out existing code into a new method. This breaks the job of writing the new code into two, smaller steps: First refactor, relying on a well-known technique and the existing tests to provide safety. Second, add the new code. (These are Steps 3 and 4 in Rainsberger's technique.)

That isn't what really grabbed my attention first, however. The real beauty for me is that extracting a method forces us to give it a name. I think that naming gives us great power, and not just in programming. A lot of times, CS textbooks make a big deal about procedures as a form of abstraction, and they are. But that often feels so abstract... For programmers, especially beginners, we might better focus on the fact that procedures help us to name things in our programs. Names, we get.

By naming a procedure that contains a few lines of code, we get to say what the code does. Even the best-factored code with good variable names tends to say how something is done, not what it is doing. Creating and calling a method separates the two: the client says what the method does, and the server implements how it is done. This separation gives us new power: to refactor the code in other ways, certainly. Rainsberger reminds us that it also gives us the power to add code more reliably!

"How can I add code to a program? Write a new function." This is an unsurprising, unhelpful answer most of the time, especially for novices who just see this as begging the question. "Okay, but what do I do then?" Rainsberger makes it a helpful answer, if a bit surprising. But he also puts it in a context with more support, what to do before we start writing the new code.

Creating and naming procedures was the strongest take-home point for me when I first read this article. As the ideas steeped in my mind for a few days, I began to have a greater appreciation for Rainsberger's focus on assumptions. Novice thinkers have trouble with assumptions. This is true whether they are learning to program, learning to read and analyze literature, or learning to understand and argue public policy issues. They have a hard time seeing assumptions, both the ones they make and the ones made by other writers. When the assumptions are pointed out, they are often unsure what to do with them, and are tempted to skip right over them. Assumptions are easy to ignore sometimes, because they are implicit and thus easy to lose track of when deep in an argument.

Learning to understand and reason about assumptions is another important step on the way to mature thinking. In CS courses, we often introduce the idea of preconditions and postconditions in Data Structures. (Students also see them in a discrete structures course, but there they tend to be presented as mathematical tools. Many students dismiss their value out of hand). Writing pre- and postconditions for a method is a way to make assumptions in your program explicit. Unfortunately, most beginners don't yet see the value in writing them. They feel like an extra, unnecessary step in a process dominated by the uncertainty they feel about their code. Assuring them that these invariants help is usually like pushing a rock up a hill. Tomorrow, you get to do it again.

One thing I like about Rainsberger's article is that it puts assumptions into the context of a larger process aimed at helping us write code more safely. Mathematical reasoning about code does that, too, but again, students often see it as something apart from the task of programming. Rainsberger's approach is undeniably about code. This technique may encourage programmers to begin thinking about assumptions sooner, more often, and more seriously.

As I said, I haven't seen many articles or books talk about adding code to a program in quite this way. Back in January, "Uncle Bob" Martin wrote an article in the same spirit as this, called The Transformation Priority Premise. It offers a grander vision, a speculative framework for all additions to code. If you know Uncle Bob's teachings about TDD, this article will seem familiar; it fits quite nicely with the mentality he encourages when using tests to drive the growth of a program. While his article is more speculative, it seems worthy of more thought. It encourages the tiniest of steps as each new test provokes new code in our program. Unfortunately, it takes such small steps that I fear I'd have trouble getting my students, especially the novices, to give it a fair try. I have a hard enough time getting most students to grok the value of TDD, even my seniors!

I have similar concerns about Rainsberger's technique, but his pragmatism and unabashed focus on code gives me hope that it may be useful in teaching students how to add functionality to their programs.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 20, 2011 11:55 AM

Learning From Others

I've been reading through some of the back entries in Vivek Haldar's blog and came across the entry Coding Blind. Haldar notes that most professionals and craftsmen learn their trade at least in part by watching others work, but that's not how programmers learn. He says that if carpenters learned the way programmers do, they'd learn the theory of how to hammer nails in a classroom and then do it for the rest of their careers, with every other carpenter working in a different room.

Programmers these days have a web full of open-source code to study, but that's not the same. Reading a novel doesn't give you any feel at all for what writing a novel is like, and the same is true for programming. Most CS instructors realize this early in their careers: showing students a good program shows them what a finished program looks like, but it doesn't give them any feel at all for what writing a program is like. In particular, most students are not ready for the false starts and the rewriting that even simple problems will cause them.

Many programming instructors try to bridge this gap by writing code live in class, perhaps with student participation, so that students can experience some of the trials of programming in a less intimidating setting. This is, of course, not a perfect model; instructors tend not to make the same kind of errors as beginners, or as many, but it does have some value.

Haldar points out one way that other kinds of writers learn from their compatriots:

Great artists and writers often leave behind a large amount of work exhaust other than their finished masterpieces: notebooks, sketches, letters and journals. These auxiliary work products are as important as the finished item in understanding them and their work.

He then says, "But in programming, all that is shunned." This made me chuckle, because I recently wrote a bit about my experience having students maintain engineering notebooks for our Intelligent Systems course. I do this so that they have a record of their thoughts, a place to dump ideas and think out loud. It's an exercise in "writing to learn", but Haldar's essay makes me think of another potential use of the notebooks: for other students to read and learn from. Given how reluctant my students were to write at all, I suspect that they would be even more reluctant to share their imperfect thoughts with others in the course. Still, perhaps I can find a way to marry these ideas.

cover of rpg's Writers' Workshops

This makes me think of another way that writers learn from each other, writers' workshops. Code reviews are a standard practice in software, and PLoP, the Pattern Languages of Programs conference, has adapted the writers' workshop form for technical writers. One of the reasons I like to teach certain project courses in a studio format is that it gives all the teams an opportunity to see each other's work and to talk about design, coding, and anything else that challenges or excites them. Some semesters, it works better than others.

Of course, a software team itself has the ability to help its members learn from one another. One thing I noticed more this semester than in the past was students commenting that they had learned from their teammates by watching them work. Some of the students who said this viewed themselves as the weakest links on their teams and so saw this as a chance to approach their more accomplished teammates' level. Others thought of themselves as equals to their teammates yet still found themselves learning from how others tackled problems or approached learning a new API. This is a team project succeeding as we faculty hope it might.

Distilling experience with techniques into more than just a finished example or two is one of the motivations for the software patterns community. It's one of the reasons I felt so comfortable with both the literary form and the community: its investment in and commitment to learning from others' practice. That doesn't operate at quite the fundamental level of watching another carpenter drive a nail, but it does strike close to the heart of the matter.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 13, 2011 2:26 PM

Patterns for Naming Things

J.B. Rainsberger's short entry grabbed my attention immediately. I think that Rainsberger is talking about a pair of complementary patterns that all developers learn at some point or other as they write more and bigger programs. He elegantly captures the key ideas in only a few words.

These patterns balance common forces between giving things long names and giving things short names. A long name can convey more information, but a short name is easier to type, format, and read. A long name can be a code smell that indicates a missing abstraction, but a short name can be a code smell that indicates premature generalization, a strange kind of YAGNI violation.

The patterns differ in the contexts in which they appear successfully. Long names are most useful the first time or two you implement an idea. At that point, there are few or no other examples of the idea in our code, so there is not yet a need for an abstraction. A long name can convey valuable information about the idea. As an idea appears more often, two or more long names will begin to overlap, which is a form of duplication. We are now ready to factor out the abstraction common to them. Now the abstraction conveys some or all of the information and short names become more valuable.
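
A small made-up example in Scheme shows the two patterns at work:

  ; The first time we implement an idea, a long name carries the
  ; information, because no abstraction exists yet:
  (define (count-words-longer-than-five words)
    (cond ((null? words) 0)
          ((> (string-length (car words)) 5)
           (+ 1 (count-words-longer-than-five (cdr words))))
          (else (count-words-longer-than-five (cdr words)))))

  ; Once the idea recurs and we factor out the abstraction, the
  ; abstraction carries the information, and short names suffice:
  (define (count-if test? lst)
    (cond ((null? lst) 0)
          ((test? (car lst)) (+ 1 (count-if test? (cdr lst))))
          (else (count-if test? (cdr lst)))))

  (define (long-word? w) (> (string-length w) 5))

  (count-if long-word? '("pattern" "name"))    ; => 1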

I need to incorporate these into any elementary pattern language I document, as well as in the foundation patterns layer of any professional pattern language. One thing I would like to think more about is how these patterns relate to Kent Beck's patterns Intention-Revealing Name and Type-Revealing Name.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

April 19, 2011 6:04 PM

A New Blog on Patterns of Functional Programming

(... or, as my brother likes to say about re-runs, "Hey, it's new to me.")

I was excited this week to find, via my Twitter feed, a new blog on functional programming patterns by Jeremy Gibbons, especially an entry on recursion patterns. I've written about recursion patterns, too, though in a different context and for a different audience. Still, the two pieces are about a common phenomenon that occurs in functional programs.
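
To give a flavor of the phenomenon, here is a tiny illustration of my own in Scheme, not an example from either piece: two procedures share one recursive shape, and the shape itself can be named.

  ; sum and product follow the same pattern of recursion over a list...
  (define (sum lst)
    (if (null? lst) 0 (+ (car lst) (sum (cdr lst)))))

  (define (product lst)
    (if (null? lst) 1 (* (car lst) (product (cdr lst)))))

  ; ... and the pattern itself can be written down once, as a fold:
  (define (fold op unit lst)
    (if (null? lst)
        unit
        (op (car lst) (fold op unit (cdr lst)))))

  (define (sum lst)     (fold + 0 lst))
  (define (product lst) (fold * 1 lst))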

I poked around the blog a bit and soon ran across articles such as Lenses are the Coalgebras for the Costate Comonad. I began to fear that the patterns on this blog would not be able to help the world come to functional programming in the way that the Gang of Four book helped the world come to object-oriented programming. As difficult as the GoF book was for every-day programmers to grok, it eventually taught them much about OO design and helped to make OO programming mainstream. Articles about coalgebras and the costate comonad are certainly of value, but I suspect they will be most valuable to an audience that is already savvy about functional programming. They aren't likely to reach every-day programmers in a deep way or help them learn The Functional Way.

But then I stumbled across an article that explains OO design patterns as higher-order datatype-generic programs. Gibbons didn't stop with the formalism. He writes:

Of course, I admit that "capturing the code parts of a pattern" is not the same as capturing the pattern itself. There is more to the pattern than just the code; the "prose, pictures, and prototypes" form an important part of the story, and are not captured in a HODGP representation of the pattern. So the HODGP isn't a replacement for the pattern.

This is one of the few times that I've seen an FP expert speak favorably about the idea that a design pattern is more than just the code that can be abstracted away via a macro or a type class. My hope rebounds!

There is work to be done in the space of design patterns of functional programming. I look forward to reading Gibbons's blog as he reports on his work in that space.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

December 03, 2010 3:56 PM

A Commonplace for Novelists and Programmers

To be a moral human being is to pay, be obliged to pay, certain kinds of attention.

-- Susan Sontag, "The Novelist and Moral Reasoning"

For some reason, this struck me yesterday as important, not just as a human being, but also as a programmer. I am reminded that many of my friends and colleagues in the software world speak passionately of our moral obligations when writing code. The software patterns community, especially, harkens to Christopher Alexander's call regarding the moral responsibility of creators.

(If you want to read more from Sontag without tracking down her essay, check out this transcript excerpted from a speech she gave in 2004.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

November 03, 2010 2:12 PM

Ideas from Readers on Recent Posts

A few recent entries have given rise to interesting responses from readers. Here are two.

Fat Arrows

Relationships, Not Characters talked about how the most important part of design often lies in the space between the modules we create, whether objects or functions, not the modules themselves. After reading this, John Cook reminded me about an article by Thomas Guest, Distorted Software. Near the end of that piece, which talks about design diagrams, Guest suggests that the arrows in application diagrams should be larger, so that they would be proportional to the time their components take to develop. Cook says:

We typically draw big boxes and little arrows in software diagrams. But most of the work is in the arrows! We should draw fat arrows and little boxes.

I'm not sure that would make our OO class diagrams better, but it might help us to think more accurately!

My Kid Could Do That

Ideas, Execution, and Technical Achievement wistfully admitted that knowing how to build Facebook or Twitter isn't enough to become a billionaire. You have to think to do it. David Schmüdde mentioned this entry in his recent My Kid Could Do That, which starts:

One of my favorite artists is Mark Rothko. Many reject his work thinking that they're missing some genius, or offended that others see something in his work that they don't. I don't look for genius because genuine genius is a rare commodity that is only understood in hindsight and reflection. The beauty of Rothko's work is, of course, its simplicity.

That paragraph connects with one of the key points of my entry: Genius is rare, and in most ways irrelevant to what really matters. Many people have ideas; many people have skills. Great things happen when someone brings these ingredients together and does something.

Later, he writes:

The real story with Rothko is not the painting. It's what happens with the painting when it is placed in a museum, in front of people at a specific place in the world, at a specific time.

In a comment on this post, I thanked Dave, and not just because he discusses my personal reminiscence. I love art but am a novice when it comes to understanding much of it. My family and I saw an elaborate Rothko exhibit at the Smithsonian this summer. It was my first trip to the Smithsonian complex -- a wonderful two days -- and my first extended exposure to Rothko's work. I didn't reject his art, but I did leave the exhibit puzzled. What's the big deal?, I wondered. Now I have a new context in which to think about that question and Rothko's art. I didn't expect the new context to come from a connection a reader made to my post on tech start-up ideas that change the world!

I am glad to know that thinkers like Schmüdde are able to make connections like these. I should note that he is a professional artist (both visual and aural), a teacher, and a recovering computer scientist -- and a former student of mine. Opportunities to make connections arise when worlds collide.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Personal, Software Development

November 02, 2010 11:00 AM

Relationships, Not Characters

Early in his book "Impro", Keith Johnstone quotes playwright Edward Bond to the effect:

Drama is about relationships, not about characters.

This immediately brought to mind Alan Kay's insistence that object-oriented programmers too often focus so much on the objects in their programs that they lose sight of something more important: the space between the objects. A few years ago, I wrote about this idea in an entry called Software in Negative Space. It remains one of my most-read articles.

The secret to good OO design is in the ma, the web of relationships that make up a complex system, not in the objects themselves.

I think this is probably true of good design in any style, because it is really about how complex systems can manage and use encapsulation. The very best OO design patterns show us how multiple objects interact via message passing to resolve a thorny set of forces. The individual objects don't solve the problem; the solution lies in their interfaces and delegation of responsibility. We need to think about our designs at that level, too.

Now that functional programming is receiving so much mainstream attention, this is a good time to think about when and how good functional design is about relationships, not (just) functions. A combinator is an example: the particular functions are not as important as the way they hook together to solve a bigger problem. Designing functions in this style requires thinking more about the interfaces that expose abstractions to the world, and how other modules use them as services, and less about their implementation. Relationships.
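
A tiny sketch in Scheme makes the point. With a combinator such as compose, the particular functions matter less than the relationship that hooks them together:

  (define (compose f g)
    (lambda (x) (f (g x))))

  (define (add1 n)   (+ n 1))
  (define (double n) (* n 2))

  ; The relationship does the work; either function could be swapped
  ; out without disturbing the design.
  ((compose add1 double) 5)    ; => 11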


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

October 30, 2010 1:01 PM

The Time is Right for Functional Design Patterns

Back in 1998, I documented some of the ideas that I used to teach functional programming in Scheme. The result was the beginnings of a small pattern language I called Roundabout. When I workshopped this paper at PLoP, I had a lot of fun, but it was a challenge. My workshop consisted of professional developers, most working in Java, with little or no experience in Scheme. Worse, many had been exposed to Lisp as undergraduates and had had negative experiences. Even though they all seemed open to my paper, subconsciously their eyes and ears were closed.

We gathered over lunch so that I could teach a quick primer on how to read Scheme. The workshop went well, and I received much useful feedback.

Still, that wasn't the audience for Roundabout. They were all OO programmers. To the extent they were looking for patterns to use, they were looking for GoF-style OO patterns, C++, Java, and enterprise patterns. I had a hard time finding an audience for Roundabout. Most folks in the OO world weren't ready yet; they were still trying to learn how to do OOD and OOP really well. I gave a short talk on how I use Roundabout in class at an ICFP workshop, but the folks there already knew these patterns well, and most were beyond the novice level at which they live. Besides, the functional programming world wasn't all that keen on the idea of patterns at all, not patterns in the style of Christopher Alexander.

Fast forward to 2010. We now have Scala and Clojure on the JVM. A local developer I know is working hard to wrap his head around FP. Last week, he sent me a link to an InfoQ talk by Aino Corry on Functional Design Patterns. The talk is about patterns more generally, what they are and how GoF patterns fit in the functional programming world. At about the 19:00 mark, she mentions... Roundabout! My colleague is surprised to hear my name and tweets his excitement.

My work on functional design patterns is resurfacing. Why? The world is changing. With Scala and Clojure poised to make noise in the Java enterprise world, functional programming is here. People are talking about Scheme and especially Haskell again. Functional programming is trendy now, with famous OO consultants talking it up and making the rounds at conferences and workshops giving talks about how important it is. (Some folks have been saying that for a while...)

The software patterns "movement" grew out of a need felt by many programmers around the world to learn how to do OO design and programming. Most had been weaned on traditional procedural programming and built up years of experience programming in that style, only to find that their experience didn't transfer smoothly into the OO world. Patterns were an ideal vehicle for documenting OO expertise and communicating it to programmers as they learned the new style.

We now face a similar transition in industry and even academia, as languages like Scala and Clojure begin to change how professionals build their systems. They are again finding that their experience -- now with OO -- does not transfer into the functional world without a few hitches. What we need now are papers that document functional design and programming patterns, both at the most basic level (like Roundabout) and at a higher level (like GoF). We have some starting points from which to begin the journey. There has been some good work done on refactoring functional-style programs, and refactoring is a complement to patterns.

This is a great opportunity for experienced functional programmers to teach their style to a large new audience that now has incentive to learn it. This is also a great opportunity to demonstrate again the value of the software pattern as a form for documenting and teaching how to build things. The question is, what to do next.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

October 14, 2010 11:17 PM

Strange Loop 2010, Day 1 Afternoon

I came back from a lunchtime nap ready for more ideas.

You Already Use Closures in Ruby

This is one of the talks I chose for a specific personal reason. I was looking for ideas I might use in a Cedar Valley Tech Talk on functional programming later this month, and more generally in my programming languages course. I found one, a simple graphical notation for talking about closures as a combination of code and environment. Something the speaker said about functions sharing state also gave me an idea for how to use the old koan on the dual "a closure is a poor man's object / an object is a poor man's closure".
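
The talk was in Ruby, but the idea translates directly to Scheme, so here is a rough sketch of my own: a closure is code plus the environment it was created in, and two closures created in the same environment share state. That sharing is the heart of the koan.

  (define (make-counter)
    (let ((count 0))                       ; one shared environment
      (cons (lambda ()                     ; closure #1: code + environment
              (set! count (+ count 1))
              count)
            (lambda () count))))           ; closure #2, same environment

  (define c (make-counter))
  ((car c))    ; => 1
  ((car c))    ; => 2
  ((cdr c))    ; => 2 -- both closures see the same count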

NoSQL at Twitter

Speaker Kevin Weil started by decrying his own title. Twitter uses MySQL and relational databases everywhere. They use distributed data stores for applications that need specific performance attributes.

Weil traced Twitter's evolution toward different distributed data solutions. They started with Syslog for logging applications. It served their early needs but didn't scale well. They then moved to Scribe, which was created by the Facebook team to solve the same problem and then open-sourced.

This move led to a realization. Scribe solved Twitter's problem and opened new vistas. It made logging data so easy that they started logging more. Having more data gave them better insight into the behavior of their users, behaviors they didn't even know to look for before. Now, data analytics is one of Twitter's most interesting areas.

The Scribe solution works, scaling with no changes to the architecture as data throughput doubles; they just add more servers. But this data creates a 'write' problem: given today's technology, it takes something like 42 hours to write 12 TB to a single hard drive. This led Twitter to add Hadoop to its toolset. Hadoop is both scalable and powerful. Weil mentioned a 4,000-node cluster at Yahoo! that had sorted one terabyte of integers in 62 seconds.

The rest of Weil's talk focused on data analytics. The key point underlying all he said was this: It is easy to answer questions. It is hard to ask the right questions. This makes experimental programming valuable, and by extension a powerful scripting language and short turnaround times. They need time to ask a lot of questions, looking for good ones and refining promising questions into more useful ones. Hadoop is a Java platform, which doesn't fit those needs.

So, Twitter added Pig, a high-level language that sits atop Hadoop. Programs written in Pig are easy to read and almost English-like. Equivalent SQL programs would probably be shorter, but Pig compiles to MapReduce jobs that run directly on Hadoop. Pig exacts a performance penalty, but the Twitter team doesn't mind. Weil captured why in another pithy sentence: I don't mind if a 10-minute job runs in 12 minutes if it took me 10 minutes to write the script instead of an hour.

Twitter works on several kinds of data-analytic problems. A couple stood out:

  • correlating big data. How do different kinds of users behave -- mobile, web, 3rd-party clients? What features hook users? How do user cohorts work? What technical details go wrong at the same time, leading to site problems?

  • research on big data. What can we learn from a user's tweets, the tweets of those they follow, or the tweets of those who follow them? What can we learn from asymmetric follow relationships about social and personal interests?

As much as Weil had already described, there was more! HBase, Cassandra, FlockDB, .... Big data means big problems and big opportunities, which lead to hybrid solutions that optimize competing forces. Interesting stuff.

Solutions to the Expression Problem

This talk was about Clojure, which interests me for obvious reasons, but the real reason I chose this talk was that I wanted to know what the expression problem is! Like many in the room, I had experienced the expression problem without knowing it by this name:

The Expression Problem is a new name for an old problem. The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (e.g., no casts).

Speaker Chris Houser used an example in which we need to add a behavior to an existing class that is hard or impossible to modify. He then stepped through four possible solutions: the adapter pattern and monkey patching, which are available in languages like Java and Ruby, and multimethods and protocols, which are available in Clojure.

I liked two things about this talk. First, Houser taught his topic in a "failure-driven" way: pose a problem, solve it using a known technique, expose a weakness, and move on to a more effective solution. I often use this technique when teaching design patterns. Second, the breadth of the problem and its potential solutions encouraged language geeks to talk about ideas in language design. The conversation included not only Java and Clojure but also Javascript, C#, and macros.
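
Scheme is not Clojure, but a simple operation table -- a sketch of my own, not one of Houser's solutions -- gives the flavor of the open approaches: we can add both new cases and new operations without touching existing code.

  (define methods '())    ; entries: ((operation . type) . procedure)

  (define (put-method! op type proc)
    (set! methods (cons (cons (cons op type) proc) methods)))

  (define (apply-method op datum)    ; datum is a tagged list: (type parts...)
    (let ((entry (assoc (cons op (car datum)) methods)))
      (if entry
          ((cdr entry) datum)
          (error "no method for" op (car datum)))))

  ; Existing code defines an operation over one case...
  (put-method! 'area 'circle
               (lambda (c) (* 3.14159 (cadr c) (cadr c))))

  ; ... and later code adds a new case without touching what exists:
  (put-method! 'area 'square
               (lambda (s) (* (cadr s) (cadr s))))

  (apply-method 'area '(square 3))    ; => 9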

Guy Steele

My notes on this talk aren't that long, but it was important enough to have its own entry. Besides, this entry is already long enough!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

October 06, 2010 11:24 AM

Studying Program Repositories

Hackage, through Cabal

Last week, Garrett Morris presented an experience report at the 2010 Haskell Symposium entitled Using Hackage to Inform Language Design, which is also available at Morris's website. Hackage is an online repository for the Haskell programming community. Morris describes how he and his team studied programs in the Hackage repository to see how Haskell programmers work with a type-system concept known as overlapping instances. (The details of this idea aren't essential to this entry, but if you'd like to learn more, check out Section 6.2.3 in this user guide.)

In Morris's search, he sought answers to three questions:

  • What proportion of the total code on Hackage uses overlapping instances?
  • In code that uses overlapping instances, how many instances overlap each other?
  • Are there common patterns among the uses of overlapping instances?

Morris and his team used what they learned from this study to design the class system for their new language. They found that their language did not need to provide the full power of overlapping instances in order to support what programmers were really doing. Instead, they developed a new idea they call "instance chains" that was sufficient to support a large majority of the uses of overlapping instances. They are confident that their design can satisfy programmers because the design decision reflects actual uses of the concept in an extensive corpus of programs.

I love this kind of empirical research. One of the greatest assets the web gives us is access to large corpora of code: SourceForge and GitHub are examples of large public repositories, and there are an amazing number of smaller domain-specific repositories such as Hackage and RubyGems. Why design languages or programming tools blindly when we can see how real programmers work through the code they write?

The more general notion of designing languages via observing behavior, forming theories, and trying them out is not new. In particular, I recommend the classic 1981 paper Design Principles Behind Smalltalk. Dan Ingalls describes the method by which the Smalltalk team grew its language and system as explicitly paralleling the scientific method. Morris's paper talks about a similar method, only with the observation phase grounded in an online corpus of code.

Not everyone in computer science -- or outside CS -- thinks of this method as science. Just this weekend Dirk Riehle blogged about the need to broaden the idea of science in CS. In particular, he encourages us to include exploration as a part of our scientific method, as it provides a valuable way for us to form the theories that we will test using the sort of experiments that everyone recognizes as science.

Dirk Riehle's cycle of theory formation and validation

Unfortunately, too many people in computer science do not take this broad view. Note that Morris published his paper as an experience report at a symposium. He would have a hard time trying to get an academic conference program committee to take such a paper in its research track, without first dressing it up in the garb of "real" research.

As I mentioned in an earlier blog entry, one of my grad students, Nate Labelle, did an M.S. thesis a few years ago based on information gathered from the study of a large corpus of programs. Nate was interested in dependencies among open-source software packages, so he mapped the network of relationships within several different versions of Linux and within a few smaller software packages. This was the raw data he used to analyze the mathematical properties of the dependencies.

In that project, we trolled repositories looking for structural information about the code they contained. Morris's work studied Hackage to learn about the semantic content of the programs. While on sabbatical several years ago, I started a much smaller project of this sort to look for design patterns in functional programs. That project was sidetracked by some other questions I ran across, but I've always wanted to get back to it. I think there is a lot we could learn about functional programming in this way that would help us to teach a new generation of programmers in industry.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

July 21, 2010 4:17 PM

Two Classic Programs Available for Study

I just learned that the Computer History Museum has worked with Apple Computer to make source code for MacPaint and QuickDraw available to the public. Both were written by Bill Atkinson for the original Mac, drawing on his earlier work for the Lisa. MacPaint was the iconic drawing program of the 1980s. The utility and quality of this one program played a big role in the success of the Mac. Andy Hertzfeld, another Apple pioneer, credited QuickDraw for the success of the Mac, owing to its speed at producing the novel interface that defined the machine to the public. These programs were engineering accomplishments of a different time:

MacPaint was finished in October 1983. It coexisted in only 128K of memory with QuickDraw and portions of the operating system, and ran on an 8 Mhz processor that didn't have floating-point operations. Even with those meager resources, MacPaint provided a level of performance and function that established a new standard for personal computers.

Though I came to Macs in 1988 or so, I was never much of a MacPaint user, but I was aware of the program through friends who showed me works they created using it. Now we can look under the hood to see how the program did what it did. Atkinson implemented MacPaint in one 5,822-line Pascal program and four assembly language files for the Motorola 68000 totaling 3,583 lines. QuickDraw consists of 17,101 lines of Motorola 68000 assembly in thirty-seven modules.

I speak Pascal fluently and am eager to dig into the main MacPaint program. What patterns will I recognize? What design features will surprise me, and teach me something new? Atkinson is a master programmer, and I'm sure to learn plenty from him. He was working in an environment that so constrained his code's size that he had to do things differently than I ever think about programming.

This passage from the Computer History Museum piece shares a humorous story that highlights how Atkinson spent much of his time tightening up his code:

When the Lisa team was pushing to finalize their software in 1982, project managers started requiring programmers to submit weekly forms reporting on the number of lines of code they had written. Bill Atkinson thought that was silly. For the week in which he had rewritten QuickDraw's region calculation routines to be six times faster and 2000 lines shorter, he put "-2000" on the form. After a few more weeks the managers stopped asking him to fill out the form, and he gladly complied.

This reminded me of one of my early blog entries about refactoring. Code removed is code earned!

I don't know assembly language nearly as well as I know Pascal, let alone Motorola 68000 assembly, but I am intrigued by the idea of being able to study more than 20,000 lines of assembly language that work together on a common task and also expose a library API for other graphics programs. Sounds like great material for a student research project, or five...

I am a programmer, and I love to study code. Some people ask why anyone would want to read listings of any program, let alone a dated graphics program from more than twenty-five years ago. If you use software but don't write it, then you probably have no reason to look under this hood. But keep in mind that I study how computation works and how it solves problems in a given context, especially when it has limited access to time, space, or both.

But... People write programs. Don't we already know how they work? Isn't that what we teach CS students, at least ones in practical undergrad departments? Well, yes and no. Scientists from other disciplines often ask this question, not as a question but as an implication that CS is not science. I have written on this topic before, including this entry about computation in nature. But studying even human-made computation is a valuable activity. Building large systems and building tightly resource-constrained programs are still black arts.

Many programmers could write a program with the functionality of MacPaint these days, but only a few could write a program that offers such functionality under similar resource limitations. That's true even today, more than two decades after Atkinson and others of his era wrote programs like this one. Knowledge and expertise matter, and most of it is hidden away in code that most of us never get to see. Many of the techniques used by masters are documented either not well or not at all. One of the goals of the software patterns community is to document techniques and the design knowledge needed to use them effectively. And one of the great services of the free and open-source software communities is to make programs and their source code accessible to everyone, so that great ideas are available to anyone willing to work to find them -- by reading code.

Historically, engineering has almost always run ahead of science. Software scientists study source code in order to understand how and why a program works, in a qualitatively different way than is possible by studying a program from the outside. By doing so, we learn about both engineering (how to make software) and science (the abstractions that explain how software works). Whether CS is a "natural" science or not, it is science, and source code embodies what it studies.

For me, encountering the release of source code for programs such as MacPaint feels something like a biologist discovering a new species. It is exciting, and an invitation to do new work.

Update: This is worth an update: a portrait of Bill Atkinson created in MacPaint. Well done.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

April 15, 2010 8:50 PM

Listen To Your Code

On a recent programming languages assignment, I asked students to write a procedure named if->boolean, whose spec was to recognize certain undesirable if expressions and return in their places equivalent but simpler boolean expressions. This procedure could be part of a simple refactoring engine for a Scheme-like language, though I don't know that we discussed it in such terms.

One student's procedure made me smile in a way only a teacher can smile. As expected, his procedure was a cond expression selecting among the undesirable ifs. His procedure began something like this:

    (define if->boolean
      (lambda (exp)
        (cond ((not (if? exp))
                 exp)
              ; Has the form (if condition #t #f)
              ((and (list? exp)
                    (= (length exp) 4)
                    (eq? (car exp) 'if)
                    (exp? (cadr exp))
                    (true-lit? (caddr exp))
                    (false-lit? (cadddr exp)))
                 (if->boolean (cadr exp)))
              ...

The rest of the procedure was more of the same: comments such as

    ; Has the form (if condition #t another)

followed by big and expressions to recognize the noted undesirable if and a call to a helper procedure that constructed the preferred boolean expression. The code was long and tedious, but the comments made its intent clear enough.

Next to his code, I wrote a comment of my own:

Listen to your code. It is saying, "Write syntax procedures!"

How much clearer this code would have been had it read:

    (define if->boolean
      (lambda (exp)
        (cond ((not (if? exp))   exp)
              ((trivial-if? exp) (if->boolean (cadr exp)))
              ...
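
The syntax procedure itself simply packages up the tests from the original clause. A sketch, reusing the exp?, true-lit?, and false-lit? helpers from the student's code:

    ; Recognizes the form (if condition #t #f).
    (define trivial-if?
      (lambda (exp)
        (and (list? exp)
             (= (length exp) 4)
             (eq? (car exp) 'if)
             (exp? (cadr exp))
             (true-lit? (caddr exp))
             (false-lit? (cadddr exp)))))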

When I talked about this code in class (presented anonymously in order to guard the student's privacy, in case he desired it), I made it clear that I was not being all that critical of the solution. It was thorough and correct code. Indeed, I praised it for the concise comments that made the intent of the code clearer than it would have been without them.

Still, the code could have been better, and students in the class -- many of whom had written code similar to this, though not always as good -- knew it. For several weeks now we have been talking about syntax procedures as a way to define the interface of an ADT and as a way to hide detail that complicates a piece of code.

Syntax Procedure is one pattern in a family of patterns we have learned in order to write structurally-recursive code over an inductive data type. Syntax procedures are, of course, much more broadly applicable than their role in structural recursion, but they are especially useful in helping us to isolate code for manipulating a particular data representation from the code that processes the data values and recurses over their parts. That can be especially useful when students are at the same time still getting used to a language as unusual to them as Scheme.

Michelangelo's Pieta

The note I wrote on the student's printout was one measure of chiding (Really, have you completely forgotten about the syntax procs we've been writing for weeks?) mixed with nine -- or ninety-nine -- measures of stylistic encouragement:

Yes! You have written a good piece of code, but don't stop here. The comment you wrote to help yourself create this code, which you left in so that you would be able to understand the code later, is a sign. Recognize the sign, and use what it says to make your code better.

Most experienced programmers can tell us about the dangers of using comments to communicate a program's intent. When a comment falls out of sync with the code it decorates, woe to future readers trying to understand and modify it. Sometimes, we need a comment to explain a design decision that shapes the code, which the code itself cannot tell us. But most of the time, a comment is just that, a decorator: something meant to spruce up the place when the place doesn't look as good as we know it should. If the lack of syntax procedures in my student's code is a code smell, then his comment is merely deodorant.

Listen to your code. This is one of my favorite pieces of advice to students at all levels, and to professional programmers as well. I even wrote this advice up in a pattern of its own, called Speak the Problem's Language. I knew this pattern from many years writing Smalltalk to build knowledge-based systems in domains from accounting to engineering. Then I read Peter Norvig's Paradigms of Artificial Intelligence Programming, and his Chapter 2 expressed the wisdom so well that I wanted to make it available as a core coding pattern in all of the pattern languages I was writing at the time. It is still one of my favorites. I hope my student comes to grok it, too.

After class, one of the other students in the class stopped to chat. She is a double major in CS and graphic design, and she wanted to say how odd it was to hear a computer science prof saying, "Listen to your code." Her art professors tell her this sort of thing all the time. Let the painting tell you where it wants to go. And, The sculpture is already in the stone; your job is to set it free.

Whatever we want to say about software 'engineering', when we write a program to do something new, our act of creation is not all that different from the painter's, the sculptor's, or the graphic designer's. We shape the code, and it shapes us. Listen.

This was a pretty good way to spend an afternoon talking to students.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

April 08, 2010 8:56 PM

Baseball, Graphics, Patterns, and Simplicity

I love these graphs. If you are a baseball fan or a lover of graphics, you will, too. Baseball is the most numbers-friendly of all sports, with a plethora of statistics that can be extracted easily from its mano-a-mano confrontations between pitchers and batters, catchers and baserunners, American League and National. British fan Craig Robinson goes a step beyond the obvious to create beautiful, information-packed graphics that reveal truths both quirky and pedestrian.

Some of the graphs are more complex than others. Baseball Heaven uses concentric rings, 30-degree wedges, and three colors to show that the baseball gods smile their brightest on the state of Arizona. I like some of these complex graphs, but I must admit that sometimes they seem like more work than they should be. Maybe I'm not visually-oriented in the right way.

I notice that many of my favorites have something in common. Consider this chart showing the intersection of the game's greatest home run hitters and the steroid era:

home run hitters and performance-enhancing drugs

It doesn't take much time looking at this graph for a baseball fanatic to sigh with regret and hope that Ken Griffey, Jr., has played clean. (I think he has.) A simple graphic, a poignant bit of information.

Next, take a look at this graph that answers the question, how does baseball's winningest team fare in the World Series?:

win/loss records and World Series performance

This one is more complex than the previous one, but the idea is simple: sort teams by win/loss record, identify the playoff and World Series teams by color, and make the World Series winners the main axis of the graph. Who would have thought that the playoff team with the worst record would win the World Series almost as often as the team with the best record?

Finally, take a look at what is my current favorite from the site, an analysis of interleague play's winners and losers.

winners and losers in interleague play

I love this one not for its information but for its stark beauty. Two grids with square and rectangular cells, two primary colors, and two shades of each are all we need to see that the two leagues have played pretty evenly overall, with the American League dominating in recent years, and that the AL's big guns -- the Yankees, Red Sox, and Angels -- are big winners against their NL counterparts. This graph is so pretty, I want to put a poster-sized print of it on my wall, just so that I can look at it every day.

The common theme I see among these and my other favorite graphs is that they are variations of the unpretentious bar chart. No arcs, line charts with doubly-labeled axes, or 3D effects required. Simple colors, simple labels, and simple bars illuminating magnitudes of interest.

Why am I drawn to these basic charts? Am I too simple to appreciate the more complex forms, the more complex interweaving of dimensions and data?

I notice this as a common theme across domains. I like simple patterns. I am most impressed when writers and artists employ creative means to breathe life into unpretentious forms. It is far more creative to use a simple bar chart in a nifty or unexpected way than it is to use spirals, swirls of color, concentric closed figures, or multiple interlocking axes and data sources. To take a relationship, however complex, and boil its meaning down to the simplest of forms -- taken with a twist, perhaps, but unmistakably straightforward nonetheless -- that is artistry.

I find that I have similar tastes in programming. The simplest patterns learned by novice programmers captivate me: a guarded action or linear search; structural recursion over a BNF definition or a mutual recursion over two; a humble strategy object or factory method. Simple tools used well, adapted to the unique circumstances of a problem, exposing just the right amount of detail and shielding us from all that doesn't matter. A pattern used a million times never in the same way twice. My tastes are simple, but I can taste a wide range of flavors.

Now that I think about it, this theme explains a bit of what I love about baseball. It is a simple game, played on a simple form with simple equipment. Though its rules address numerous edge cases, at bottom they, too, are as simple as one can imagine: throw the ball, hit the ball, catch the ball, and run. Great creativity springs from these simple forms when they are constrained by simple rules. Maybe this is why baseball fans see their sport as an art form, and why people like Craig Robinson are driven to express its truths in art of their own.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

February 27, 2010 9:40 AM

Increasing Duplication to Eliminate Duplication

In a recent entry, I discussed how Kent Beck's design advice "Exploit Symmetries" improves our ability to refactor code. When we take two things that are similar and separate them into parts that are either identical or different, we maximize the repetition in our code. This enables us to factor the repetition in the sharpest way.

Here is a simple example from Scheme. Suppose we are writing a procedure to walk down a vector and count how many of its items satisfy a particular condition. Along the way, we might produce code something like this:

  (define count-occurrences-of-test-at
    (lambda (test? von position)
      (if (>= position (vector-length von))
          0
          (if (test? (vector-ref von position))
              (+ 1 (count-occurrences-of-test-at test? von (+ position 1)))
              (count-occurrences-of-test-at test? von (+ position 1))))))

Our procedure duplicates code, but it may not be obvious at first how to factor it away. The problem is that the duplication occurs nested in a larger expression at two different levels: one is a consequent of the if expression, and the other is part of the computation that is the other consequent.

As a first step, we can increase the symmetry in our code by rewriting the else clause as a similar computation:

  (define count-occurrences-of-test-at
    (lambda (test? von position)
      (if (>= position (vector-length von))
          0
          (if (test? (vector-ref von position))
              (+ 1 (count-occurrences-of-test-at test? von (+ position 1)))
              (+ 0 (count-occurrences-of-test-at test? von (+ position 1)))))))

Now we see that the duplicated code is always part of the value of the expression, and the if expression itself is about choosing whether to add 1 or 0 to the value of the recursive call. We can use one of the distributive laws of code to factor out the repetition:

  (define count-occurrences-of-test-at
    (lambda (test? von position)
      (if (>= position (vector-length von))
          0
          (+ (if (test? (vector-ref von position)) 1 0)
             (count-occurrences-of-test-at test? von (+ position 1))))))

Voilà! No more duplication. By increasing the duplication in our code, we create a more symmetric relation, and the symmetry enables us to eliminate the duplication entirely. I have never thought of myself as thinking in terms of symmetry when I write code, but I do think in terms of regularity. My mind prefers code with regular form, both on the surface and in the programming structures I use. Oftentimes, my thorniest refactoring problems arise when I let irregular structure sneak into my code. When some duplication or complexity makes me uneasy, I find that taking the preparatory step of increasing regularity can help me see a way to simpler code.

Of course, we might approach this problem differently altogether, from a functional point of view, and write a different sort of solution:

  (define count-occurrences-of-test
    (lambda (test? von)
      ;; map each item to 1 or 0, then sum; apply expects a list
      (apply + (vector->list
                 (vector-map (lambda (item) (if (test? item) 1 0))
                             von)))))

This eliminates another form of duplication that we find across many procedures that operate on vectors: the common structure of the code simulates a loop over a vector. That is yet another form of regularity that we can exploit, once we begin to recognize it. Then, when we write new code, we can look for ways to express the solution in terms of the functional mapping pattern, so that we don't have to roll our own loop by hand. When imperative programmers begin to see this form of symmetry, they are on their way to becoming functional programmers. (It is also the kind of symmetry at the heart of MapReduce.)
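
To make that loop pattern concrete, here is a sketch of how we might capture it once and for all as a fold over vectors. (vector-fold is not a standard procedure; the definition below is my own.)

  ;; sketch: the hand-rolled loop over a vector, written exactly once
  (define vector-fold
    (lambda (f init von)
      (let loop ((position 0) (acc init))
        (if (>= position (vector-length von))
            acc
            (loop (+ position 1)
                  (f acc (vector-ref von position)))))))

  ;; the counting procedure, re-expressed with no loop of its own
  (define count-occurrences-of-test
    (lambda (test? von)
      (vector-fold (lambda (acc item) (if (test? item) (+ acc 1) acc))
                   0
                   von)))

Once the loop lives in one place, every procedure that walks a vector shrinks to a one-liner that says what it computes, not how.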


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

January 05, 2010 3:25 PM

In Programming Style, as in Most Things, Moderation

As I prepare for a new semester of teaching programming languages, I've been enjoying getting back into functional programming. Over break, someone somewhere pointed me toward a set of blog entries on why functional programming doesn't work. My first thought as I read the root entry was, "Just use Scheme or Lisp", or for that matter any functional language that supports mutation. But the author explicitly disallows this, because he is talking about the weakness of pure functional programming.

This is common but always seems odd to me. Many of the arguments one sees against FP are against "pure" functional programming: all FP, all the time. No one ever seems to talk in the same way about stateful imperative programming in, say, C. No one seems to place a purity test on stateful programming: "Try writing that without any functions!". Instead, we use state and sequencing and mutation throughout programs, and then selectively use functional style in the parts of the program where it makes sense. Why should FP be any different? We can use functional style throughout, and then selectively use state where it makes sense.

Mixing state and functions is the norm in imperative programming. The same should be true when we discuss functional programming. In the Lisp world, it is. I have occasionally read Lispers say that their big programs are about 90% functional and 10% imperative. That ratio seems a reasonable estimate for the large functional programs I have written, give or take a few percent either way.

Once we get to the point of acknowledging the desirability of mixing styles, the question becomes which proportion will serve us best in a particular environment. In game programming, the domain used as an example in the set of blog entries I read, perhaps statefulness plays a larger role than 10%. My own experience tells me that whenever I can emphasize functional style (or tightly-constrained stateful style, à la objects), I am usually better off. If I have to choose, I'll take 90:10 functional over 90:10 imperative any day.

If we allow ourselves to mix styles, then solving the author's opening problem -- making two completely unrelated functions interdependent -- becomes straightforward in a functional program: define the functions (or doppelgangers for them) in a closure and 'export' only the functions. To me, this is an improvement over the "pure" stateful approach, as it gives us state and dependent behavior without global variables mucking up the namespace of the program or the mindshare of the programmer.
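
Here is a minimal sketch of the idea in Scheme, with illustrative names of my own choosing. Two procedures share a counter through a closure, and the counter itself never escapes into the global namespace. (define-values is R7RS; in older Schemes, the same effect takes a bit more plumbing with set!.)

  ;; sketch: two procedures made interdependent through hidden state
  (define-values (record-event! events-so-far)
    (let ((count 0))                 ; visible only to the two lambdas
      (values
        (lambda () (set! count (+ count 1)))   ; mutator
        (lambda () count))))                   ; observer

Calling (record-event!) changes what (events-so-far) returns, yet no global variable ever enters the picture.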

Maybe part of the problem lies in how proponents of functional programming pitch things. Some are surely overzealous about the virtues of a pure style. But I think just as much of the problem lies in how limited people feel when they move outside their preferred style, even people with vast experience and a deep understanding of one way of thinking. Many programmers still struggle with object-oriented programming in much the same way.

Long ago, I learned from Ralph Johnson to encourage people to think in terms of programming style rather than programming paradigm. Style implies choice and freedom of thought, whereas paradigm implies rigidity and single-mindedness. I like to encourage students to develop facility with multiple styles, so that they will feel comfortable moving seamlessly in and out of styles, crossing borders whenever that suits the program they are writing. It is better for what we build to be defined by what we need, not by our limitations.

(That last is a turn of phrase I learned from the book Art and Fear, which I have referenced a couple of times before.)

I do take to heart one piece of advice derived from another article in the author's set of articles on FP. People who would like to see functional programming adopted more widely could help the cause by providing more guidance to people who want to learn. What happens if we ask a professional programmer to rewrite a video game (the author's specialty) in pure FP, or

... just about any large, complex C++ program for that matter[?] It's doable, but requires techniques that aren't well documented, and it's not like there are many large functional programs that can be used as examples ...

First, both sides of the discussion should step away from the call for pure FP and allow a suitable mix of functional and stateful programming. Meeting in the middle better reflects how real programmers work. It also broadens considerably the set of FP-style programs available as examples, as well as the set of good instructional materials.

But let's also give credence to the author's plea. We should provide better and more examples, and do a better job of documenting the functional programming patterns that professional programmers need. How to Design Programs is great, but it is written for novices. Maybe Structure and Interpretation of Computer Programs is part of the answer, and I've been excited to see so many people in industry turning to it as a source of professional development. But I still think we can do better helping non-FP software developers make the move toward a functional style from what they do now. What we really need is the functional programming equivalent of the Gang of Four book.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

November 20, 2009 3:35 PM

Learning Through Crisis

... an author never does more damage to his readers
than when he hides a difficulty.
-- Évariste Galois

Like many of the aphorisms we quote for guidance, this one is true, but not quite true if taken with the wrong sense of its words or at the wrong scale.

First, there are different senses of the word "difficulty". Some difficulties are incidental, and some are essential. An author should indeed hide incidental difficulties; they only get in the way. However, the author must not hide essential difficulty. Part of the author's job is to help the readers overcome the difficulty.

Second, we need to consider the scale of revelation and hiding. Authors who expose difficulties too soon only confuse their readers. Part of the author's job is to prepare the reader, to explain, inspire, and lead readers from their initial state into a state where they are ready to face the difficulty. At that moment, the author is ready to bring the difficulty out into the open. The readers are ready.

What if the reader has already uncovered the difficulty before meeting the author? In that case, the author must not try to hide it, to fool his readers. He must attack it head on -- perhaps with the same deliberation in explaining, inspiring, and leading, but without artifice. It is this sense in which Galois has nailed a universal truth.

If we replace "author" with "teacher" in this discussion we still have truths. The teacher's job is to eliminate incidental difficulties while exposing essential ones. Yet the teacher must be deliberate, too, and prepare the reader, the student, to overcome the difficulty. Indeed, a large part of the teacher's craft is the judicious use of simplification and unfolding, leading students to a deeper understanding.

Sometimes, we teachers can use difficulty to our advantage. As I discussed recently, the brain often learns best when it encounters its own limitations. Some say that is the only way we learn, but I don't think I believe the notion when taken to this extreme. But I think that difficulty is often the teacher's best source of leverage. Confront students with difficulty, and then help them to find resolution.

Ben Blum-Smith expresses a similar viewpoint in his recent nugget on teaching students to do proofs in mathematics. He launches his essay with remarks by Paul Lockhart, whose essay I discussed last summer. Blum-Smith's teaching nugget is this:

The impulse toward rigorous proof comes about when your intuition fails you. If your intuition is never given a chance to fail you, it's hard to see the point of proof.

This is just as true for us as we learn to create programs as it is when we learn to create proofs. If our intuition and our current toolbox never fail us, it's hard to see the point of learning a new tool -- especially one that is difficult to learn.

Blum-Smith then quotes Lockhart:

Rigorous formal proof only becomes important when there is a crisis -- when you discover that your imaginary objects behave in a counterintuitive way; when there is a paradox of some kind.

This quote doesn't inspire cool thoughts in me the way so many other passages in Lockhart's paper do, but one word stands way out on this reading: crisis. It inspires Blum-Smith as well:

... what happens is that when kids reach a point in their mathematical education where they are asked to prove things, they find
  • that they have no idea how to accomplish what is being asked of them, and
  • that they don't really get why they're being asked to do it in the first place.

The way out of this is to give them a crisis. We need to give them problems where the obvious pattern is not the real pattern. What you see is not the whole story! Then, there is a reason to prove something.

We need to give our programming students problems in which the obvious solution, the solution that flows naturally from their fingers onto the keyboards, doesn't feel right, or maybe even doesn't work at all. There is more to the story; there is reason to learn something new.

Teachers who know a lot and can present useful knowledge to students can be quite successful, and every teacher really needs to be able to play this role sometime. But that is not enough, especially in a world where knowledge is increasingly a plentiful commodity. Great teachers have to know how to create in the minds of their students a crisis: a circumstance in which they doubt what they know just enough to spur the hard work needed to learn.

A good writer can do this in print, but I think that this is a competitive advantage available to classroom teachers: they operate in a more visceral environment, in which one can create safe and reliably effective crises in their students' minds. If face-to-face university courses with domain experts are to thrive in the new, connected world, it will be because they are able to exploit this advantage.

~~~~

Postscript: Galois, the mathematician quoted at the top of this article, was born on October 25. That was the date of one of my latest confrontations with difficulty. Let me assure you: You can run, but you cannot hide!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

November 18, 2009 6:45 AM

The Gang-of-Four Book at Fifteen

One of the fun parts of teaching software engineering this semester has been revisiting some basic patterns in the design part of the course, and now as we discuss refactoring in the part of the course that deals with implementation and maintenance. 2009 is the 15th anniversary of the publication of Design Patterns, the book that launched software patterns into the consciousness of mainstream developers. Some folks reminisced about the event at OOPSLA this year, but I wasn't able to make it to Orlando. OOPSLA 2004 had a great 10th-anniversary celebration, which I had the good fortune to attend and write about.

I wasn't present at OOPSLA in 1994, when the book created an unprecedented spectacle in the exhibit hall; that just predates my own debut at OOPSLA. But I wish I had been!

InformIT recently ran a series of interviews with OO and patterns luminaries, sharing their thoughts on the book and on how patterns have changed the landscape of software development. The interview with Brian Foote had a passage that I really liked:

InformIT: How has Design Patterns changed your impressions about the way software is built?

The vision of reuse that we had in the object-oriented community in hindsight seems like a God that Failed. Just as the Space Shuttle never lived up to its promised reuse potential, libraries, frameworks, and components, while effective in as far as they went, never became foundations of routine software reuse that many had envisioned and hoped.

Instead, designs themselves, design ideas, patterns became the loci of reuse. We craft our ideas, by hand, into each new artifact we build.

This insight gets to the heart of why patterns matter. Other forms of reuse have their place and use, but they operate at a code level that is ultimately fragile in the face of the different contexts in which our programs operate. So they are, by necessity, limited as vehicles for reuse.

Design ideas are less specific, more malleable. They apply in a million contexts, though never in quite the same way. We mold them to the context of our program. The patterns we see in our designs and tests and implementations give us the abstract raw material out of which to create our programs. We still strive for reuse, but at a different level of thinking and working.

Read the full interview linked above. It is typical Brian Foote: entertaining and full of ideas presented slightly askew from the typical vantage point. That twist helps me to think differently about things that may have otherwise become commonplace. And as so often happens, I had to look a word up in the dictionary before I reached the end. I always seem to learn something from Brian!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

November 13, 2009 2:18 PM

Learning via Solutions to our Limitations

Yesterday I introduced refactoring in my software engineering course. Near the beginning of my code demo, I got sidetracked a bit when I mentioned that I would be using JUnit to run some automated tests. We have not talked about testing yet, automated or otherwise, and I thought that refactoring might be a good way to show its value.

One student wondered why he should go to the trouble; why not just write a few lines of code to do his own testing? My initial response turned too quickly to the idea of automation, which seemed natural given the context of refactoring. Automating tests is essential when we are working in a tight cycle of code-test-refactor-test. This wasn't all that persuasive to the student, who had not seen us refactor yet. Fortunately, another student, who has used testing frameworks at work, jumped in to point out the real flaw in what the first student had proposed: interspersing test code and production code. I think that was more persuasive to the class, and we moved on.

That got me to thinking about a different way to introduce both testing frameworks and refactoring next time. The key pedagogical idea is to focus on students' current experience and why they need something new. Necessity gives birth not only to invention but also to the desire to learn.

Some days, I think the web is magic. This popped into my newsfeed when I refreshed this morning:

whenever possible, introduce new skills and new knowledge as the solution to the limitations of old skills and old knowledge

Dan Meyer, who teaches high school math, has a couple of images contrasting the typical approach to lesson planning (introduce concept, pay "brief homage to workers who use it", work sample problems) to an approach based on the limitations of old skills:

  1. summarize briefly relevant prior skills
  2. show a "sample problem that renders those skills pretty well useless"
  3. describe the new skill

I like to teach design patterns using a more active version of this approach:

  1. give the students a problem to solve, preferably one that looks like a good fit for their current skill set
  2. as a group, explore the weaknesses in their solutions or the difficulties they had creating them
  3. introduce a pattern that balances the forces in this problem, and then discuss the more general context in which it applies

I need to remember to use this strategy with more of the new skills and techniques. It's hard to do this in the small for all techniques, but when I can tie the new idea to an error students make or a difficulty they have, I usually have better success. (My favorite success story with this approach was helping students to learn selection patterns -- ways to use if statements -- in CS1 back in the mid-1990s.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

September 09, 2009 10:04 PM

Reviewing a Career Studying Camouflage

Camouflage Conference poster

A few years ago I blogged when my university colleague Roy Behrens won a faculty excellence award in his home College of Humanities and Fine Arts. That entry, Teaching as Subversive Inactivity, taught me a lot about teaching, though I don't yet practice it very well. Later, I blogged about A Day with Camouflage Scholars, when I had the opportunity to talk about how a technique of computer science, steganography, related to the idea of camouflage as practiced in art and the military. Behrens is an internationally recognized expert on camouflage who organized an amazing one-day international conference on the subject here at my humble institution. To connect with these scholars, even for a day, was a great thrill. Finally, I blogged about Feats of Association when Behrens gave a mesmerizing talk illustrating "that the human mind is a connection-making machine, an almost unwilling creator of ideas that grow out of the stimuli it encounters."

As you can probably tell, I am a big fan of Behrens and his work. Today, I had a new chance to hear him speak, as he gave a talk associated with his winning another award, this time the university's Distinguished Scholar Award. After hearing this talk, no one could doubt that he is a worthy recipient, whose omnivorous and overarching interest in camouflage reflects a style of learning and investigation that we could all emulate. Today's talk was titled "Unearthing Art and Camouflage" and subtitled my research on the fence between art and science. It is a fence that more of us should try to work on.

The talk wove together threads from Roy's study of the history and practice of camouflage with bits of his own autobiography. It's a style I enjoyed in Kurt Vonnegut's Palm Sunday and have appreciated at least since my freshman year in college, when in an honors colloquium at Ball State University I was exposed to the idea of history from the point of view of the individual. As someone who likes connections, I'm usually interested in how accomplished people come to do what they do and how they make the connections that end up shaping or even defining their work.

Behrens was in the first generation of his family to attend college. He came from a small Iowa town to study here at UNI, where he first did research in the basement of the same Rod Library where I get my millions. He held his first faculty position here, despite not having a Ph.D. or the terminal degree of his discipline, an M.F.A. After leaving UNI, he earned an M.A. from the Rhode Island School of Design. But with a little lucky timing and a publication record that merited consideration, he found his way into academia.

From where did his interest in camouflage come? He was never interested in the military, though he served as a sergeant in the Vietnam-era Marine Corps. His interest lay in art, but he didn't enjoy the sort of art in which subjective tastes and fashion drove practice and criticism. Instead, he was interested in what was "objective, universal, and enduring" and as such was drawn to design and architecture. He and I share an interest in the latter; I began my undergraduate study as an architecture major. A college professor offered him a chance to do undergraduate research, and the result was a paper titled "Perception in the Visual Arts", in which he first examined the relationship between the art we make and the science that studies how we perceive it. This paper was later published in a major art education journal.

That project marked his first foray into perceptual psychology. Behrens mentioned a particular book that made an impression on him, Aspects of Form, edited by Lancelot Law Whyte. It contained essays on the "primacy of pattern" by scholars in both the arts and the sciences. Readers of this blog know of my deep interest in patterns, especially in software but in all domains. (They also know that I'm a library junkie and won't be surprised to know that I've already borrowed a copy of Whyte's book.)

Behrens noted that it was a short step from "How do people see?" to "How are people prevented from seeing?" Thus began what has been forty years of research on camouflage. He studies not only the artistic side of camouflage but also its history and the science that seeks to understand it. I was surprised to find that as a RISD graduate student he already intended to write a book on the topic. At the time, he contacted Rudolf Arnheim, who was then a perceptual psychologist in New York, with a breathless request for information and guidance. Nothing came of that request, I think, but in 1990 or so Behrens began a fulfilling correspondence with Arnheim that lasted until his death in 2007. After Arnheim passed away, Behrens asked his family to send all of his photos so that Behrens could make copies, digitize them, and then return the originals to the family. They agreed, and the result is a complete digital archive of photographs from Arnheim's long professional life. This reminded me of Grady Booch's interest in preservation, both of the works of Dijkstra and of the great software architectures of past and present.

While he was at RISD, Behrens did not know that the school library had 455 original "dazzle" camouflage designs in its collection and so missed out on the opportunity to study them. His ignorance of these works was not a matter of poor scholarship, though; the library didn't realize their significance and so had them uncataloged on a shelf somewhere. In 2007, his graduate alma mater contacted him with news of the items, and he has now begun to study them, forty years later.

As a grad student, Behrens became interested in the analogical link between (perceptual) figure-ground diagrams and (conceptual) Venn diagrams. He mentioned another book that helped him make this connection, Community and Privacy, by Serge Chermayeff and Christopher Alexander, whose diagrams of cities and relationships were Venn diagrams. This story brings to light yet another incidental connection between Behrens's work and mine. Alexander is, of course, the intellectual forebear of the software patterns movement, through his later books Notes On The Synthesis Of Form, The Timeless Way Of Building, A Pattern Language, and The Oregon Experiment.

UNI hired Behrens in 1972 into a temporary position that became permanent. He earned tenure and, fearing the lack of adventure that can come from settling down too soon, immediately left for the University of Wisconsin-Milwaukee. He worked there ten years and earned his tenure anew. It was at UW-M where he finally wrote the book he had begun planning in grad school. Looking back now, he is embarrassed by it and encouraged us not to read it!

At this point in the talk, Behrens told us a little about his area of scholarship. He opened with a meta-note about research in the era of the world wide web and Google. There are many classic papers and papers that scholars should know about. Most of them are not yet on-line, but one can at least find annotated bibliographies and other references to them. He pointed us to one of his own works, Art and Camouflage: An Annotated Bibliography, as an example of what is now available to all on the web.

Awareness of a paper is crucial, because it turns out that often we can find it in print -- even in the periodical archives of our own libraries! These papers are treasures unexplored, waiting to be rediscovered by today's students and researchers.

Camouflage consists of two primary types. The first is high similarity, as typified by figure-ground blending in the arts and mimicry in nature. This is the best known type of camouflage and the type most commonly seen in popular culture.

The second is high difference, or what is often called figure disruption. This sort of camouflage was one of the important lessons of World War I. We can't make a ship invisible, because the background against which it is viewed changes constantly. A British artist named Norman Wilkinson had the insight to reframe the question: We are not trying to hide a ship; we are trying to prevent the ship from being hit by a torpedo!

(Redefining one problem in terms of another is a standard technique in computer science. I remember when I first encountered it as such, in a graduate course on computational theory. All I had to do was find a mapping from a problem to, say, 3-SAT, and -- voilà! -- I knew a lot about it. What a powerful idea.)

This insight gave birth to dazzle camouflage, in which the goal came to be break an image into incoherent or imperceptible parts. To protect a ship, the disruption need not be permanent; it needed only to slow the attackers sufficiently that they were unable to target it, predict its course, and launch a relatively slow torpedo at it with any success.

a Gabon viper, which illustrates coincident disruption

Behrens offered that there is a third kind of camouflage, coincident disruption, that is different enough to warrant its own category. Coincident disruption mixes the other two types, both blending into the background and disrupting the viewer's perception. He suggested that this may well be the most common form of camouflage found in nature using the Gabon viper, pictured here, as one of his examples of natural coincident disruption.

Most of Behrens' work is on modern camouflage, in the 20th century, but study in the area goes back farther. In particular, camouflage was discussed in connection to Darwin's idea of natural selection. Artist Abbott Thayer was a preeminent voice on camouflage in the 19th century who thought and wrote on both blending and disruption as forms in nature. Thayer also recommended that the military use both forms of camouflage in combat, a notion that generated great controversy.

In World War I, the French ultimately employed 3,000 artists as "camoufleurs". The British and Americans followed suit on a smaller scale. Behrens gave a detailed history of military camouflage, most of which was driven by artists and assisted by a smaller number of scientists. He finds World War II's contributions less interesting but is excited by recent work by biologists, especially in the UK, who have demonstrated renewed interest in natural camouflage. They are using empirical methods and computer modeling as ways to examine and evaluate Thayer's ideas from over a hundred years ago. Computational modeling in the arts and sciences -- who knew?

Toward the end of his talk, Behrens told several stories from the "academic twilight zone", where unexpected connections fall into the scholar's lap. He called these the "unsung delights of researching". These are stories best told first hand, but they involved a spooky occurrence of Shelbyville, Tennessee, on a pencil he bought for a quarter from a vending machine, having the niece and nephew of Abbott Thayer in attendance at a talk he gave in 1987, and buying a farm in Dysart, Iowa, in 1992 only then to learn that Everett Warner, whom he had studied, was born in Vinton, Iowa -- 14 miles away. In the course of studying a topic for forty years, the strangest of coincidences will occur. We see these patterns whether we like to or not.

Behrens's closing remarks included one note that highlights the changes in the world of academic scholarship that have occurred since he embarked on his study of camouflage forty years ago. He admitted that he is a big fan of Wikipedia and has been an active contributor on pages dealing with the people and topics of camouflage. Social media and web sites have fundamentally changed how we build and share knowledge, and increasingly they are being used to change how we do research itself -- consider the Open Science and Polymath projects.

Today's talk was, indeed, the highlight of my week. Not only did I learn more about Behrens and his work, but I also ended up with a couple of books to read (the aforementioned Whyte book and Kimon Nicolaïdes's The Natural Way to Draw), as well as a couple of ideas about what it would mean for software patterns to hide something. A good way to spend an hour.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

July 16, 2009 11:36 AM

Lengthen, Then Strengthen

Update: I've added a link to a known use.

(A pattern I've seen in running that applies more broadly.)

You are developing a physical skill or an ability that will take you well beyond your current level of performance. Perhaps you are a non-runner preparing for a 5K, or a casual runner training for a marathon, or an experienced runner coming back from a layoff.

To succeed, you will need endurance, the ability to perform steadily over a long period. You will also need strength, the ability to perform at a higher speed or with greater power over a shorter period of time. Endurance enables you to last for an amount of time longer than usual. It requires you to develop your slow-twitch muscles and your aerobic capacity, which depends on effective delivery of oxygen to your muscles. Strength enables you to work faster or harder, such as uphill or against an irregular force. It requires you to develop your fast-twitch muscles and your anaerobic capacity, which depends on muscles working effectively in the absence of oxygen.

You might try to develop strength first. Strength training involves many repetitions of intense work done for short durations. When you are beginning your training, you can handle short durations more easily than long ones. The high intensity will be uncomfortable, but it won't last for long. This does not work very well. First, you won't be able to work at an intense enough level to train your muscles properly, which means that your training sessions will not be as effective as you'd hope. Second, because your muscles are still relatively weak, subjecting them to intense work even for short periods greatly increases the risk of injury.

You might try to develop strength and endurance in parallel. This is a strategy commonly tried by people who are in a hurry to reach a specific level of performance. You do longer periods of low-intensity work on some days and longer periods of high-intensity work on others. This strengthens both your slow- and fast-twitch muscles and allows you to piggyback growth in one area on top of growth in the other. Unfortunately, this does not work well, either. There is a small decrease in the risk of injury from your strength training, but not as much as you might think. Our bodies adapt to change rather slowly, which means that your muscles don't grow stronger fast enough to prepare them for the intensity of strength training. Even when you don't injure yourself, you increase the risk of plateauing or fatigue.

Therefore, build a strong aerobic base first. Train for several weeks or even months at a relatively low level of intensity, resting occasionally to give your body a chance to adapt. This will build endurance, with slower speed or less power than you might want, but also strengthen your muscles, joints, and bones. Only then add to your regimen exercises that build your anaerobic capacity through many repetitions of high-intensity, short-duration activities. These will draw on the core strength developed earlier.

Continue to do workouts focused on endurance. These will give your body a chance to recover from the higher intensity workouts and time to adapt to those stresses in the form of more speed or power. For all but the most serious athletes, one or two strength workouts a week are sufficient. Doing more increases the risk of injury, fatigue, or loss of interest in training. As in so many endeavors, steady, regular practice tends to be much more valuable than occasional or spotty practice. This is especially true when the goal requires a long period of preparation, such as a marathon.

Examples. Every training program I have ever seen for runners, from 5Ks up to marathons, emphasizes the need for a strong aerobic base before worrying about speed or other forms of power. This is especially true for beginners. Some beginners are eager to improve quickly and often don't realize how hard training can be on their bodies. Others fear that if they are not working "hard enough" they are not making progress. Low-intensity endurance training does work your body hard enough, just not in short bursts that make you strain.

Greg McMillan describes the usual form this pattern takes as well as a variation in Time To Rethink Your Marathon Training Program?. In its most common form, a runner first builds aerobic base, then works on strength in the form of hills and tempo runs, and finally works on speed. Gabriele Rosa, whom McMillan calls "arguably the world's greatest marathon coach", structures his training programs differently. He still starts his athletes with a significant period building aerobic base (Lengthen) followed by a period that develops anaerobic capability (Strengthen). But he starts the anaerobic phase with short track workouts that develop the runner's speed, down to 200m intervals, and only then has the runner move to strength workouts. Rosa's insight is that "the goal in marathon training is to fatigue the athlete with the duration of the workouts and not the speed, so speed needed to be developed first". This variation may not work well for runners coming back from injuries or who are otherwise prone to injury, because the speed workouts stress the body in a more extreme way than the tempo and cruise workouts of longer-distance strength work.

Lengthen, then Strengthen applies even to more experienced runners coming back from periods of little or no training. Many such runners assume that they can quickly return to the level they were at before the layoff, but the body will have adapted to the lower level of exertion and require retraining. Elite athletes returning from injury usually take several months to build their aerobic base before resuming hard training regimens.

I have written this pattern from the perspective of running, but it applies to other physical activities, too, such as biking and swimming. The risk of injury in some sports is lower than in running, due to less load on muscles, joints, and bones, but the principles of endurance and strength are the same.

Related Ideas. I think this pattern is also present in some forms of learning. For example, it is useful to build attention span and vocabulary when learning a new discipline before trying to build critical skills or deep expertise. The gentler form of learning provides a base of knowledge that is required for expert analysis or synthesis.

I realize that this application of the pattern is speculative. If you have any thoughts about it, or the pattern more generally, please let me know.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Running

July 13, 2009 3:23 PM

Patterns as Compression Technology

In trying to understand the role patterns and pattern languages play both in developing software and in learning to develop software, I often look for different angles from which to look at patterns. I've written about the idea of patterns as descriptive grammar and the idea of patterns as a source of freedom in design. Both still seem useful to me as perspectives on patterns, and the latter is among the most-read articles on my blog. The notion of patterns-as-grammar also relates closely to one of the most commonly-cited roles that patterns play for the developer or learner, that of vocabulary for describing the meaningful components of a program.

This weekend, I read Brian Hayes's instructive article on compressive sensing, The Best Bits. Hayes talks about how it is becoming possible to imagine that digital cameras and audio recorders could record compressed streams -- say, a 10-megapixel camera storing a 3MB photo directly rather than recording 30MB and then compressing it after the fact. The technique he calls compressive sensing is a beautiful application of some straightforward mathematics and a touch of algorithmic thinking. I highly recommend it.

While reading this article, though, the idea of patterns as vocabulary came to mind in a new way, triggered initially by this passage:

... every kind of signal that people find meaningful has a sparse representation in some domain. This is really just another way of saying that a meaningful signal must have some structure or regularity; it's not a mere jumble of random bits.

an optical illusion -- can you see it?

Programs are meaningful signals and have structure and regularity beyond the jumble of seemingly random characters at the level of the programming language. The chasm between random language stuff and high-level structure is most obvious when working with beginners. They have to learn that structure can exist and that there are tools for creating it. But I think developers face this chasm all the time, too, whenever they dive into a new language, a new library, or a new framework. Where is the structure? Knowing it is there and seeing it are two different matters.

The idea of a sparse representation is fundamental to compression. We have to find the domain in which a signal, whether image or sound, can be represented in as few bits as possible while losing little or even none of the signal's information. A pattern language of programs does the same thing for a family of programs. It operates at a level (in Hayes' terms, in a domain) at which the signal of the program can be represented sparsely. By describing Java's I/O stream library as a set of decorators on a set of concrete streams, we convey a huge amount of information in very few words. That's compression. If we say nothing else, we have a lossy compression, in that we won't be able to reconstruct the library accurately from the sparse representation. But if we use more patterns to describe the library (such as Abstract Class and "Throw, Don't Catch"), we get a representation that pretty accurately captures the structure of the library, if not the bit-by-bit code that implements it.
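
To see how much the pattern name compresses, consider a sketch in Scheme rather than Java. Suppose we model a stream as a thunk that yields its next character; the names and the thunk-based interface here are only for illustration. Saying "upcase-stream is a decorator" tells you the shape of the code before you read a line of it:

  ;; a concrete stream: yields the characters of a string, then eof
  (define string->stream
    (lambda (s)
      (let ((i 0))
        (lambda ()
          (if (< i (string-length s))
              (let ((c (string-ref s i)))
                (set! i (+ i 1))
                c)
              (eof-object))))))         ; eof-object is R7RS

  ;; a decorator: wraps any stream and presents the same interface
  (define upcase-stream
    (lambda (in)
      (lambda ()
        (let ((c (in)))
          (if (eof-object? c) c (char-upcase c))))))

A decorated stream is used exactly like a plain one: ((upcase-stream (string->stream "abc"))) yields #\A.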

This struck me as a useful way to think about what patterns do for us. If you've seen other descriptions of patterns as a means for compression, I'd love to hear from you.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 20, 2009 4:26 PM

Bright Lines in Learning and Doing

Sometimes it pays to keep reading. Last time, I commented on breaking rules and mentioned a thread on the XP mailing list. I figured that I had seen all I needed there and was on the verge of skipping the rest. Then I saw a message from Laurent Bossavit and decided to read. I'm not surprised to learn something from Laurent; I have learned from him before.

Laurent's note introduced me to the legal term bright line. In the law, a bright-line rule is...

... a clearly defined rule or standard, composed of objective factors, which leaves little or no room for varying interpretation. The purpose of a bright-line rule is to produce predictable and consistent results in its application.

As Laurent says, "Bright lines are important in situations where temptations are strong and the slope particularly steep; a well-known example is alcoholics' high vulnerability to even small exceptions." Test-driven development, or even writing tests soon after code and thus maintaining a complete suite of automated tests, requires a bright line for many developers. It's too easy to slide back into old habits, which for most developers are much older and stronger. Staying on the right side of the line may be the only practical way to Live Right.

This provides a useful name for what teachers often do in class: create bright lines for students. When students are first learning a new concept, they need to develop a new habit. A bright-line rule -- "Thou shalt always write a test first." or "Thou shalt write no line of code outside of a pair." -- removes from the students' minds the need to make a judgment that they are almost always not prepared to make yet: "Is this case an exception?" While learning, it's often better to play Three Bears and overdo it. This gives your mind a chance to develop good judgment through experience.

(For some reason, I am reminded of one way that I used to learn to play a new chess opening. I'd play a bazillion games of speed chess using it. This didn't train my mind to think deeply about the positions the opening created, but it gave me a bazillion repetitions. I soon learned a lot of patterns that allowed me to dismiss many bad alternatives and focus my attention on the more interesting positions.)

I often ask students to start with a bright line, and only later take on the challenge of a balancing test. It's better to evolve toward such complexity, not try to start there.

The psychological benefits of a bright-line test are not limited to beginners. Just as alcoholics have to hold a hard line and consider every choice consciously every day, some of us need a good "Thou shalt..." or "Thou shalt not..." in certain cases. As much as I like to run, I sometimes have to force myself out of bed at 5:00 AM or earlier to do my morning work-out. Why not just skip one? I am a creature of habit, and skipping even one day makes it even harder to get up the next, and the difficulty grows until I have a new habit.

(This has been one of the most challenging parts of trying to get back up to my old mileage after several extended breaks last year. I am proud finally to have done all five of my morning runs last week -- no days off, no PM make-ups. A new habit is in formation.)

If you know you have a particular weakness, draw a bright line for yourself. There is no shame in that; indeed, I'd say that it shows professional maturity to recognize the need and address it. If you need a bright line for everything, that may be a problem...

Sometimes, I adopt a bright line for myself because I want everyone on the team to follow a practice. I may feel comfortable exercising judgment in the gray area but not feel the rest of the team is ready. So we all play by the rules rather than discuss every possible judgment call. As the team develops, we can begin having those discussions. This is similar to how I teach many practices.

This may sound too controlling to you, and occasionally a student will say as much. But nearly everyone in class benefits from taking the more patient road to expertise. Again, from Laurent:

Rules which are more ambiguous and subtle leave more room for various fudge factors, and that of course can turn into an encouragement to fudge, the top of a slippery slope.

Once learners have formed their judgment, they are ready to balance forces. Until then, most are more likely to backslide out of habit than to make an appropriate choice to break the rule. And time spent arguing every case before they are ready is time not spent learning.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 18, 2009 8:58 PM

Practice and Dogma in Testing

Shh.

I have a secret.

When I am writing a program, I will on occasion add a new piece of functionality without writing a test.

I am whispering because I have seen the reaction that Kent Beck received on the XP mailing list and on a number of blogs to his recent article, To Test or Not to Test? That's a Good Question. In this short piece, Kent describes his current thinking that, like golf, software development may have a "long game" and a "short game", which call for different tools and especially different mentalities. One of the differences might be whether one is willing to trade automated testing for some other value, such as delivering a piece of software sooner.

Note that Kent did not say that in the long game he chooses not to test his code; he simply tested manually. He also didn't say that he plans never to write the automated tests he needs later; he said he would write them later, either when he has more time or, perhaps, when he has learned enough to turn 8 hours of writing a test into something much shorter.

Many people's public reactions to Kent's admission have been along these lines: "We trust you to make this decision, Kent, but we don't trust everyone else. And by saying this is okay, you will contribute to the delinquency of many programmers." Now you know why I need to whisper... I am certainly not in the handful of programmers so good that these folks would be willing to excuse my apostasy. Kent himself is taking a lot of abuse for it.

I have to admit that Kent's argument doesn't seem that big a deal to me. I may not agree with everything he says in his article, but at its core he is claiming only that there is a particular context in which programmers might choose to use their judgment and not write tests before or immediately after writing some code. Shocking: A programmer should use his or her judgment in the course of acting professionally. Where is the surprise?

One of the things I like about Kent's piece is that he helps us to think about when it might be useful to break a particular rule. I know that I'll be breaking rules occasionally, but I often worry that I am surrendering to laziness or sloppiness. Kent is describing a candidate pattern: In this context, with these goals, you are justified in breaking this rule consciously. We are balancing forces, as we do all the time when building anything. We might disagree with the pattern he proposes, but I don't understand why developers would attack the very notion of making a trade-off that results in breaking a rule.

In practice, I often play a little loose with the rules of XP. There are a variety of reasons that lead me to do so. Sometimes I pay for not writing a test, and when I do I reflect on what about the situation made the omission so dangerous. If the only answer I can offer is "You must write the test, always.", then I worry that I have moved from behaving like a professional to behaving like a zealot. I suspect that a lot of developers make similar trade-offs.

I do appreciate the difficulty this raises for those of us who teach XP, whether at universities or in industry. If we teach a set of principles as valuable, what happens to our students' confidence in the principles when we admit that we don't follow the rules slavishly? Well, I hope that my students are learning to think, and that they realize any principle or rule is subject to our professional judgment in any given circumstance.

Of course, in the context of a course, I often ask students to follow the rules "slavishly", especially when the principles in question require a substantial change in how they think and behave. TDD is an example, as is pair programming. More broadly, this idea applies when we teach OOP or functional programming or any other new practice. (No assignment statements or sequences until Week 10 of Programming Languages!) Often, the best way to learn a new practice is to live it for a while. You understand it better then than you can from any description, especially how it can transform the way you think. You can use this understanding later when it comes to apply your judgment about potential trade-offs.

Even still, I know that, no matter how much an instructor encourages a new practice and strives to get students to live inside it for a while, some students simply won't do it. Some want to but struggle changing their habits. I feel for them. Others willfully choose not to try the something new and deny themselves the opportunity to grow. I feel for them, too, but in a different way.

Once students have had a chance to learn a set of principles and to practice them for a while, I love to talk with them about choices, judgment, and trade-offs. They are capable of having a meaningful discussion then.

It's important to remember that Kent is not teaching novices. His primary audience is professional programmers, with whom he ought to be able to have a coherent conversation about choices, judgment, and trade-offs. Fortunately, a few folks on the XP list have entertained the "long game versus short game" claim and related their own experiences making these kinds of decisions on a daily basis.

If we in the agile world rely on unthinking adherence to rules, then we are guilty of proselytizing, not educating. Lots of folks who don't buy the agile approaches love it when they see examples of this rigidity. It gives them evidence to support their tenuous position about the whole community. From all of my full-time years in the classroom, I have learned that perhaps the most valuable asset I can possess is my students' trust in my goals and attitudes. Without that, little I do is likely to have any positive effect on them.

Kent's article has brought to the surface another choice agilistas face most every day: the choice between dogma and judgment. We tend to lose people when we opt for unthinking adherence to a rule or a practice. Besides, dogmatic adherence is rarely the best path to getting better every day at what we do, which is, I think, one of the principles that motivate the agile methods.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 15, 2009 8:30 PM

Robert's Rules of Order and Agile Forces

I am coming to a newfound respect for Robert's Rules of Order these days. I've usually shied away from that level of formality whenever chairing a committee, but I've experienced the forces that can drive a group in that direction.

For the last year, I have been chairing a campus-wide task force. Our topic is one on which there are many views on campus and for which there is not currently a shared vision. As a result, we all realized that our first priority was communication: discussing key issues, sharing ideas, and learning what others thought. I'll also say that I have learned a lot about what I think from these discussions. I've learned a lot about the world that lies outside of my corner of campus.

With sharing ideas and building trust as our first goals, I kept our meetings as unstructured as possible, even allowing conversations to drift off topic at times. That turned out well sometimes, when we came to a new question or a new answer unexpectedly.

We are nearing the end of our work, trying to reach closure on our descriptions and recommendations. This is when I see forces pushing us toward more structure. It is easy to keep talking, to talk around a decision so much that we find ourselves doubting a well-considered result, or even contradicting it. At this point, we usually cover well-trod ground. A little formality -- motion, second, discussion, vote, repeat -- may help. At least I now have some firsthand experience of what might have led Mr. Robert to define his formal set of rules.

It occurs to me that Robert's Rules are a little like the heavyweight methodologies we often see in the software development world. We agile types are sometimes prone to look down on big formal methodologies as obviously wrong: too rigid, too limiting, too unrealistic. But, like the Big Ball of Mud, these methodologies came into being for a reason. Most large organizations would like to ensure some level of consistency and repeatability in their development process over time. That's hard to do when you have 100 or 1,000 architects, designers, programmers, and testers. A natural tendency is to formalize the process in order to control it more closely. If you think you value safety more than discovery, or if you think you can control the rate of change in requirements, then a big process looks pretty attractive.

Robert's Rules look like a solution to a similar problem. In a large group, the growth in communication overhead can outpace the value gained by lots of free-form discussion. As a group grows larger, the likelihood of contention grows as well, and that can derail any value the group might gain from free-form discussion. As a group reaches the end of its time together, free-form discussion can diverge from consensus. Robert's Rules seek to ensure that everyone has a chance to talk, but that the discussion more reliably reaches a closing point. They opt for safety and lowering the risk of unpredictability, in lieu of discovery.

Smaller teams can manage communication overhead better than large ones. This is one of the key ideas behind agile approaches to software development: keep teams small so that they can learn at the same time they are making steady progress toward a goal. Agile approaches can work in large organizations, too, but developers need to take into account the forces at play in larger and perhaps more risk-averse groups. That's where the sort of expertise we find in Jutta Eckstein's Agile Software Development in the Large comes in so handy.

While I sense the value of running a more structured meeting now, I don't intend to run any of my task force or faculty meetings using Robert's Rules any time soon. But I will keep in mind the motivation behind them and try to act in the spirit of a more directed discussion when necessary. I would rather still value people and communication over rules and formalisms, to the greatest extent possible.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Patterns, Software Development

April 12, 2009 6:46 PM

Language Driving Programming

William Stafford's Writing the Australian Crawl includes several essays on language, words, and diction in poetry. Words and language -- he and others say -- are co-authors of poems. Their shapes and sounds drive the writer in unexpected ways and give rise to unexpected results, which are the poems that they needed to write, whatever they had in mind when they started. This idea seems fundamental to the process of creation for most poets.

We in CS think a lot about language. It is part of the fabric of our discipline, even when we don't deal in software. Some of us in CS education think and talk way too much about programming languages: Pascal! Java! Ada! Scheme!

But even if we grant that there is art in programming and programs, can we say that language drives us as we build our software? That language is the co-author of our programs? That its words and shapes (and sounds?) drive the programmer in unexpected ways and give rise to unexpected results, which are the programs we need to write, whatever we have in mind when we start? Can the programmer's experience resemble in any way the poet's experience that Stafford describes?

[Language] begins to distort, by congealing parts of the total experience into successive, partially relevant signals.... [It] begins to enhance the experience because of a weird quality of language: the successive distortions of language have their own cumulative potential, and under certain conditions the distortions of language can reverberate into new experiences more various, more powerful, and more revealing than the experiences that set off language in the first place.

Successive distortions with cumulative potential... Programmers tend not to like it when the language they use, or must use, distorts what they want to say, and the cumulative effects of such distortions in a program can give us something that feels cumbersome, feels wrong, is wrong.

Still... I think of my experiences coding in Smalltalk and Scheme, and recall hearing others tell similar tales. I have felt Smalltalk push me towards objects I wasn't planning to write, even objects of a kind I had previously been unaware of. Null objects, and numbers as control structures; objects as streams of behavior. Patterns of object-oriented programs often give rise to mythical objects that don't exist in the world, which belies OOP's oft-stated intention to build accurate models of the world. I have felt Scheme push me toward abstractions I did not know existed until just that moment, abstractions so abstract that they make me -- and many a programmer already fearful of functional style -- uncomfortable. Yet it is simply the correct code to write.

For me: Smalltalk and Lisp and Scheme, yes. Maybe Ruby. Not Java. C?

Is my question even meaningful? Or am I drowning in my own inability to maintain suitable boundaries between things that don't belong together?


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

January 22, 2009 4:05 PM

A Story-Telling Pattern from the Summit

At the Rebooting Computing Summit, one exercise called for us to interview each other and then report back to the group about the person we interviewed. The reports my partner and I gave, coupled with some self-reported experiences later in the day, reminded me of a pattern I've experienced in other contexts. Here is a rough first draft. Let me know what you think.

Second-Hand Story

When we need to know a person's story, our first tendency is often to ask him to tell us. After all, he knows it best, because he lived it. He has had a chance to reflect on it, to reconsider decisions, and to evaluate what the story "means".

This approach can disappoint us. Sometimes, the person is too close to the experience and attaches accidental emotions and details. Sometimes, even though he has had a chance to reflect on his experience, he hasn't reflected enough -- or perhaps not at all! Telling the story may be the first time he has thought about some of those experiences in a long time. While trying to tell the story and summarize its meaning at the same time, the storyteller may reach for an easily-found answer. The result can be trite, convenient, or self-protective. Maybe the person is simply too close to an experience to see its true meaning.

Therefore, ask the person to tell his story to someone else, focusing on "just the facts". Then, ask the interviewer to tell the story, perhaps in summary form. Let the interviewer and the listeners look for patterns and themes.

The interviewer has distance and an opportunity to listen objectively. She is less likely to impose well-rehearsed personal baggage over the story.

The result can still be trite. If the listener does not listen carefully, or is too eager to stereotype the story, then the resulting story may well be worse than the original, because it is not only sanitized but sanitized by someone without intimate connection to it.

It can be refreshing to hear someone else tell your own story, to draw conclusions, to summarize what is most important. A good listener can pick up on essential details and remove the shroud of humility or disappointment that too often obscures your own view. You can learn something about yourself!

This technique depends on two people's ability to tell a story, not one. The original story-teller must be open, honest, and willing to describe situations more than evaluate them. (A little evaluation is unavoidable and also useful. The listener learns something about the story-teller from that part of the story, too.) The interviewer must be a careful listener and have a similar enough background to be able to put the story into context and form reasonable abstractions about it.

Examples. I found the interviewers' reports at the Rebooting Computing Summit to be insightful, including the ones that Joe Carthy and I gave on one another. Hearing someone else "tell my story" let me hear it more objectively than if I had told it myself. Occasionally I felt like chiming in to correct or add something, but I'm not sure that anything I might have said could have done a better job introducing myself to the rest of the group. Something Joe said during his interview of me made me think more about just how my non-CS profs helped lead me into CS, something I had never thought much about before that.

Later that day, we heard several self-reported stories, and those stories -- told by the same people who had reported on others earlier -- sounded flat and trite. I kept thinking, "What's the real story?" Maybe someone else could have told it better!

Related Ideas. I am reminded of several techniques I learned as an architecture major while studying Betty Edwards's Drawing on the Right Side of the Brain:

  • drawing a chair by looking at the negative space around it
  • drawing a picture of Igor Stravinsky upside down, in order to trick the visualization mechanism in our minds that jumps right to stereotypes
  • drawing a paper wad, which matched no stereotyped form already in our memories

This pattern and these exercises are all examples of techniques for indirect learning. This is perhaps the first time I realized just how well indirect learning can work in a social setting.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

December 28, 2008 9:34 PM

Small Surprises While Grading

Early last week, I spent the last couple of days before Christmas wrapping up the grades on my Programming Languages course for fall semester. While grading the final exam, I found myself surprised by something on almost every problem. Here are a few that stand out:

cons and list

... are not the same. We spent some time early in the semester looking at how cons allocates a single new cell, and list allocates one cell per argument. Then we used them in a variety of ways throughout the rest of the course. After fifteen weeks programming in Scheme, how can so many people confuse them?
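
A quick sketch of the difference (my example, not the exam's):

    (cons 1 2)        ; => (1 . 2)  -- allocates exactly one new cell
    (cons 1 '(2 3))   ; => (1 2 3)  -- still just one new cell
    (list 1 2 3)      ; => (1 2 3)  -- allocates three cells, one per argument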

Overuse of accumulator variables

... is endemic to undergraduate students learning to program functionally. Two of the exam problems asked for straightforward procedures following the structural recursion pattern. These problems were about the simplest examples of structural recursion you can find: The largest value in a binary tree is the larger of

  • the largest value in the left subtree, and
  • the largest value in the right subtree.

The zip of two lists is a list with a list of their cars consed into the zip of their cdrs. Many students used an accumulator variable to solve both problems. Some succeeded, with unnecessarily complex code, and some solutions buckled under the weight of the complexity.
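
For the record, here is a sketch of both solutions in the plain structural style, with no accumulator. The tree representation -- a leaf is a number, an interior node a two-element list of subtrees -- is my assumption, not the exam's.

    (define (tree-max tree)
      (if (number? tree)                  ; a leaf is its own largest value
          tree
          (max (tree-max (car tree))      ; largest value in the left subtree
               (tree-max (cadr tree)))))  ; largest value in the right subtree

    (define (zip lst1 lst2)
      (if (or (null? lst1) (null? lst2))
          '()
          (cons (list (car lst1) (car lst2))    ; a list of their cars...
                (zip (cdr lst1) (cdr lst2)))))  ; ...consed into the zip of their cdrs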

Habits are hard to break. I have colleagues who tell me that OOP is easy. I look at their code and say, "yes, but...." The code isn't really OO; it just bears the trappings of classes and methods. Sequential, imperative programming habits run deep. An accumulator variable is often a crutch used by a sequential programmer to avoid writing a functional solution. I see that in my own code occasionally -- the accumulator is as often a code smell as a solution.

At a time when the world is looking to parallel computing in a multicore world, we need to find a way to change the habits programmers form, either by teaching functional programming better or by teaching functional programming sooner so that students form different habits.

Scheme procedure names

... are different from primitive procedure names in most other languages. They are bound to their values just like every other symbol. They can be re-bound, either locally or globally, using the same mechanisms used to bind values to any other names. This means that in this code:

    (lambda (f g)
      (lambda (x)
        (+ (f x) (g x))))

the + symbol is a free variable, bound at the top level to the primitive addition operator. After we talked about this idea several times through the semester, I threw the students a bone on the final with a question that asked them to recognize the + symbol as a free variable in a piece of code just like this one. The bone sailed past most of them.
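
A two-line sketch -- mine, not the exam's -- shows that + really is an ordinary variable: rebind it, and the same code computes something else.

    (let ((+ *))                  ; locally rebind + to multiplication
      ((lambda (x) (+ x x)) 5))   ; => 25, because + is looked up like any other symbol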

Bound and free variables

... remain a tough topic for students to grasp, at least from my teaching. We spent several days in class talking about the idea of bound and free variables, and then writing code that could check a piece of code for bound and free variables. One of those sessions made a point of showing that "occurs bound" does not mean "does not occur free", and that "occurs free" does not mean "does not occur bound". For one thing, a variable could occur both bound and free in the same piece of code. For another, it might not occur at all! Yet when a final exam problem asked students to define an occurs-bound? procedure, several of them wrote the one-liner (not (occurs-free? x exp)). If only they knew how close they were... But they wrote that one-liner without understanding.
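
A one-expression example, invented for this note, shows why the one-liner fails:

    ;; x occurs bound (the x in the lambda's body) and also free (the
    ;; final x) in this single expression, so occurs-bound? cannot be
    ;; the negation of occurs-free?.
    ((lambda (x) x) x)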

Syntactic abstraction

... is an idea that befuddles many of my students even after half a semester in which we work with it. Our Quiz 3 is tough for many of the students; it is often their lowest quiz grade of the course. In past semesters, though, students seemed to go home after being disappointed with their Quiz 3 score, hit the books, and come away with some understanding. This semester, several students came to the final exam with the same hole in their knowledge -- including students with two of the top three scores for the course. This saddens and disappoints me.

I can't do much to ensure that students will care enough to hit the books to overcome their disappointments, but I can change what I do. The next time I teach this course, I will probably have them start by working with for and while constructs in their own favorite languages. Maybe by stripping away the functional programming wrapping and the Scheme code we use to encounter these ideas, they will feel comfortable in a more familiar context and see that the idea of syntactic abstraction is really quite simple.
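
One possible shape for that exercise, sketched here in Scheme rather than in the students' favorite languages: a while construct defined as a syntactic abstraction over recursion. This macro is my own sketch, not course code.

    (define-syntax while
      (syntax-rules ()
        ((_ test body ...)
         (let loop ()
           (if test
               (begin body ... (loop)))))))

    (define n 5)
    (while (> n 0)       ; displays 54321
      (display n)
      (set! n (- n 1)))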

Postlude

Am I romanticizing the good old days, when men were men and all students went home and learned it all? Maybe a little, but I had a way to ground my nostalgia. I went back and checked the grades students earned in recent offerings of this course, which had very much the same content and structure. The highest score this semester was higher than the top score in the two most recent offerings, by 2-4%. Overall, though, grades were lower. In fact, after the top score, the entire group had shifted down a whole letter grade. I don't think the students of the recent past were that much smarter or better prepared than this group, but I do think they had different attitudes and expectations.

One piece of evidence for this conclusion was that this semester there were far more 0s in the final grid of grades. I even had a couple of 0s on quizzes, where students simply failed to show up. This is, of course, much worse than simply scoring lower, because one thing students can control is whether they do their work and submit assignments. As I wrote recently, each person must control what he or she can control. I am not sure how best to get juniors and seniors in college to adopt this mindset, but maybe I'll need to bring Twyla Tharp -- or Bobby Knight -- into my classroom.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

October 09, 2008 5:53 PM

I Got Nowhere Else To Go

Some days, things go well, beyond expectation. Enjoy them! Today was one for me.

I've been thinking a lot about how students learn a new style of programming or a language that is quite different from their experience. Every class has its own personality, which includes interaction style, interest in Big Ideas, and curiosity. Last night it occurred to me that another important part of that personality is trust.

I was grading a quiz and suddenly felt a powerful personal connection to Gunnery Sergeant Foley from one of my favorite movies, An Officer and a Gentleman. There is a scene halfway through the film when he catches the protagonist, Zack Mayo, running an illegal contraband operation out of his barracks. The soldiers are in their room one afternoon when Foley walks in and declaims, "In every class, there's always one guy who thinks he's smarter than me. In this class, that's you, Mayo." He then dislodges a ceiling tile to reveal Mayo's stash of contraband and lets everyone know the jig is up.

Sergeant Foley breaking Mayo down

Beyond the occasional irrational desire I have to be Lou Gossett breaking the spirits of cocky kids and building them back up from scratch, while grading solutions to a particular exam problem I couldn't help but think, "In every class, there's always one guy who thinks he's smarter than me..." Some of the students seemed to be going out of their way not to use the technique we had learned in class, which resulted in them writing complex, often incorrect code. More practically for them, they ended up writing more code than they needed, which cost them extra time they didn't have the luxury of spending. I felt bad for them grade-wise, but also a little sad that they seemed to have missed out on the beautiful idea behind the programming pattern they were not using.

(Don't worry, class. This irrational desire of mine is fleeting. I don't want your DOR. Quite the contrary; I am looking for ways to help you succeed!)

Sometimes, I wonder if the problem is that students don't really trust me. Why should they? Sure, I'm the teacher, but they feel pretty good about their programming skills, and the patterns I show them may be different and complex enough that they'd rather trust their own skills than my claim that, say, mutual recursion makes life better. They'll learn that with enough experience, and then they may realize that they can trust me after all.

In many ways, though, a bigger part of the problem may be a failure of storytelling. On my side are the stories I tell to engage students in an idea and its use. To paraphrase Merlin Mann paraphrasing Cliff Atkinson, I need to tell a story that makes the students feel like a character with a problem they care about and then show how our new way of solving their problem -- their problem -- makes them winners in the end. I think I do a better job of this now than I did ten years ago in this course, but I always wonder how I can do better.

On their side is, perhaps, a failure of their own storytelling -- not just about bugs, as Guzdial writes, but about the problem domain itself, the data types at play, and the kind of problem they are solving. I suspect writing code over nested symbolic lists that represent programs is so different from the students' experience that many of them have a hard time getting a real sense of what is going on. As long as the domain and task remain completely abstract in the mind, the problems look almost like random markings on the page. Where to start? That disorientation may account for not starting in what seems to me to be the obvious location.

As a teacher, failures in their storytelling become failures in my storytelling. I need to reconsider how I communicate the "big picture" behind my course. Asking students to create their own examples is one micro-step in this direction. But I also need to think about the macro-level -- something like XP's notion of metaphor. That practice has proved to be a stumbling block for XP, and I expect that it will remain a challenge for me.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

September 23, 2008 6:53 PM

Shut Up. Better Yet, Ask a Question.

On the way out of class today, I ran into the colleague who teaches in the room after me. I apologized for being slow to get out of the room and told him that I had talked more than usual today. From the looks on the faces of my students, I gathered that they needed a bit more. What they really needed was more time with the same material. Most of all, they needed me to slow down -- rather than cover more material, they needed a chance to think more about what they had just learned. My way of doing that was to keep talking about the current example.

I told my colleague that there is probably a pedagogical pattern called Shut Up. And if not, then maybe there should be.

He said that the real pattern is Ask a Question.

I bowed down to him.

We talked a bit more, about how we both desire to use the Ask a Question pattern more often. We don't, out of habit and out of convenience. Professors lecture. It's what we do. The easiest thing to do is almost always: just keep talking, saying what I had planned to say.

I give myself some credit for how I ended class today. At the very least, I realized that I should not introduce new material. I was able to Let the Plan Go [1].

Better than sticking to a plan that is off track for my students is to keep talking, but about the same stuff, only in a different way. This can sometimes be good. It gives me a chance to show students another side of the same idea, so that they might understand the idea better by seeing it from different perspectives.

Is Shut Up better than that? Sometimes. There are times when students just need... time -- time for the idea to sink in, time to process.

Is Ask a Question better still? Yes, in most cases. Even if I show students an idea, rather than telling them something, they remain largely passive in the process. Asking a question engages them in the idea. More and different parts of their brain can go to work. Most everything we know about how people learn says that this is A Good Thing.

Now, I do give myself a little credit here, too. I know about the Active Student pattern [2] and have changed my habits slowly over time. I try to toss in a question for students every now and then, if only to shut myself up for a while. But my holding pattern today probably didn't use enough questions. I was under time pressure (class is almost over!) and didn't have the presence of mind to turn the last few minutes into an exercise. I hope to do better next time.

~~~~~

[1] You can read the Let the Plan Go pattern in Seminars, an ambitious pattern language by Astrid Fricke and Markus Völter.

[2] The Active Student pattern is documented in Joe Bergin's paper "Some Pedagogical Patterns". There is a lot of good stuff in this one!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

September 19, 2008 5:12 PM

Design Creates People, Not Things

The latest issue of ACM's on-line pub Ubiquity consists of Chauncey Bell's My Problem with Design, an article that first appeared on his blog a year ago. I almost stopped reading it early on, distracted by other things and not enamored with its wordiness. (I'm one to talk about another writer's wordiness!) I'm glad I read the whole article, because Bell has an inspiring take on design for a world that has redefined the word from its classic sense. He echoes a common theme of the software patterns and software craftsmanship crowd, that in separating design from the other tasks involved in making an artifact we diminish the concept of design, and ultimately we diminish the quality of the artifact thus made.

But I was especially struck by these words:

The distinctive character of the designer shapes each design that affects us, and at the same time the designer is shaped by his/her inventions. Successful designs shape those for whom they are designed. The designs alter people's worlds, how they understand those worlds, and the character and possibilities of inhabiting those worlds. ...

Most of our contemporaries tell a different story about designing, in which designers fashion or craft artifacts (including "information") that others "use." One reason that we talk about it this way, I think, is that it can be frightening to contemplate the actual consequences of our actions. Do we dare speak a story in which, in the process of designing structures in which others live, we are designing them, their possibilities, what they attend to, the choices they will make, and so forth?

(The passage I clipped gives the networked computer as the signature example of our era.)

Successful designs shape those for whom they are designed. In designing structures for people, we design them, their possibilities.

I wonder how often we who make software think this sobering thought. How often do we simply string characters together without considering that our product might -- should?! -- change the lives of its users? My experience with software written by small, independent developers for the Mac leads me to think that at least a few programmers believe they are doing something more than "just" cutting code to make a buck.

I have had similar feelings about tools built for the agile world. Even if Ward and Kent were only scratching their own itches when they built their first unit-testing framework in Smalltalk, something tells me they knew they were doing more than "making a tool"; they were changing how they could write Smalltalk. And I believe that Kent and Erich knew that JUnit would redefine the world of the developers who adopted it.

What about educators? I wonder how often we who "design curriculum" think this sobering thought. Our students should become new people after taking even one of our courses. If they don't, then the course wasn't part of their education; it's just a line on their transcripts. How sad. After four years in a degree program, our students should see and want possibilities that were beyond their ken at the start.

I've been fortunate in my years to come to know many CS educators for whom designing curriculum is more than writing a syllabus and showing up 40 times in a semester. Most educators care much more than that, of course, or they would probably be in industry. (Just showing up out there pays better than just showing up around here, if you can hold the gig.) But even if we care, do we really think all the time about how our courses are creating people, not just degree programs? And even if we think this way in some abstract way, how often do we let it seep down into our daily actions? That's tough. A lot of us are trying.

I know there's nothing new here. Way back, I wrote another entry on the riff that "design, well done, satisfies needs users didn't know they had". Yet it's probably worth reminding ourselves about this every so often, and to keep in mind that what we are doing today, right now, is probably a form of design. Whose world and possibilities are we defining?

This thought fits nicely with another theme among some CS educators these days, context. We should design in context: in the context of implementation and the other acts inherent in making something, yes, but also in the context of our ultimate community of users. Educators such as Owen Astrachan are trying to help us think about our computing in the context of problems that matter to people outside of the CS building. Others, such as Mark Guzdial, have been preaching computing in context for a while now. I write occasionally on this topic here. If we think about the context of our students, as we will if we think of design as shaping people, then putting our courses and curricula into context becomes the natural next step.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

September 10, 2008 2:59 PM

"Yes, We Do That"

When I was in grad school, my advisor sent me to a series of swank conferences on expert systems in business, finance, and accounting. Among the things these conferences did for me was to give me a chance to stay at Ritz Carlton hotels. This Midwestern boy had never been treated so well.

At the 1990 conference, I heard a talk by Gary Ribar from KPMG Peat Marwick, one of the Big Six accounting and consulting firms of the time. Ribar described LoanProbe, a program that evaluated the collectibility of commercial loans. LoanProbe was a rule-based system organized in a peculiar way, with its 9000 rules divided into thirty-three separate "knowledge bases". It was a significant application that interacted with sixty external programs and two large databases. Peat Marwick took LoanProbe a step further and used its organizational technique to build a knowledge acquisition program that enabled non-programmers to create systems with a similar structure.

I was so excited. I recognized this technique as what we in our lab called structured matching, a remarkably common and versatile pattern in knowledge-based systems. LoanProbe looked to me like the largest documented application of structured matching, which I was working on as a part of my research. Naturally, I wanted to make a connection to this work and share experiences with the speaker.

After the talk, I waited in line to speak with him. When my turn came, I gushed that their generic architecture was very cool and that "we do something just like that in our lab!". I expected camaraderie, but all I received back was an icy stare, a curt response, and an end to the conversation.

I didn't understand. Call me naive. For a while, I wondered if Mr. Ribar was simply an unfriendly guy. Then I realized that he probably took my comment not as a compliment -- We do that, too! -- but as a claim that the work he described was less valuable because it was not novel. I realized that I had violated one of the basic courtesies of research by telling him that his work was known already.

These days, I think fondly of Ribar, that talk, and that conference. He was behaving perfectly reasonably, given the culture in which he worked. Novelty is prized.

A few years after that conference, I came across PLoP and the software patterns community. This group of people valued discovering, documenting, and sharing common solutions, the patterns that make our software and our programming lives better. Structured Matcher did that, and it appeared in programs from all sorts of domains.

Jeff Patton has described this feature of the patterns community nicely in his article on emerging best practices in user experience:

If you tell someone a great idea, and they say "Yes, we do something like that too!", that's a pattern.

Documenting and sharing old, proven solutions that expert practitioners use may not get you a publication in the best research journal (though it might, if you are persistent and fortunate), but it will make the world better for programmers and software users. That is valuable, too.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

September 09, 2008 6:24 PM

Language, Patterns, and Blogging

My semester has started with a busy bang, complicated beyond the usual by a colleague's family emergency, which has me teaching an extra course until he returns. The good news is that my own course is programming languages, so I am getting to think about fun stuff at least a couple of days a week.

Teaching Scheme to a typical mix of eager, indifferent, and skeptical students brought to mind a blog entry I read recently on Fluent Builders in Java. This really is a neat little design pattern for Java or C++ -- a way to make code written in these languages look and feel so much better to the reader. But looking at the simple example:

Car car = Car.builder()
   .year(2007)
   .make("Toyota")
   .model("Camry")
   .color("blue")
   .build();

... I can't help but think of the old snark that we are reinventing Smalltalk and Lisp one feature at a time. A language extension here, a design pattern there, and pretty soon you have the language people want to use. Once again, I am turning into an old curmudgeon before my time.

As the author points out in a comment, Ruby gives us a more convenient way to fake named parameters: passing a hash of name/value pairs to the constructor. This is a much cleaner hack for programmers, because we don't have to do anything special; hashes are primitives. From the perspective of teaching Programming Languages this semester, what I like most about the Ruby example is that it implements the named parameters in data, not code. The duality of data and program is one of those Big Ideas that all CS students should grok before they leave us, and now I have a way to talk about the trade-off using Java, Scheme, and an idiomatic construction in Ruby, a language gaining steam in industry.
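
Here is a sketch of the same named-parameters-as-data idea in Scheme, with an association list playing the role of Ruby's hash; make-car and its option names are invented for illustration.

    (define (make-car options)
      (define (opt name default)
        (cond ((assq name options) => cdr)   ; take the named value...
              (else default)))               ; ...or fall back to a default
      (list 'car (opt 'year 2007) (opt 'make "Toyota")
                 (opt 'model "Camry") (opt 'color "blue")))

    (make-car '((model . "Corolla") (color . "red")))
    ;; => (car 2007 "Toyota" "Corolla" "red")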

Of course, we know that Scheme programmers don't need patterns... This topic came up in a recent thread on the PLT Scheme mailing list. Actually, the Scheme guys gave a reasonably balanced answer, in the context of a question that implied an unnecessary insertion of pattern-talk into Scheme programming. How would a Scheme programmer solve the problem that gives rise to fluent builders? Likely, write a macro: extend the language with new syntax that permits named parameters. This is the "pattern as language construct" mentality that extensible syntax allows. (But this leaves other questions unanswered, including: When is it worth the effort to use named parameters in this way? What trade-offs do we face among various ways to implement the macro?)

Finally, thinking ahead to next semester's compilers class, I can't help but think of ways to use this example to illustrate ideas we'll discuss there. A compiler can look for opportunities to optimize the cascaded message send shown above into a single function call. A code generator could produce a fluent builder for any given class. The latter would allow a programmer to use a fluent builder without the tedium of writing boilerplate code, and the former would produce efficient run-time code while allowing the programmer to write code in a clear and convenient way. See a problem; fix it. Sometimes that means creating a new tool.

Sometimes I wonder whether it is worth blogging ideas as simple as these. What's the value? I have a new piece of evidence in favor. Back in May 2007, I wrote several entries about a paper on the psychology of security. It was so salient to me for a while that I ended up suggesting to a colleague that he might use the paper in his capstone course. Sixteen months later, it is the very same colleague's capstone course that I find myself covering temporarily, and it just so happens that this week the students are discussing... Schneier's paper. Re-reading my own blog entries has proven invaluable in reconnecting with the ideas that were fresh back then. (But did I re-read Schneier's paper?)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

August 12, 2008 4:24 PM

TDD and GTD: Instances of a Pattern

I once wrote that extreme programming is a self-help system. This generalizes pretty well to other software methodologies, too. As we step away from developing software to personal hygiene, there is an entire ecosystem around the notion of life hacks, self-help for managing information and combatting data overload. Programmers and techies are active players in the lifehacking community because, well, we love to make tools to solve our problems and we love self-help systems. In the end, sometimes, we spend more time making tools and playing with them than actually solving our problems.

One of the popular lifehacking systems among techies is David Allen's Getting Things Done, or GTD. I've never read the book or adopted the system, but I've read about it and borrowed some of its practices in trying to treat my own case of information overload. The practices I have borrowed feel a lot like XP and especially test-driven development. Maybe that's why they appeal to me.

Consider this post on the basic concepts of GTD. Here is why GTD makes me think of TDD:

  1. think in terms of outcomes: write a test
  2. take the next action: take a simple action
  3. review your circumstances regularly: refactor

This is not a perfect match. In GTD, a goal from Step 1 may require many next actions, executed in sequence. In TDD, we decompose such big goals into smaller steps so that we can define a very clear next action to perform. And in GTD, Step 3 isn't really refactoring of a system. It's more a global check of where you are and how your lists of projects and next actions need to be revised or pruned. What resonates, though, is its discipline of regular review of where you are headed and how well your current 'design' can get you there.

It's not a perfect match, but then no metaphor is. Yet the vibe feels undeniably similar to me. Each has a mindset of short-term accountability through tests, small steps to achieve simple, clear goals, and regular review and clean-up of the system. The lifehackers who play with GTD even like to build tools to automate as much as they can so that they stay in the flow of getting things done as much as possible and trust their tools to help them manage performance and progress.

Successful patterns recur. I shouldn't be surprised to find these similarities.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

August 11, 2008 2:38 PM

Side Effects and Types in Refactoring

Greg Wilson relates an observation by Michael Feathers: "refactoring pure functional code is a lot easier than refactoring imperative code". In one sense, this ought not to surprise us. When we eliminate side effects from our code, dependencies among functions flow through parameters, which makes individual functions more predictably independent of one another. Without side effects, we don't have sequences of statements, which encourages smaller functions, which in turn makes the functions easier to understand.

(A function call does involve sequencing, because arguments are evaluated before the function is invoked. But this encourages small functions, too: Deeply-nested expressions can be quite hard to read.)
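
A tiny sketch of the difference, with invented procedures: in pure code, Extract Function is guaranteed safe, because a side-effect-free expression can always be replaced by a call that computes the same value.

    ;; Before: a pure function with a repeated subexpression.
    (define (hypotenuse a b)
      (sqrt (+ (* a a) (* b b))))

    ;; After Extract Function: because (* x x) has no side effects,
    ;; routing it through square cannot change the program's behavior.
    (define (square x) (* x x))
    (define (hypotenuse-after a b)
      (sqrt (+ (square a) (square b))))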

There is another force counteracting this one, though. Feathers has been playing a lot with Haskell, which is statically typed through manifest types and type inference. Many functional languages are dynamically typed, and dynamic typing makes it harder to refactor functional programs -- at least to guarantee that a particular refactoring does not change the program's behavior.

I'm a Scheme programmer when I use a functional language, so I encounter the conflict between these two forces. My suspicion from personal experience is that functional programmers need less support, or at least different kinds of support, when it comes to refactoring tools. The first key step is to identify refactorings from FP practice. From there, we can find ways to automate support for these refactorings. This is a longstanding interest of mine. One downside to my current position is a lack of time to devote to this research project...


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

August 07, 2008 2:57 PM

Design Ideas Lying in Wait

Ralph Johnson pointed me to a design idea for very large databases called a shard. This is a neat little essay for several reasons. First, its author, Todd Hoff, explains an architecture for massive, distributed databases that has grown up in support of several well-known, high-performance web sites, including Flickr, Google, and LiveJournal. Second, Hoff also wrote articles that describe the architectures of Flickr, Google, and LiveJournal. Third, all four pages point to external articles that are the source of the information summarized. Collectively, these pages make a wonderful text on building scalable data-based web systems.

I've posted this entry in my Patterns category because this recurring architecture has all the hallmarks of a design pattern. It even has a great name and satisfies the Rule of Three, something I've mentioned before -- and what a fine three it is. Each implementation uses the idea of a shard slightly differently, in fitting with the particular forces at play in the three companies' systems.

Buried near the bullet list on the Google page was an item worth repeating:

Don't ignore the Academy. Academia has a lot of good ideas that don't get translated into production environments. Most of what Google has done has prior art, just not prior large scale deployment.

This advice is a bit different from some advice I once shared for entrepreneurs, looking for Unix commands that haven't been implemented on the web yet, but the spirit is similar. Sometimes I hear envious people remark that Google hasn't done anything special; they just used a bunch of ideas others created to build a big system. Now, I don't think that is strictly true, but I do think that many of the ideas they used existed before in the database and networking worlds. And to the extent that is true, good for them! They paid attention in school, read beyond their assignments, and found some cool ideas that they could try in practice. Isn't that the right thing to do?

In any case, I recommend this article and encourage others to write more like it.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

July 28, 2008 3:44 PM

Revolution, Then Evolution

I recently started reading The Art of Possibility, by Roz and Ben Zander, and it brought to mind a pattern I have seen many times in literature and in life. Early on, the Zanders explain that this book is "not about making incremental changes that lead to new ways of doing things based on old beliefs". It is "geared toward causing a total shift of posture [and] perceptions"; it is "about transforming your entire world".

That's big talk, but the Zanders are not alone in this message. When talking to companies about creating new products, reaching customers, and running a business, Guy Kawasaki uses the mantra Revolution, Then Evolution. Don't try to get better at what you are doing now, because you aren't always doing the right things. But also don't worry about trying to be perfect at doing something new, because you probably won't be. Transform your company or your product first, then work to get better.

This pattern works in part because people need to be inspired. The novelty of a transformation may be just what your customers or teammates need to rally their energies, when "just" trying to get better will make them weary.

It also works despite running contrary to our fixation these days with "evolving". Sometimes, you can't get there from here. You need a mutation, a change, a transformation. After the transformation, you may not be as good as you would like for a while, because you are learning how to see the world differently and how to react to new stimuli. That is when evolution becomes useful again, only now moving you toward a higher peak than was available in the old place.

I have seen examples of this pattern in the software world. Writing software patterns was a revolution for many companies and many practitioners. The act of making explicit knowledge that had been known only implicitly, or the act of sharing internal knowledge with others and growing a richer set of patterns, requires a new mindset for most of us. Then we find out we are not very good, so we work to get better, and soon we are operating in a world that we may not have been able even to imagine before.

Adopting agile development, especially a practice-laden approach such as XP, is for many developers a Revolution, Then Evolution experience. So are major lifestyle changes such as running.

Many of you will recognize an old computational problem that is related to this idea: hill climbing. Programs that do local search sometimes get stuck at a local maximum. A better solution exists somewhere else in the search space, but the search algorithm makes it impossible for the program to get out of the neighborhood of the local max. One heuristic for breaking out of this circumstance is occasionally to make a random jump somewhere else in the search space, and see where hill climbing leads. If it leads to a better max, stay there, else jump back to the starting point.
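
A toy sketch of the heuristic in Scheme; the score function, with a low peak at 3 and a higher one at 10, is invented for illustration.

    (define (score x)
      (max (- 5 (abs (- x 3)))        ; local peak: score 5 at x = 3
           (- 9 (abs (- x 10)))))     ; global peak: score 9 at x = 10

    ;; Plain hill climbing on the integers: step to a better neighbor
    ;; until no neighbor improves the score.
    (define (climb x)
      (cond ((> (score (+ x 1)) (score x)) (climb (+ x 1)))
            ((> (score (- x 1)) (score x)) (climb (- x 1)))
            (else x)))

    ;; The escape hatch: jump somewhere else, climb from there, and
    ;; keep the new peak only if it beats the old one.
    (define (climb-with-jump x jump)
      (let ((here (climb x))
            (there (climb jump)))
        (if (> (score there) (score here)) there here)))

    (climb 0)                 ; => 3, stuck on the local maximum
    (climb-with-jump 0 8)     ; => 10, the jump escapes the neighborhood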

In AI and computer science more generally, it is usually easier to peek somewhere else, try for a while, and pop back if it doesn't work out. Most individuals are reluctant to make a major life change that may need to be undone later. We are, for the most part, beings living in serial time. But it can be done. (I sometimes envy the freer spirits in this world who seem built for this sort of experimentation.) It's even more difficult to cause a tentative radical transformation within an organization or team. Such a change disorients the people involved and strains their bonds, which means that you had better well mean it when you decide to transform the team they belong to. This is a major obstacle to Revolution, Then Evolution, and one reason that within organizations it almost always requires a strong leader who has earned everyone's trust, or at least their respect.

As a writer of patterns, I struggle with how to express the context and problem for this pattern. The context seems to be "life", though there are certainly some assumptions lurking underneath. Perhaps this idea matters only when we are seeking a goal or have some metric for the quality of life. The problem seems to be that we are not getting better, despite an effort to get better. Sometimes, we are just bored and need a change.

Right now, the best I can say from my own experience is that Revolution, Then Evolution applies when it has been a while since I made long-term progress, when I keep finding myself revisiting the same terrain again and again without getting better. This is a sign that I have plateaued or found a local maximum. That is when it is time to look for a higher local max elsewhere -- to transform myself in some way, and then begin again the task of getting better by taking small steps.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

July 07, 2008 1:58 PM

Patterns in My Writing

While reading this morning I came across a link to this essay. College students should read it, because it points out many of the common anti-patterns in the essays that we professors see -- even in papers written for computer science courses.

Of course, if you read this blog, you know that my writing is a poster child for linguistic diffidence, and pat expressions are part of my stock in trade. It's sad to know that these anti-patterns make up so much of my word count.

This web page also introduced me to Roberts's book Patterns in English. With that title, I must check it out. I needed a better reason to stop by the library than merely to return books I have finished. Now I have one.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

June 13, 2008 3:55 PM

Two Patterns Dealing with Side Effects

I ran across a pattern today that reminded me of another I encountered while at ChiliPLoP. Both help developers deal with side effects, changes to a program's state. Let me share these patterns with you all, with a common explanation of their value.

Most programs written these days are in an imperative style, setting and changing the values of one or more variables via a sequence of statements. Side effects are a common source of programmers' misunderstanding when reading code. A program can change a variable's state almost anywhere, and that makes reading any bit of code an uneasy experience: "Do I really know what this variable means right now?" Using local variables only in the context of a small procedure is one way to localize effects. Even still, problems can arise.

I won't try to write full patterns for these. I'll write short patlets and give you links to the articles I read, which do a good job talking about the ideas.

Mutual Recursion with Side Effects

In Solving Every Sudoku Puzzle, Peter Norvig builds a Python program for the title task. About 40% of the way in, he says:

If you have two mutually-recursive functions that both alter the state of an object, try to move almost all the functionality into just one of the functions. Otherwise you will probably end up duplicating code.

Duplicate code is usually a bad thing, because it creates a problem for programmers keeping the copies in sync. Duplicate code that modifies state is especially bad, because it brings in the extra complication of making another procedure harder to understand.
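
A made-up sketch of the advice in Scheme, not Norvig's Python: count-nodes! and walk-children are mutually recursive, but every change to the counter happens inside count-nodes!, so the state-altering code is written exactly once.

    (define counter 0)

    (define (count-nodes! tree)
      (set! counter (+ counter 1))    ; the only side effect in the pair
      (walk-children tree))

    (define (walk-children tree)
      (if (pair? tree)
          (for-each count-nodes! tree)))

    (count-nodes! '(a (b c) d))       ; afterwards, counter => 6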

Side Effects and Python's Default Parameters

Coincidentally, the second pattern involves Python, too. This time the pattern is Python-specific, addressing an issue with a language feature.

Context: You are writing a procedure with a default parameter.

Problem: The procedure needs to modify the default parameter. Python evaluates a default parameter's initial value only once, when the procedure is defined, so the same object is shared across every call that omits the argument.

Solution: Whenever the parameter is missing, initialize it within the method.

Tim Ottinger gives his implementation technique in Python's Mutable Default Problem:

def function( item, stuff=None ):
   stuff = stuff or []
   # ... modify stuff, etc.

This pattern allows the user to take advantage of the default parameter when he has no collection to send. It does so by following a principle laid out in the article Ottinger references: In Python, "don't use mutable objects as function defaults."

This pattern may not be Python-specific, because there may be other languages with this initialize-once behavior. It seems like a bug to me, not a feature, but I'm not a Python programmer and so can't speak authoritatively. But I am glad that Ruby doesn't do this.

~~~~~

Postscript: While preparing this article, I learned something new about Ruby, unrelated to this issue. It is possible to extract from a class a new class that has a subset of the original class's protocol. Very nice indeed!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

June 04, 2008 2:07 PM

Not Reading Books

I have another million to my credit, and it was a marvelous little surprise.

Popular culture is full of all sorts of literary references with which you and I are supposed to be familiar. Every year brings another one or two. The Paradox of Choice. The Tipping Point. The Wisdom of Crowds. Well-read people are expected, well, to have read the books, too. How else can we expect to keep up with our friends when they discuss these books, or to use the central wisdoms they contain in knowing ways?

I have a confession. I have read only two or three chapters of The Wisdom of Crowds. I have read only an excerpt from The Tipping Point that appeared in the New Yorker or some other literary magazine. And while I've seen a Google talk by Barry Schwartz on-line, I may not have read anything more than a short precis of the work. Of course, I have learned a lot about them from my friends, and by reading about them in various other contexts. But, strictly speaking, I have not read any of them.

To be honest, I feel no shame about this state of affairs. There are so, so many books to read, and these just have not seemed important enough to displace others from my list. And in the case of The Wisdom of Crowds, I found that one or two chapters told me pretty much all I needed to understand the Big Idea it contained. Much as Seth Godin has said about many popular business books, many books in the popular canon can be boiled down to much shorter works in their essence, with the rest being there for elaboration or academic gravitas.

cover of How to Talk About Books You Haven't Read

For airplane reading on my trip to the workshop at Google, I took Pierre Bayard's How to Talk About Books You Haven't Read. Bayard's thesis is that neither I nor anyone else should feel shame about not having read any given book, even if we feel a strong compulsion to comment, speak, or write about it. In not reading and talking anyway, we are part of a grand intellectual tradition and are, in fact, acting out of necessity. There are simply too many books to read.

This problem arises even in the most narrow technical situation. When I wrote my doctoral dissertation, I surely cited works with which I was familiar but which I had not "read", or, having read them, had only skimmed them for specific details. I recall feeling a little bit uneasy; what if some part of the book or dissertation that I had not studied deeply said something surprising or wrong? But I knew a lot about these works in context: from other people's analyses, from other works by the same author, and even from having discussed the work with the author him- or herself. But in an important way, I was talking about a work I "had not read".

How I could cite the work anyway and still feel I was being intellectually honest gets to one of the central themes of Bayard's book: the relationships between ideas are often more important than the ideas themselves. To understand a work in the context of the universal library means more than just to know the details of the work, and the details themselves are often so affected by conditions outside of the text that they are less reliable than the bigger picture anyway.

First, let me assure you. Bayard wrote this book with a wink in his eye. At times, he speaks with a cavalier sarcasm. He also repeats himself in places; occasional paragraphs sound as if they have been lifted verbatim from previous chapters.

Second, this book fits Seth Godin's analysis of popular business books pretty well. Two or three chapters were sufficient to express the basic idea of this book. But such a slim product would have missed something important. How to Talk About Books You Haven't Read started as a joke, perhaps over small talk at a cocktail party, but as Bayard expanded on the idea he ended up with an irreverent take on reading, thinking, and understanding that carries a lot more truth than I might first have imagined. Readers of this blog who are software patterns aficionados might think of Big Ball of Mud in order to understand just what I mean: antipattern as pattern, when looked at from a different angle.

This book covers a variety of books that deal in some way with not reading books but talking about them. Along the way, Bayard explores an even wider variety of ideas. Many of these sound silly, even wrong, at first, and he uses this to weave a lit-crit tale that is perfect parody. But as I read, I kept saying, "Yeah, but..." -- because, in a way, this really is true.

For example, Bayard posits that reading too much can cause someone to lose perspective in the world of ideas and to lose one's originality. In a certain way, the reader subordinates himself to the writer, and so reading too much means always subordinating to another rather than creating ideas oneself. We could read this as encouragement not to read (much), which would miss his joke. But there is another level at which he is dead-on right. I knew quite a few graduate students who learned this firsthand when they got into a master's program and found that they preferred immersing themselves in the research of others to doing creative research of their own. And there are many blogs which do a fine job reporting on other people's work but which never seem to offer much new. (I struggle with that danger each time I write in this blog.)

Not reading does not mean that we cannot have an opinion. My friends and I are examples of this. Students are notorious for this, and Bayard, a lit professor, discusses the case of students in class at some length. But I was most taken by his discussion of Laura Bohannan's experience telling the story of Hamlet to the Tiv people of West Africa. As she told the story, the Tiv elders interpreted the story for her, correcting her -- and Western culture, and Shakespeare -- along the way. One of the interpretations was a heterodoxy that has a small but significant following among Shakespeare scholars. The chief even ventured to predict how the story ended, and did a pretty good job. Bayard used this as evidence that not reading a book may actually leave our eyes open to new possibilities. Bohannan's story is available on-line, and you really should read it -- it is delightful.

Bayard talks about so many different angles on our relationship with books and stories about them, including

  • how we as listeners tend to defer to a speaker, and thus allow a non-reader to successfully talk about a book to us, and
  • how some readers are masters at imposing their own views on even books they have read, which should give us all license to impose ourselves on the books we haven't read.

One chapter focuses on our encounters with writers, and the ticklish situations they create for the non-reader and for the writer. In another, Bayard deals with the relationship among professors, students, and books. It made me think about how students interpret the things I say in class, whether about our readings or the technical material we are learning. Both of these chapters figure in a second entry I'm writing about this book, as well as chapters on the works of Montaigne and Wilde.

One chapter uses as its evidence the campus novels of David Lodge, of whom I am a big fan. I've never blogged about them, but I did use the cover of one of his books to illustrate a blog entry. Yet another draws on Bill Murray's classic Groundhog Day, an odd twist in which actually reading books enters into Bayard's story and supports his thesis. I have recommended this film before and gladly do so again.

As in so many situations, our fear of appearing weak or unknowledgeable is what prevents us from talking freely about a book we haven't read, or even from admitting that we have not read it. But this same fear is also responsible for discouraging us from talking about books we have read and about ideas we have considered. This is ultimately the culprit that Bayard hopes to undermine:

But our anxiety in the face of the Other's knowledge is an obstacle to all genuine creativity about books. The idea that the Other has read everything, and thus is better informed than us, reduces creativity to a mere stopgap that non-readers might resort to in a pinch. In truth, readers and non-readers alike are caught up in an endless process of inventing books, whether we like it or not, and the real question is not how to escape that process, but how to increase its dynamism and its range.

Bayard's book is worth reading just for his excerpts of other books and for his pointer to the story about the Tiv. I should probably feel guilt at not having read this book yet when so many others have, but I'm just happy to have read it now.

While reading on the plane coming home, I glanced across the aisle and noticed another passenger reading Larry Niven's Ringworld. I smiled and thought "FB++", in Bayard's rating system. I could talk about it nonetheless.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

May 12, 2008 12:24 PM

Narrative Fallacy on My Mind

In his recent bestseller The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb uses the term narrative fallacy to describe man's penchant for creating a story after the fact, perhaps subconsciously, in order to explain why something happened -- to impute a cause for an event we did not expect. This fallacy derives from our habit of imposing patterns on data. Many view this as a weakness, but I think it is a strength as well. It is good when we use it to communicate ideas and to push us into backing up our stories with empirical investigation. It is bad when we let our stories become unexamined truth and when we use the stories to take actions that are not warranted or well-founded.

Of late, I've been thinking of the narrative fallacy in its broadest sense, telling ourselves stories that justify what we see or want to see. My entry on a response to the Onward! submission by my ChiliPLoP group was one trigger. Those of us who believe strongly that we could and perhaps should be doing something different in computer science education construct stories about what is wrong and what could be better; we're like anyone else. That one OOPSLA reviewer shed a critical light on our story, questioning its foundation. That is good! It forces us to re-examine our story, to consider to what extent it is narrative fallacy and to what extent it matches reality. In the best case, we now know more about how to tell the story better and what evidence might be useful in persuading others. In the worst, we may learn that our story is a crock. But that's a pretty good worst case, because it gets us back on the path to truth, if indeed we have fallen off.

A second trigger was finding a reference in Mark Guzdial's blog to a short piece on universal programming literacy at Ken Perlin's blog. "Universal programming literacy" is Perlin's term for something I've discussed here occasionally over the last year, the idea that all people might want or need to write computer programs. Perlin agrees but uses this article to consider whether it's a good idea to pursue the possibility that all children learn to program. It's wise to consider the soundness of your own ideas every once in a while. While Perlin may not be able to construct as challenging a counterargument as our OOPSLA reviewer did, he at least is able to begin exploring the truth of his axioms and the soundness of his own arguments. And the beauty of blogging is that readers can comment, which opens the door to other thinkers who might not be entirely sympathetic to the arguments. (I know...)

It is essential to expose our ideas to the light of scrutiny. It is perhaps even more important to expose the stories we construct subconsciously to explain the world around us, because they are most prone to being self-serving or simply convenient screens to protect our psyches. Once we have exposed the story, we must adopt a stance of skepticism and really listen to what we hear. This is the mindset of the scientist, but it can be hard to take on when our cherished beliefs are on the line.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns

May 07, 2008 3:20 PM

Patterns as Descriptive Grammar

I've tried to explain the idea of software patterns in a lot of different ways, to a lot of different kinds of people. Reading James Tauber's Grammar Rules reminds me of one of my favorites: a pattern language is a descriptive grammar. Patterns describe how (good) programmers "really speak" when they are working in the trenches.

Talking about patterns as grammar creates the potential for the sort of misunderstanding that Tauber discusses in his entry. Many people, including many linguists, think of grammar rules as, well, rules. I was taught to "follow the rules" in school and came to think of the rules as beyond human control. Linguists know that the rules of grammar are man-made, yet some still seem to view them as prescriptive:

It is as if these people are viewing rules of grammar like they would road rules--human inventions that one may disagree with, but which are still, in some sense, what is "correct"...

Software patterns are rarely prescriptive in this sense. They describe a construct that programmers use in a particular context to balance the forces at play in the problem. Over time, such constructs have been found useful and so recur in similar contexts. But if a programmer decides not to use a pattern in a situation where it seems to apply, the programmer isn't "wrong" in any absolute sense; he'll just have to resolve the competing forces in some other way.

While the programmer isn't wrong, other programmers might look at him (or, more accurately, his program) funny. They will probably ask "why did you do it that way?", hoping to learn something new, or at least to confirm that the programmer has done something odd.

This is similar to how human grammar works. If I say, "Me wrote this blog", you would be justified in looking at me funny. You'd probably think that I was speaking incorrectly.

Tauber points out that, while I might be violating the accepted rules of grammar, I'm not wrong in any absolute sense:

... most linguists focus on modeling the tacit intuitions native speakers have about their language, which are very often at odds with the "rules of grammar" learnt at school.

He gives a couple of examples of rules that we hear broken all of the time. For example, native speakers of English almost always say "It's me", not "It's I", though that violates the rules of nominative and accusative case. Are we all wrong? In Sr. Jeanne's 7th-grade English class, perhaps. But English grammar didn't fall from the heavens as incontrovertible rules; it was created by humans as a description of accepted forms of speech.

When a programmer chooses not to use a pattern, other programmers are justified in taking a second look at the program and asking "why?", but they can't really say he's guilty of anything more than doing things differently.

Like grammar rules, some patterns are more "right" than others, in the sense that it's less acceptable to break some than others. I can get away with "It's me", even in more formal settings, but I cannot get away with "Me wrote this blog", even in the most informal settings. An OO programmer might be able to get away with not using the Chain of Responsibility pattern in a context where it applies, but not using Strategy or Composite in appropriate contexts just makes him look uninformed, or uneducated.
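To make the difference concrete, here is a minimal sketch of my own -- not from Tauber or from any patterns catalog -- of the same decision written without and with Strategy, in Python. The pricing domain and all the names are invented for illustration:

    # Without Strategy: a conditional re-decides the pricing rule on every call.
    def price_with_conditional(subtotal, customer_type):
        if customer_type == "member":
            return subtotal * 0.9
        elif customer_type == "employee":
            return subtotal * 0.7
        else:
            return subtotal

    # With Strategy: each rule is an object, and the register simply delegates.
    class RegularPricing:
        def price(self, subtotal):
            return subtotal

    class MemberPricing:
        def price(self, subtotal):
            return subtotal * 0.9

    class EmployeePricing:
        def price(self, subtotal):
            return subtotal * 0.7

    class Register:
        def __init__(self, pricing):
            self.pricing = pricing            # the strategy, chosen once

        def total(self, subtotal):
            return self.pricing.price(subtotal)

    register = Register(MemberPricing())
    print(register.total(100.00))             # 90.0

Neither version is wrong in any absolute sense; the conditional simply resolves the forces differently -- and invites the funny looks.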

A few more thoughts:

So, patterns are not like the grammar of a programming language, which is prescriptive. To speak Java at all, you have to follow the rules. They are like the grammar of a human language, which models observations about how people speak in the wild.

As a tool for teaching and learning, patterns are so useful precisely because they give us a way to learn accepted usages that go beyond the surface syntactic rules of a language. Even better, the pattern form emphasizes documenting when a construct works and why. Patterns are better than English grammar in this regard, at least better than the way English grammar is typically taught to us as schoolchildren.

There are certainly programmers, software engineers, and programming language theorists who want to tell us how to program, to define prescriptive rules. There can be value in this approach. We can often learn something from a model that has been designed based on theory and experience. But to me prescriptive models for programming are most useful when we don't feel like we have to follow them to the letter! I want to be able to learn something new and then figure out how I can use it to become a better programmer, not a programmer of the model's kind.

But there is also a huge, untapped resource in writing the descriptive grammar of how software is built in practice. It is awfully useful to know what real people do -- smart, creative people; programmers solving real problems under real constraints. We don't understand programming or software development well enough yet not to seek out the lessons learned by folks working in the trenches.

This brings to mind a colorful image, of software linguists venturing into the thick rain forest of a programming ecosystem, uncovering heretofore unexplored grammars and cultures. This may not seem as exotic as studying the Pirahã, but we never know when some remote programming tribe might upend our understanding of programming...


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

April 24, 2008 6:56 AM

On the Small Doses Pattern

The Small Doses pattern I wrote up in my previous entry was triggered almost exclusively by the story I heard from Carl Page. The trigger lives on in the text that runs from "Often times, the value of Small Doses..." to the end, and in the paragraph beginning "There is value in distributing...". The story was light and humorous, just the sort of story that will stick with a person for twenty or more years.

As I finally wrote the pattern, it grew. That happens all the time when I write. It grew both in size and in seriousness. At first I resisted getting too serious, but increasingly I realized that the more serious kernel of truth needed telling. So I gave it a shot.

The result of this change in tone and scope is that the pattern you read is not yet ready for prime time. Rather than wait until it was ready, though, I decided to let the pattern be a self-illustration. I have put it out now, in its rough form. It is rough both in completeness and in quality. Perhaps my readers will help me improve it. Perhaps I will have time and inspiration soon to tackle the next version.

In my fantasies, I have time to write more patterns in a Graduate Student pattern language (code name: Chrysalis), even a complete language, and cross-reference it with other pattern languages such as XP. Fantasies are what they are.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

April 23, 2008 6:16 PM

The Small Doses Pattern

Update: Added a known use contributed by Joe Bergin. -- 05/19/08.

Also Known As    Frequent Releases

From Pattern Language    Graduate Student

You are writing a large document that one or more other people must read and provide feedback on before it is distributed externally.

The archetype is a master's thesis or doctoral dissertation, which must be approved by a faculty committee.

You want to give your reviewers the best document possible. You want them to give you feedback and feel good about approving the document for external distribution.

You want to publish high-quality work. You would also like to have your reviewers see only your best work, for a variety of reasons. First, you respect them and are thankful for their willingness to help you. Second, they often have the ability to help you further in the future, in the form of jobs or recommendations. Good work will impress your reviewers more than weaker work. A more complete and mature document is more likely to resemble the final result than a rougher, earlier version.

In order to produce a document of the highest quality, you need time -- time to research, write, and revise. This delays your opportunity to distribute it to your reviewers. It also delays their opportunity to give you feedback.

Your reviewers are busy. They have their own work to do and are often volunteering their time to serve you.

Big tasks require a lot of time. If a task takes a lot of time to do, some people face a psychological barrier to starting the task.

A big task decomposed into several smaller tasks may take as much time as the big task, or more, but each small task takes less time. It is often easier to find time in small chunks, which makes it easier to start on the task in the first place.

The sooner your reviewers are able to read parts of the document, the sooner they will be able to give you feedback. This feedback helps you to improve the document, both in the large (topic, organization) and the small (examples, voice, illustrations, and so on).

Therefore:

Distribute your document to reviewers periodically over a relatively drawn-out period. Early versions can be complete parts of the document, or rough versions of the entire document.

In the archetypal thesis, you might give the reviewers one chapter per week, or you might give a whole thesis in which each part is in various stages of completeness.

There is certainly a trade-off between the quality of a document and the timeliness of delivery. Don't worry; this is just a draft. You are always free to improve and extend your work. Keep in mind that there is also a trade-off between the quality of a document and the amount of useful feedback you are able to incorporate.

There is value in distributing even a very rough or incomplete document at regular intervals. If reviewers read the relatively weak version and make suggestions, they will feel valuable. If they don't read it, they won't know or mind that you have made changes in later versions. Furthermore, they may feel wise for not having wasted their time on the earlier draft!

With the widespread availability of networks, we can give our reviewers real-time access to an evolving document in the form of an on-line repository. In such a case, Small Doses may take the form of comments recorded as each change is committed to the repository. It is often better for your reviewers if you give them periodic descriptions of changes made to the document, so that they don't have to wade through minutiae and can focus on discrete, meaningful jumps in the document.

I have seen Small Doses work effectively in a variety of academic contexts, from graduate students writing theses to instructors writing lecture notes and textbooks for students. I've seen it work for master's students and doctoral students, for text and code and web sites. Joe Bergin says that this pattern is an essential feature of the Doctor of Professional Studies program at Pace University. Joe has documented patterns for succeeding in the DPS program. (If you know of particularly salient examples of Small Doses, I'd love to hear them.)

Oftentimes, the value of Small Doses is demonstrated best by the results of applying its antipattern, Avalanche. Suppose you are nearing a landmark date, say, the deadline for graduation or the end of spring semester, when faculty will scatter for the summer. You dump a 50-, 70-, or 100+-page document on your thesis committee. In the busy time before the magic date, committee members have difficulty finding time to read the whole document. Naturally, they put off reading it. Soon they fall way behind and have a hard time meeting the deadline -- and they blame you for the predicament, for unrealistic expectations. Of course, had you given the committee a chapter at a time for 5 or 6 weeks leading up to the magic date, some committee members would still have fallen behind reading it, because they are distracted with their own work. But then you could blame them for not getting done!

Related Patterns    Extreme Programming and other agile software methodologies encourage Short Iterations and Frequent Releases. Such frequent releases are the Small Doses of the software development world. They enable more continuous feedback and allow the programmers to improve the quality of the code based on what they learn. The Avalanche antipattern is the source of many woes in the world of software, in large part due to the lack of feedback it affords from users and clients.

I learned the Small Doses pattern from Carl Page, my Artificial Intelligence II professor at Michigan State University in 1987. Professor Page was a bright guy and very good computer scientist, with an often dark sense of humor. He mentored his students and advisees with sardonic stories like Small Doses. He was also father to another famous Page, Larry.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

March 01, 2008 3:39 PM

Toward Less Formal Software

This week I ran across Jonathan Edwards's review of Gregor Kiczales's OOPSLA 2007 keynote address, "Context, Perspective and Programs" (which is available from the conference podcast page). Having recently commented on Peter Turchi's keynote, I figured this was a good time to go back and listen to all of Gregor's again. I first listened to part of it back when I posted it to the page, but I have to admit that I didn't understand it all that well then. So a second listen was warranted. This time I had access to his slides, which made a huge difference.

In his talk, Kiczales tries to bring together ideas from philosophy, language, and our collective experience writing software to tackle a problem that he has been working around his whole career: programs are abstractions, and any abstraction represents a particular point of view. Over time, the point of view changes, which means that the program is out of sync. Software folks have been thinking about ways to make programs capable of responding naturally as their contexts evolve. Biological systems have offered some inspiration in the last decade or so. Kiczales suggests that computer science's focus on formality gets in the way of us finding a good answer to our problem.

Some folks took this suggestion as meaning that we would surrender all formalism and take up models of social negotiation as our foundation. Roly Perera wrote a detailed and pointed review of this sort. While I think Perera does a nice job of placing Kiczales's issues in their philosophical context, I do not think Kiczales was saying that we should go from being formal to being informal. He was suggesting that we shouldn't have to go from being completely formal to being completely informal; there should be a middle ground.

Our way of thinking about formality is binary -- is that any surprise? -- but perhaps we can define a continuum between the two extremes. If so, we could write our program at an equilibrium point for the particular context it is in and then, as the context shifts, allow the program to move along the continuum in response.

Now that I understand a little better what Kiczales is saying, his message resonates well with me. It sounds a lot like the way a pattern balances the forces that affect a system. As the forces change, a new structure may need to emerge to keep the code in balance. We programmers refactor our code in response to such changes. What would it be like for the system to recognize changes in context and evolve? That's how natural systems work.

As usual, Kiczales is thinking thoughts worth thinking.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

February 24, 2008 12:48 PM

Getting Lost

While catching up on some work at the office yesterday -- a rare Saturday indeed -- I listened to Peter Turchi's OOPSLA 2007 keynote address, available from the conference podcast page. Turchi is a writer with whom conference chair Richard Gabriel studied while pursuing his MFA at Warren Wilson College. I would not put this talk in the same class as Robert Hass's OOPSLA 2005 keynote, but perhaps that has more to do with my listening to an audio recording of it and not being there in the moment. Still, I found it to be worth listening to as Turchi encouraged us to "get lost" when we want to create. We usually think of getting lost as something that happens to us when we are trying to get somewhere else. That makes getting lost something we wish wouldn't happen at all. But when we get lost in a new land inside our minds, we discover something new that we could not have seen before, at least not in the same way.

As I listened, I heard three ideas that captured much of the essence of Turchi's keynote. First was that we should strive to avoid preconception. This can be tough to do, because ultimately it means that we must work without knowing what is good or bad! The notions of good and bad are themselves preconceptions. They are valuable to scientists and engineers as they polish up a solution, but they often are impediments to discovering or creating a solution in the first place.

Second was the warning that a failure to get lost is a failure of imagination. When we work deeply in an area for a while, we sometimes feel as if we can't see anything new and creative because we know and understand the landscape so well. We have become "experts", which isn't always as dandy a status as it may seem. It limits what we see. In such times, we need to step off the easy path and exercise our imaginations in a new way. What must I do in order to see something new?

This leads to the third theme I pulled from Turchi's talk: getting lost takes work and preparation. When we get stuck, we have to work to imagine our way out of the rut. For the creative person, though, it's about more than getting out of a rut. The creative person needs to get lost in a new place all the time, in order to see something new. For many of us, getting lost may seem like something that just happens, but the person who wants to be lost has to prepare for it.

Turchi mentioned Robert Louis Stevenson as someone with a particular appreciation for "the happy accident that planning can produce". But artists are not the only folks who benefit from these happy accidents or who should work to produce the conditions in which they can occur. Scientific research operates on a similar plane. I am reminded again of Robert Root-Bernstein's ideas for actively engaging the unexpected. Writers can't leave getting lost to chance, and neither can scientists.

Turchi comes from the world of writing, not the world of science. Do his ideas apply to the computer scientist's form of writing, programming? I think so. A couple of years ago, I described a structured form of getting lost called air-drop programming, which adventurous programmers use to learn a legacy code base. One can use the same idea to learn a new framework or API, or even to learn a new programming language. Cut all ties to the familiar, jump right in, and see what you learn!

What about teaching? Yes. A colleague stopped by my office late last week to describe a great day of class in which he had covered almost none of what he had planned. A student had asked a question whose answer led to another, and then another, and pretty soon the class was deep in a discussion that was as valuable as, or more valuable than, the planned activities. My colleague couldn't have planned this unexpectedly good discussion, but his and the class's work put them in a position where it could happen. Of course, unexpected exploration takes time... When will they cover all the material of the course? I suspect the students will be just fine as they make adjustments downstream this semester.

What about running? Well, of course. The topic of air-drop programming came up during a conversation about a general tourist pattern for learning a new town. Running in a new town is a great way to learn the lay of the land. Sometimes I have to work not to remember landmarks along the way, so that I can see new things on my way back to the hotel. As I wrote after a glorious morning run at ChiliPLoP three years ago, sometimes you run to get from Point A to Point B; sometimes, you should just run. That applies to your hometown, too. I once read about an elite women's runner who recommended being dropped off far from your usual running routes and working your way back home through unfamiliar streets and terrain. I've done something like this myself, though not often enough, and it is a great way to revitalize my running whenever the trails start to look like the same old same old.

It seems that getting lost is a universal pattern, which made it a perfect topic for an OOPSLA keynote talk.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns, Running, Software Development, Teaching and Learning

January 30, 2008 8:39 AM

What is a Tree?

Ira Greenberg's Cobalt Rider

I can talk about something other than science. As I write this, I am at a talk called "What is a Tree?", by computational artist Ira Greenberg. In it, Greenberg is telling his story of going from art to math -- and computation -- and back.

Greenberg started as a traditional artist, based in drawing and focused in painting. He earned his degrees in Visual Art, from Cornell and Penn. His training was traditional, too -- no computation, no math. He was going to paint.

In his earliest work, Greenberg was caught up in perception. He found that he could experiment only with the motif in front of him. Over time he evolved from more realistic natural images to images that were more "synthetic", more plastic. His work came to be about shape and color. And pattern.

Just one word -- plastics.  From The Graduate.

Alas, he wasn't selling anything. Like all of us, he needed to make some money. His uncle told him to "look into computers -- they are the future". (This is 1993 or so...) Greenberg could not have been less interested. Working with computers seemed like a waste of time. But he got a computer, some software, and some books, and he played. In spite of himself, he loved it. He was fascinated.

Soon he got paying gigs at places like Conde Nast. He was making good money doing computer graphics for marketing and publishing folks. At the time, he said, people doing computer graphics were like mad scientists, conjuring works with mystical incantations. He and his buddies found work as hired guns for older graphic artists who had no computer skills. The old hands would stand over their shoulders, point at the screen, and say in rapid-fire style, "Do this, do this, do this." "We did, and then they paid us."

All the while, Greenberg was still doing his "serious work" -- painting -- on the side.

Ira Greenberg's Blue Flame

But he got good at this computer stuff. He liked it. And yet he felt guilty. His artist friends were "pure", and he felt like a sell-out. Even still, he felt an urge to "put it all together", to understand what this computer stuff really meant to his art. He decided to sell out all the way: to go to NYC and sell these marketable skills for big money. The time was right, and the money was good.

It didn't work. Going to an office to produce commercial art for hire changed him, and his wife noticed. Greenberg sees nothing wrong with this kind of work; it just wasn't for him. Still, he liked at least one thing about doing art in the corporate style: collaboration. He was able to work with designers, writers, marketing folks. Serious painters don't collaborate, because they are doing their own art.

The more he worked with computers in the creative process, the more he began to feel as if using tools like Photoshop and LightWave was cheating. They provide an experience that is too "mediated". With any activity, as you get better you "let the chaos guide you", but these tools -- their smoothness, their engineered perfection, their Undo buttons -- were too neat. Artists need fuzziness. He wanted to get his hands dirty. Like painting.

So Greenberg decided to get under the hood of Photoshop. He started going deeper. His artist friends thought he was doing the devil's work. But he was doing cool stuff. Oftentimes, he felt that the odd things generated by his computer programs were more interesting than his painting!

Ira Greenberg's Hoffman Plasticity Visualizer

He went deeper with the mathematics, playing with formulas, simulating physics. He began to substitute formulas inside formulas inside formulas. He -- his programs -- produced "sketches".

At some point, he came across Processing, "an open source programming language and environment for people who want to program images, animation, and interactions". This is a domain-specific language for artists, implemented as an IDE for Java. It grew out of work done by John Maeda's group at the MIT Media Lab. These days he programs in ActionScript, Java, Flash, and Processing, and promotes Processing as perhaps the best way for computer-wary artists to get started computationally.

What is a Tree? poster

With his biographical sketch done, he moved on to the art that inspired his talk's title. He showed a series of programs that demonstrated his algorithmic approach to creativity. His example was a tree, a double entendre evoking both his past as a painter of natural scenes and his embrace of computer science.

He started with the concept of a tree in a simple line drawing. Then he added variation: different angles, different branching factors. These created asymmetry in the image. Then he added more variation: different scales, different densities. Then he added more variation: different line thickness, "foliage" at the end of the smallest branches. With random elements in the program, he gets different output each time he runs the code. He added still more variation: color, space, dimension, .... He can keep going along as many different conceptual dimensions as he likes to create art. He can strive for verisimilitude, representation, abstraction, ... any artistic goal he might seek with a brush and oils.
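I don't have Greenberg's code, so what follows is only a guess at the flavor of the idea, in Python's turtle module rather than Processing. Every run draws a different tree:

    import random
    import turtle

    # Draw a branch, then recursively draw a few smaller, randomly varied branches.
    def tree(t, length, depth):
        if depth == 0 or length < 2:
            return
        t.forward(length)
        for _ in range(random.randint(2, 3)):      # vary the branching factor
            angle = random.uniform(-40, 40)        # vary the branch angle
            t.left(angle)
            tree(t, length * random.uniform(0.5, 0.8), depth - 1)  # vary the scale
            t.right(angle)                         # restore the heading
        t.backward(length)                         # walk back down the branch

    t = turtle.Turtle()
    t.left(90)                                     # point the trunk upward
    tree(t, 80, 5)
    turtle.done()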

Greenberg's artistic medium is code. He writes some code. He runs it. He changes some things and runs it again. This process is interactive with the medium. He evolves not a specific work of art, but an algorithm that can generate an infinite number of works.

I would claim that in a very important sense his work is the algorithm. For most artists, the art is in the physical work they produce. For Greenberg, there is a level of indirection -- which is, interestingly, one of the most fundamental concepts of computer science. For me, perhaps the algorithm is the artistic work! Greenberg's program is functional, not representational, and what people want to see is the art his programs produce. But code can be beautiful, too.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

December 05, 2007 2:42 PM

Catch What You're Thrown

Context     You are in an Interactive Performance, perhaps a play, using Scripted Dialogue.

Problem     The performer speaking before you delivers a line incorrectly. The new line does not change the substance of the play, but it interrupts the linguistic flow.

Example     Instead of saying "until the first of the year", the performer says "for the rest of the year".

Forces     You know your lines and want to deliver them correctly.

The author wrote the dialogue with a purpose in mind.

Delivering the line as you memorized it is the safest way for you to proceed, and also the safest route back on track.

BUT...     Delivering the scripted line will call attention to the error. This may disconcert your partner. It will also break the mood for the audience.

So:     Adapt your line to the set up. Respond in a way that is seamless to the audience, retains the key message of the story, and gets the dialogue back on track.

That is, catch what you are thrown.

Example     Change your line to say "for the rest of the year?" instead of "until the first of the year?"

Related Patterns     If the performer speaking before you misses a line entirely, or gets off the track of the Scripted Dialogue, deliver a Redirecting Line.

----

Postscript: This category of my blog is intended for software patterns and discussion thereof, but this is a pattern I just learned and felt a strong desire to write. I may well try to write Redirecting Line and maybe even the higher-level Scripted Dialogue and Interactive Performance patterns, if the mood strikes me and the time is available. I never thought of a pattern language of performance when I signed on for this gig... And just so you know, I was the performer who mis-delivered his line in the example given above, where I first encountered Catch What You're Thrown.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Personal

November 28, 2007 3:33 PM

Learning About Software from The Theater, and Vice Versa

Last Sunday was my fourth rehearsal as a newly-minted actor. It was our first run-through of the entire play on stage, and the first time the director had a chance to give notes to the entire cast. The whole afternoon, my mind was making connections between plays and programs -- and I don't mean the playbill. These thoughts follow from my experiences in the play and my experiences as a software developer. Here are a few.

Scenes and Modularity     Each scene is like a module that the director debugs. In some plays, the boundaries between scenes are quite clean, with a nice discrete break from one to the next. With a play on stage, the boundaries between scenes may be less clear.

In our play, scenes often blend together. Lights go down on one part of the stage and up on another, shifting the audience's attention, while the actors in the first scene remain in place. Soon, the lights shift back to the first and the action picks up where it left off. Plays work this way in part because they operate in limited real estate, and changing scenery can be time-consuming. But this also makes the boundaries between scenes leaky.

It is easier to debug scenes that are discrete modules. Debugging scenes that run together is difficult for the same reasons that debugging code modules with leaky boundaries is. A change in one scene (say, repositioning the players) can affect the linked scene (say, by changing when and where a player enters).

Actors and Code     If the director debugs a scene, then maybe the actors, their lines, and the props are like the code. That just occurred to me while typing the last section!

Time Constraints     It is hard to get things right in a rush. Based on what he sees as the play executes, the director makes changes to the content of the show. He also refactors: rearranging the location of a prop or a player, either to "clean things up" or to facilitate some change he has in mind. It takes time to clean things up, and we only know that something needs to be changed as we watch the play run, and think about it.

Mock Objects and Stand-Ins     When rehearsing a scene, if you don't have the prop that you will use in the play itself, then use something to stand in its place. It helps you get used to the fact that something will be there when you perform, and it helps the director remember to take into account its use and placement. You have to remember to have it at the right time in the right place, and return it to the prop table when it's not in use.

A good mock prop just needs to have some of the essential features of the actual prop. When rehearsing a scene in which I will be carrying some groceries to the car, I grabbed a box full of odds and ends out of some office. It was big enough and unwieldy enough to help me "get into the part".
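The software version in miniature, as a hypothetical sketch of my own -- a hand-rolled fake rather than any particular mocking library:

    # A cheap stand-in for a real payment gateway. Like the box of odds and
    # ends, it has just enough of the essential features for the scene at hand.
    class FakeGateway:
        def __init__(self):
            self.charges = []

        def charge(self, amount):
            self.charges.append(amount)   # record the call; no real network traffic
            return True

    def checkout(total, gateway):
        return gateway.charge(total)

    gateway = FakeGateway()
    assert checkout(42.00, gateway)
    assert gateway.charges == [42.00]     # verify the prop was used, and where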

Conclusion for Now     The relationship between software and plays or movies is not a new idea, of course. I seem to recall a blog entry a couple of years ago that explored the analogy of developing software as producing a film, though I can't find a link to it just now. Googling for it led me to a page on Ward's wiki and to two entries on the Confused of Calcutta blog. More reading for me... And of course there is Brenda Laurel's seminal Computers As Theatre. You really should read that!

Reading is a great way to learn, but there is nothing quite like living a metaphor to bring it home.

The value that comes from making analogies and metaphors lies in what we can learn from them. I'm looking forward to a couple of weeks more of learning from this one -- in both directions.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

November 10, 2007 4:21 PM

Programming Challenges

Gerald Weinberg's recent blog entry Setting (and Character): A Goldilocks Exercise describes a writing exercise that I've mentioned here as a programming exercise, a pedagogical pattern many call Three Bears. This is yet another occurrence of a remarkably diverse pattern.

Weinberg often describes exercises that writers can use to free their minds and words. It doesn't surprise me that "freeing" exercises are built on constraints. In one post, Weinberg describes The Missing Letter, in which the writer writes (or rewrites) a passage without using a randomly chosen letter. The most famous example of this form, known as a lipogram, is La disparition, a novel written by Georges Perec without using the letter 'e' -- except to spell the author's name on the cover.

When I read that post months ago, I immediately thought of creating programming exercises of a similar sort. As I quoted someone in a post on a book about Open Court Publishing, "Teaching any skill requires repetition, and few great works of literature concentrate on long 'e'." We can design a set of exercises in which the programmer surrenders one of her standard tools. For instance, we could ask her to write a program to solve a given problem, but...

  • with no if statements. This is exactly the idea embodied in the Polymorphism Challenge that my friend Joe Bergin and I used to teach a workshop at SIGCSE a couple of years ago and which I often find useful in helping programmers new to OOP see what is possible. (A couple of small sketches follow this list.)

  • with no for statements. I took a big step in understanding how objects worked when I realized how the internal iterators in Smalltalk's collection classes let me solve repetition tasks with a single message -- and a description of the action I wanted to take. It was only many years later that I learned the term "internal iterator" or even just "iterator", but by then the idea was deeply ingrained in how I programmed.

    Recursion is the usual path for students to learn how to repeat actions without a for statement, but I don't think most students get recursion the way most folks teach it. Learning it with a rich data type makes a lot more sense.

  • with no assignment statements. This exercise is a double-whammy. Without assignment statements, there is no call for explicit sequences of statements, either. This is, of course, what pure functional programming asks of us. Writing a big app in Java or C using a pure functional style is wonderful practice.

  • with no base types. I nearly wrote about this sort of exercise a couple of years ago when discussing the OOP anti-pattern Primitive Obsession. If you can't use base types, all you have left are instances of classes. What objects can do the job for you? In most practical applications, this exercise ultimately bottoms out in a domain-specific class that wraps the base types required to make most programming languages run. But it is a worthwhile practice exercise to see how long one can put off referring to a base type and still make sense. The overkill can be enlightening.

    Of course, one can start with a language that provides only the most meager set of base types, thus forcing one to build up nearly all the abstractions demanded by a problem. Scheme feels like that to most students, but only a few of mine seem to grok how much they can learn about programming by working in such a language. (And it's of no comfort to them that Church built everything out of functions in his lambda calculus!)
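Here are the sketches promised above, in Python rather than the Java and Smalltalk of the original challenges; the examples are mine and deliberately tiny. First, the "no if statements" challenge, met by letting polymorphism make the decision:

    # A traffic light whose behavior varies with its state -- no 'if' in sight.
    class Green:
        def next(self): return Yellow()
        def go(self):   return True

    class Yellow:
        def next(self): return Red()
        def go(self):   return False

    class Red:
        def next(self): return Green()
        def go(self):   return False

    light = Green()
    for _ in range(4):                    # the challenge bans 'if', not 'for'
        print(type(light).__name__, light.go())
        light = light.next()

And the "no for statements" challenge, in the spirit of Smalltalk's internal iterators:

    # Each repetition task is a single message to the collection, as in Smalltalk.
    from functools import reduce

    prices  = [3.50, 12.00, 0.99, 7.25]
    total   = reduce(lambda a, b: a + b, prices)        # inject:into:
    cheap   = list(filter(lambda p: p < 5.00, prices))  # select:
    doubled = list(map(lambda p: p * 2, prices))        # collect:
    print(total, cheap, doubled)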

This list operates at the level of programming construct. It is just the beginning of the possibilities. Another approach would be to forbid the use of a data structure, a particularly useful class, or an individual base type. One could even turn this idea into a naming challenge by hewing close to Weinberg's exercise and forbidding the use of a selected letter in identifiers. As an instructor, I can design an exercise targeted at the needs of a particular student or class. As a programmer, I can design an exercise targeted at my own blocks and weaknesses. Sometimes, it's worth dusting off an old challenge and doing it for its own sake, just to stay sharp or shake off some rust.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

November 02, 2007 1:24 PM

Refactoring, Functional Programming-Style

Back in 2002, I had a sabbatical (which, for political reasons, is called a "professional development assignment" here) to document some of the patterns of programs that are written in a functional style. So much good work had been done by then on object-oriented patterns, and a few people had begun to describe OO patterns that were motivated by functional programming techniques. But few people had begun to document functional programming patterns. I had a feeling that something useful could come from moving into that space. In the end I didn't make the sort of progress I had hoped, but I did learn a lot that has continued to influence my teaching and research.

One of the areas explored as a part of that project was refactoring functional programs. I believe that refactorings are the behavioral side of patterns, and by looking at existing work on functional refactorings I hoped to speed my search for patterns. There hadn't been much work done on refactoring functional programs at that time, but I wasn't alone in thinking of it -- Simon Thompson and Claus Reinke were just beginning their quite productive research program on the same topic. That work, like most of the work that deals with patterns and refactoring in functional programming, focused more on the language-processing side of the topic, considering refactorings as program transformations in the vein of compiler optimizations. What seems to me still to be missing is a look at refactoring of functional programs from the software developer's perspective: I have a piece of code that I want to restructure in order to improve its design, or in order to facilitate the next addition to the program. What do I do next?

the logo of Untyped, a software firm

Noel Welsh recently wrote a blog entry called Refactoring Functional Programs that makes a very nice contribution in this space. Noel is just the sort of person who should be documenting FP patterns and refactorings, a practicing developer who works primarily in a functional or mostly functional language and who is growing a significant software system over time. His essay describes a sequence of refactorings he applied to his system, a web application framework, moving one of its modules from a long-ish, overly complex, repetitive piece of code to something shorter, simpler, and well-factored. You will enjoy reading the article even if you don't program in Scheme, because Noel explains the thinking he did as he attacked this piece of code, including an alternative step that he considered but rejected due to the forces that affect his system.

For the record, his three refactorings were these. The names are part mine, part his:

  • convert mutually tail-recursive state machine to continuation-passing style
  • separate computational abstraction from control abstraction
  • convert continuation-passing style to direct style
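Noel's code is Scheme and his state machine is real; what follows is only a toy of my own, in Python, to suggest the shape of the first and last steps. A matcher for strings of zero or more a's followed by a single b, first as mutually tail-recursive states, then in continuation-passing style:

    # Direct style: two mutually recursive "states".
    def state_a(s, i):                  # consume zero or more a's
        if i < len(s) and s[i] == 'a':
            return state_a(s, i + 1)
        return state_b(s, i)

    def state_b(s, i):                  # then require exactly one final b
        return i + 1 == len(s) and s[i] == 'b'

    # The same machine in CPS: each state receives the rest of the computation.
    def state_a_k(s, i, k):
        if i < len(s) and s[i] == 'a':
            return state_a_k(s, i + 1, k)
        return state_b_k(s, i, k)

    def state_b_k(s, i, k):
        return k(i + 1 == len(s) and s[i] == 'b')

    print(state_a("aaab", 0))                   # True
    print(state_a_k("aaab", 0, lambda ok: ok))  # True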

One of the things I like about this story is how it makes a "round-trip" to CPS and back, using the trip to change the program's design in a way that might have been inaccessible otherwise. I teach my students a similar "refactoring pattern" when teaching them functional programming: make a round-trip through Mutual Recursion, by way of Program Derivation, to convert a messy one-procedure solution into a clean and elegant one-procedure solution.

I hope that other developers using functional programming in the trenches will follow Noel's lead and document some of the patterns and refactorings that they discover in their code and process. More than anything else, this sort of pragmatic, nuts-and-bolts writing will help to increase the accessibility of functional programming style to a wider audience of programmers.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

October 10, 2007 7:10 PM

Three Lists, Three Agile Ideas

Browsing through several overflowing mailboxes from various agile software development lists -- thank the mail gods for procmail -- I ran across three messages that made me save them for later reflection or use in a class.

On the extreme programming mailing list, Laurent Bossavit wrote:

> How do you deal with [a change breaks lots of tests]?

See it as an opportunity. A single change breaking a lot of tests means that your design has excessive coupling, *or* that your unit tests are written at too low a level of abstraction. This is an important and valuable thing to have learned about your code and tests.

Most of the time, we treat unexpected obstacles as problems -- either problems to solve, or problems to avoid. When we operate under time pressure, we usually try to avoid such obstacles. Students often find themselves in a bind for time and so seek to avoid problems, rather than using them as an opportunity to make their programs better. This is another reason to start assignments early: to have time to be opportunistic on unexpected chances to learn!

Time pressure isn't the only reason students avoid problems. Some are driven by grades, in order to impress potential employers. (Maybe this will change in a world not driven by "getting a job".) Others are simply afraid to take the risk. Our school system does a pretty good job of beating risk-taking behavior out of students by the time they reach the university, and universities often don't do enough to undo this before they graduate into professional careers.

On the agile content of Laurent's comment, he is spot-on, of course. All those broken tests are excellent feedback on a weakness in either the system or the tests. Listen to the code.

On the Crystal Clear mailing list (dedicated to discussing "the ultralight Crystal Clear software development methodology"), methodology creator Alistair Cockburn wrote:

"Deciding what to build" is my new replacement phrase for "requirements". The word "requirements" tends to imply
  • that [someone can] correctly locate and pluck them,
  • that they are "true"

None of those are correct. They don't already pre-exist so they can't be "plucked", [no one is] in that unique position [to locate and pluck them], and they aren't "true".

which is why I nowadays prefer to talk about "deciding what to build".

I wish that more people -- software engineers, programmers, and consumers of software and software development services alike -- would think like this! "Gathering requirements" is a metaphor that goes wrong both on 'gathering' and on 'requirements'. "Deciding what to build" creates a completely different mental image and sets up a completely different set of performance expectations. As Alistair tells it, "deciding what to build" is a negotiation, not an exam question about Platonic ideals that has a correct answer. A negotiation can take into account many factors, including benefits, costs, preferences, and time.

The developer and customer can then record the results of their negotiation in a "contract" of whatever form they prefer. If they are so inclined, the contract can take the form of a set of tests that captures what the developer will deliver, say, FIT tests. When one side needs to "break the contract", the negotiation should have specified relative preferences that enable the parties to arrive at a new understanding of the system to be built. More negotiation -- not "adding a requirement" or "not delivering a requirement".
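What such a recorded negotiation might look like, as a hedged sketch -- plain Python assertions rather than FIT itself, with the rule and the numbers invented for illustration:

    # The examples the developer and customer agreed on, written as checks
    # that pass or fail: the "contract" is executable.
    def shipping_cost(order_total):
        return 0.00 if order_total >= 50.00 else 4.95

    agreed_examples = [
        (49.99, 4.95),    # just under the free-shipping line
        (50.00, 0.00),    # exactly at the line
        (120.00, 0.00),
    ]

    for total, expected in agreed_examples:
        assert shipping_cost(total) == expected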

Finally, for a touch of humor, I give you this passage from a message to the refactoring mailing list from Buddha Buck:

Our team is considering implementing the refactoring "Replace C++ With Language With Good Refactoring Tool Support". It's a major refactoring, and full implementation might take months, if we decide to do it.

There have to be some pleasant steps to execute in this refactoring, and some even more pleasant milestones. That last rm *.cpp has to feel good.

More seriously, I think there is a great opportunity to write refactorings that aren't about software architecture and code. The patterns of Christopher Alexander begat the software patterns world, which caused many of us to write the patterns of other kinds of systems, including music, management, and pedagogy. In many software circles, refactorings are behavior-preserving modifications that target the patterns of the domain. If we write patterns of management or pedagogy, then why not write refactorings that help people prepare their environments for disruptive change? An interesting question comes to mind: what does it mean for a change to a management system to "preserve behavior"? This seems like a very cool avenue for some thought, even if it hits a dead end.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

August 31, 2007 5:39 PM

Good Writing, Good Programming

This week I ran across a link to an old essay called Good Writing, by Marc Raibert. By now I shouldn't be so happy to be reminded how much good programming practice is similar to good writing in general. But I usually am. The details of Raibert's essay are less important to me than some of his big themes.

Raibert starts with something that people often forget. To write well, one must ordinarily want to write well and believe that one can. Anyone who teaches introductory computer science knows how critical motivation and confidence are. Many innovations in CS1 instruction over the years have been aimed at helping students to develop confidence in the face of what appear to be daunting obstacles, such as syntax, rigor, and formalism. Much wailing and gnashing of teeth has followed the slowly dawning realization that students these days are much less motivated to write computer programs than they have been over most of the last thirty years. Again, many innovations and proposals in recent years have aimed at motivating students -- more engaging problems, media computation, context in the sciences and social sciences, and so on. These efforts to increase motivation and confidence are corporate efforts, but Raibert reminds us that, ultimately, the individual who would be a writer must hold these beliefs.

After stating these preconditions, Raibert offers several pieces of advice that apply directly to computing. Not surprisingly, my favorite was his first: Good writing is bad writing that was rewritten. This fits nicely in the agile developer's playbook. I think that few developers or CS teachers are willing to say that it's okay to write bad code and then rewrite. Usually, when folks speak in terms of do the simplest thing that will work and refactor mercilessly, they do not usually mean to imply that the initial code was bad, only that it doesn't worry inordinately about the future. But one of the primary triggers for refactoring is the sort of duplication that occurs when we do the simplest thing that will work without regard for the big picture of the program. Most will agree that most such duplication is a bad thing. In these cases, refactoring takes a worse program and creates a better one.

Allowing ourselves to write bad code empowers us, just as it empowers writers of text. We need not worry about writing the perfect program, which frees us to write code that just works. Then, after it does, we can worry about making the program better, both structurally and stylistically. But we can do so with the confidence that comes from knowing that the substance of our program is on track.

Of course, starting out with the freedom to write bad code obligates us to re-write, to refactor, just as it obligates writers of text to re-write. Take the time! That's how we produce good code reliably: write and re-write.
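A tiny, invented illustration of the rhythm in Python -- first the draft that just works, then the rewrite:

    # First draft: do the simplest thing that works, duplication and all.
    def print_receipt(items):
        for name, price in items:
            print(f"{name:<20}{price:>8.2f}")

    def print_refund(items):
        for name, price in items:
            print(f"{name:<20}{-price:>8.2f}")

    # The rewrite: once it works, factor out the duplicated idea.
    def print_lines(items, sign=1):
        for name, price in items:
            print(f"{name:<20}{sign * price:>8.2f}")

    def print_receipt(items):             # same behavior, better structure
        print_lines(items)

    def print_refund(items):
        print_lines(items, sign=-1)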

I wish more of my freshmen would heed this advice:

The first implication is that when you start a new [program], there is nothing wrong with using bad writing. Your goal when you start is to get your ideas down on paper in any form you can.

For the novice programmer, I do not recommend writing ungrammatical or "stream of consciousness" code, but I do encourage them to take the ideas they have after thinking about the problem and express them in code. The best way to find out if an idea is a good one is to see it run in code.

Raibert's other advice also applies. When I read Spill the beans fast, I think of making my code transparent. Don't hide its essence in subterfuge that makes me seem clever; push its essence out where all can see it and understand the code. Many of the programmers whose code I respect most, such as Ward Cunningham, write code that is clear, concise, and not at all clever. That's part of what makes it beautiful.

Don't get attached to your prose is essential when writing prose, and I think it applies to code as well. Just because you wrote a great little method or class yesterday doesn't mean that it should survive in your program of tomorrow. While programming, you discover more about your problem and solution than you knew yesterday. I love Raibert's idea of a PRIZE_WINNING_STUFF.TXT file. I have a directory labeled playground/ where I place all the little snippets I've built as simple learning experiments, and now I think I need to create a winners/ directory right next to it!

Raibert closes with the advice to get feedback and then to trust your readers. A couple of months back I had a couple of entries on learning from critics, with different perspectives from Jerry Weinberg and Alistair Cockburn. That discussion was about text, not code (at least on the surface). But one thing that we in computer science need to do is to develop a stronger culture of peer review of our code. The lack of one is one of the things that most differentiates us from other engineering disciplines, to which many in computing look for inspiration. I myself look more to the creative disciplines for inspiration than to engineering, but on this the creative and engineering disciplines agree: getting feedback, using it to improve the work, and using it to improve the person who made the work are essential. I think that finding ways to work the evaluation of code into computer science courses, from intro courses to senior project courses, is a path to improving CS education.

PLoP 2007

This last bit of advice from Raibert is also timely, if bittersweet for me... In just a few days, PLoP 2007 begins. I am quite bummed that I won't be able to attend this year, due to work and teaching obligations. PLoP is still one of my favorite conferences. Patterns and writers' workshops are both close to my heart and my professional mind. If you have never written a software pattern or been to PLoP, you really should try both. You'll grow, if only from learning how a culture of review and trust can change how you think about writing.

OOPSLA 2007

The good news for me is that OOPSLA 2007, which I will be attending, features a Mini-PLoP. This track consists of a pattern writing bootcamp, a writers' workshop for papers in software development, and a follow-up poster. I hope to attend the writers' workshop session on Monday, even if only as a non-author who pledges to read papers and provide feedback to authors. It's no substitute for PLoP, but it's one way to get the feeling.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

August 08, 2007 3:40 PM

Let's Kill Dick and Jane

No, I've not become homicidal. That is the title of a recent book about the Open Court Publishing Company, which according to its subtitle "fought the culture of American education" by trying to change how our schools teach reading and mathematics. Blouke Carus, the creator of Open Court's reading program, sought to achieve an enviable goal -- engagement with and success in the world of great ideas for all students -- in a way that was beholden to neither the traditionalist "back to basics" agenda nor the progressivist "child-centered" agenda. Since then, the reading series has been sold to McGraw-Hill.

Thanks to the creator of the TeachScheme! project, Matthias Felleisen, I can add this book to my reading list. He calls Let's Kill Dick and Jane "TeachScheme! writ large". Certainly there are differences between the K-8 education culture and the university computer science culture, but they share enough commonalities to make reform efforts similarly difficult to execute. As I have noted before, universities and their faculty are a remarkably conservative lot. TeachScheme! is a principled, comprehensive redefinition of introductory programming education. In arguing for his formulation, Felleisen goes so far as to explain why Structure and Interpretation of Computer Programs -- touted by many, including me, as the best CS book ever written -- is not suitable for CS 1. (As much as I like SICP, Matthias is right.)

But TeachScheme! has not succeeded in the grand way its implementors might have hoped, for many of the reasons that Open Court's efforts have come up short of its founders' goals. Some of the reasons are cultural, some are historical, and some are probably strategic.

The story of Open Court is of immediate interest to me because of our state's push to change K-12 math and science education in a fundamental way, a reform effort in which my university plays a leading role and in which my department and I have a direct stake. We believe in the need for more and better computer scientists and software developers, but university CS enrollments remain sluggish. Students who are turned off to science, math, and intellectual ideas in grade school aren't likely to select CS as a major in college... Besides, like Carus, I have a great interest in raising the level of science and math understanding across the whole population.

This book taught me a lot about what I had understood only incompletely as an observer of our education system. And I appreciated that it avoided the typical -- and wrong -- conservative/liberal dichotomy between the traditional and progressive approaches. America's education culture is a beast all its own, essentially anti-intellectual and exhibiting an inertia borne out of expectations, habit, and a lack of will and time to change. Changing the system will take a much more sophisticated and patient approach than most people usually contemplate.

Though I have never developed a complete curriculum for CS 1 as Felleisen has, I have long aspired to teaching intro CS in a more holistic way, integrating the learning of essential tools with higher-level design skills, built on the concept of a pattern language. So Open Court's goals, methods, and results all intrigue me. Here are some of the ideas that caught my attention as I read the story of Open Court:

  • On variety in approach:

    A teacher must dare to be different! She must pull away from monotonous repetition of, 'Today we are going to write a story.' Most children view that announcement with exactly what it deserves, and there are few teachers who are not aware of what the reactions are.

    s/story/program/* and s/children/students/* to get a truth for CS instructors such as me.

  • A student is motivated to learn when her activities scratch her own itch.

  • Teaching techniques that may well transfer to my programming classroom: sentence lifting, writing for a real audience, proofreading and revising programs, and reading good literature from the beginning.

  • On a curriculum as a system of instruction:

    The quality of the Open Court program was a substantive strength and a marketing weakness. It required teachers to be conversant with a variety of methods. And the program worked best when used as a system... Teachers accustomed to trying a little of this and a little of that were likely to be put off by an approach that did not lend itself to tinkering.

    I guess I'm not the only person who has trouble sticking to the textbook. To be fair to us tinkerers, systematic integrated instructional design is so rare as to make tinkering a reasonable default stance.

  • On the real problem with new ideas for instruction:

    Thus Open Court's usual problem was not that it contradicted teachers' ideology, but that it violated their routine.

    Old habits die hard, if at all.

  • In the study of how children learn to comprehend text, schema theory embodied the "idea that people understand what they read by fitting it into patterns they already know". I believe that this is largely true for learning to understand programs, too.

  • Exercising existing skills does not constitute teaching.

  • Quoting a paper by reading scholars Bereiter and Scardamalia, the best alternative model for a school is

    ... the learned professions ...[in which] adaptation to the expectations of one's peers requires continual growth in knowledge and competence.

    In the professions, we focus not only on level of knowledge but also on the process of continuously getting better.

  • When we despair of revolutionary change:

    There are circumstances in which it is best to package our revolutionary aspirations in harmless-looking exteriors.... We should swallow our pride and let [teachers] think we are bland and acceptable. We should fool them and play to their complacent pieties. But we should never for a moment fool ourselves.

    Be careful what you pretend to be.

  • Too often, radical new techniques are converted into "subject matter" to be added to the curriculum. But in that form they usually become overgeneralized rules that fail too often to be compelling. More insidious is that this "makes it possible to incorporate new elements into a curriculum without changing the basic goals and strategies of the curriculum". Voilà -- change with no change.

    Then we obsess about covering all the material, made harder by the inclusion of all this new material.

    I've seen this happen to software patterns in CS classrooms. It was sad. I usually think that we elementary patterns folks haven't done a good enough job yet, but Open Court's and TeachScheme!'s experiences do not encourage me to think that it will be easy to do well enough.

  • On the value of a broad education: This book tells of how the people at Open Court who were focused exclusively on curriculum found themselves falling prey to invalid plans from marketers.

    In an academic metaphor, [Open Court's] people had been the liberal-arts students looking down on the business school. But now that they needed business-school expertise, they were unable to judge it critically.

    Maybe a so-called "liberal education" isn't broad enough if it leaves the recipient unable to think deeply in an essential new context. In today's world, both business and technology are essential components of a broadly applicable education.

  • On the need for designing custom literature suitable for instruction of complete novices:

    Teaching any skill requires repetition, and few great works of literature concentrate on long "e".

    My greatest inspiration in this vein is the Suzuki literature developed for teaching violin and later piano and other instruments. I've experienced the piano literature first hand and watched my daughters work deep into the violin literature. At the outset, both use existing works (folk tunes, primarily) whenever appropriate, but they also include carefully designed pieces that echo the great literature we aspire to for our students. As the student develops technical skill, the teaching literature moves wholly into the realm of works with connection to the broader culture. My daughters now play real violin compositions from real composers.

  • But designing a literature and curriculum for students is not enough. Ordinary teachers can implement more aggressive and challenging curricula only with help: help learning the new ideas, help implementing the ideas in classroom settings, and help breaking the habits and structures of standing practice. Let's Kill Dick and Jane is full of calls from teachers for more help in understanding Open Court's new ideas and in implementing them well in the face of teachers' lack of experience and understanding. Despite its best attempts, Open Court never quite did this well enough.

    TeachScheme! is a worthy model in this regard. It has worked hard to train teachers in its approach, to provide tangible support, and to build a community of teachers.

    In my own work on elementary patterns, my colleague and friend Robert Duvall continually reminds us all of the need for providing practical support to instructors who might adopt our ideas -- if only they had the instructional tools to make a big change in how they teach introductory programming.

  • ... which leads me to a closing observation. One of the big lessons I take from the book is that to effect change, one must understand and confront issues that exist in the world, not the issues that exist only in our idealized images of the world or in what are basically the political tracts of activists.

    In an odd way, this reminds me of Drawing on the Right Side of the Brain, with its precept that we should draw what we see, not what we think we see.

Let's Kill Dick and Jane is a slim volume, a mere 155 pages, and easy to read. It's not perfect, in either its writing or its analysis, but it tells an important story well. I recommend it to anyone with aspirations of changing how we teach computer science to students in the university or high school. I also recommend it to anyone who is at all interested in the quality of our educational system. In a democracy such as the U.S., that should be everyone.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

July 27, 2007 6:58 PM

A Nice Example of a Functional Programming Pattern

David Altenburg gives a nice example of a pattern you'll find in functional programs, which he calls Enumerate, Map, Filter, Accumulate. This isn't the typical snappy pattern name, but it does show deference to Chapter 2 of Structure and Interpretation of Computer Programs, which discusses in some detail this way of processing a stream of data. I like this blog entry because it illustrates nicely a difference between how functional programmers think and how OO programmers think. He even gives idiomatic Java and Ruby code to demonstrate the difference more clearly. Seeing the Ruby example also makes clear just how powerful first-class blocks can be in an OO language.
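
Altenburg's code is in Java and Ruby; here is a sketch of the same shape in Scheme, closer to SICP's own idiom. The code below is mine, not his: it computes the sum of the squares of the even numbers in an interval.

;; enumerate: generate the candidates
(define (enumerate-interval a b)
  (if (> a b)
      '()
      (cons a (enumerate-interval (+ a 1) b))))

;; map, filter, accumulate: transform, select, combine
;; (filter is standard in most Schemes; see SRFI 1)
(define (sum-of-even-squares a b)
  (apply +                                          ; accumulate
         (filter even?                              ; filter
                 (map (lambda (n) (* n n))          ; map
                      (enumerate-interval a b)))))  ; enumerate

> (sum-of-even-squares 1 10)
220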

SICP is a veritable catalog of the patterns of programs written in a functional style. They aren't written in Alexander's pattern style, but they are patterns nonetheless.

(In a snarkier mood, I might say: I am sure some Lisp aficionado will explain that Enumerate, Map, Filter, Accumulate isn't a pattern because we could define an enumapfilacc macro and make it go away...)


Posted by Eugene Wallingford | Permalink | Categories: Patterns

July 10, 2007 5:53 PM

Thinking Ahead to OOPSLA

OOPSLA 2007 logo

I haven't written much in anticipation of OOPSLA 2007, but not because I haven't been thinking about it. In years when I have had a role in content, such as the 2004 and 2005 Educators' Symposia or even the 2006 tutorials track, I have been excited to be deep in the ideas of a particular part of OOPSLA. This year I have blogged just once, about the December planning meeting. (I did write once from the spring planning meeting, but about a movie.) My work this year for the conference has been in an administrative role, as communications chair, which has focused on sessions and schedules and some web content. To be honest, I haven't done a very good job so far, but that is a subject for another post. For now, let's just say that I have not been a very good Mediator nor a good Facade.

I am excited about some of the new things we are doing this year to get the word out about the conference. At the top of the list is a podcast. Podcasts have been around for a while, but they are just now becoming a part of the promotional engine for many organizations. We figured that hearing about some of the cool stuff that will happen at OOPSLA this year would complement what you can read on the web. So we arranged to have two outfits, Software Engineering Radio and DimSumThinking, co-produce a series of episodes on some of the hot topics covered at this year's conference.

Our first episode, on a workshop titled No Silver Bullet: A Retrospective on the Essence and Accidents of Software Engineering, organized by Dennis Mancl, Steven Fraser, and Bill Opdyke, is on-line at the OOPSLA 2007 Podcast page. Stop by, give it a listen, and subscribe to the podcast's feed so that you don't miss any of the upcoming episodes. (We are available in iTunes, too.) We plan to roll new interviews out every 7-10 days for the next few months. Next up is a discussion of a Scala tutorial with Martin Odersky, due out on July 16.

If you would like to read a bit more about the conference, check out conference chair Richard Gabriel's The (Unofficial) How To Get Around OOPSLA Guide, and especially his ooPSLA Impressions. As I've written a few times, there really isn't another conference like OOPSLA. Richard's impressions page does a good job communicating just what makes it so, mostly in the words of people who've been to OOPSLA and seen it.

While putting together some of his podcast episodes, Daniel Steinberg of DimSumThinking ran into something different than usual: fun.

I've done three interviews for the oopsla podcast -- every interviewee has used the same word to describe OOPSLA: fun. I just thought that was notable -- I do a lot of this sort of thing and that's not generally a word that comes up to describe conferences.

And that fun comes on top of the ideas and the people you will encounter, ideas and people that will stretch you. We can't offer a Turing Award winner every year, but you may not notice with all the intellectual ferment. (And this year, we can offer John McCarthy as an invited speaker...)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

June 26, 2007 7:46 PM

Digging a Hole Just to Climb Out

I remember back in my early years teaching (*) I had a student who came in to ask a question about a particular error message she had received from our Pascal compiler. She had some idea of what caused it, and she wanted to know what it meant. It was a pretty advanced error, one we hadn't reached the point of making in class, so I asked her how she had managed to bump into it. Easy, she said; she was intentionally creating errors in her program so that she could figure out what sort of error messages would result.

If you teach programming for very long, you are bound to encounter such a student. She was doing fine in class but was afraid that she was having life too easy and so decided to use her time productively -- creating errors that she could learn to debug and maybe even anticipate.

I've written many times before about practice, including practice for practice's sake. That entry was about the idea of creating "valueless software", which one writes as a learning exercise, not for anyone's consumption. But my forward-thinking student was working more in the vein of fixin' what's broke, in which one practices in an area of known weakness, with the express purpose of making that part of one's game strong. My student didn't know many of the compiler error messages that she was going to face in the coming weeks, so she set out to learn them.

I think that she was actually practicing a simple form of an even more specific learning pattern: consciously seeking out, even creating, challenges to conquer. Make a Mess, Clean it Up! is a neat story about an example of this pattern in the history of the Macintosh team. There, Donn Denman talks about Burrell Smith's surprising way of getting better at Defender, a video game played by the Mac programmers as a way to relax or pump themselves up:

Instead of avoiding the tough situations, he'd immediately create them, and immediately start learning how to handle the worst situation imaginable. Pretty soon he would routinely handle anything the machine could throw at him.

He'd lose a few games along the way, but soon he was strong in areas of the game that his competitors may not have even encountered before -- because they had spent their time avoiding difficult situations! Denman saw in this game-playing behavior something he recognized in Smith's life as a programmer: he

... likes challenges so much that he actually seeks them out and consciously creates them. In the long run, this approach makes sense. He seems to aggressively set up challenging situations throughout his life. Then, when life throws him a curve ball, he'll swing hard, and knock it out of the park.

The article uses two metaphors for this pattern: make a mess so that you can clean it up, and choose to face tough situations so that you are ready for the curve balls life throws you. (I guess those tough situations must be akin to the nasty breaking stuff of a major-league pitcher.) My title for this entry offers a third metaphor: digging yourself into a hole so that you can learn how to get out of one. As much of a baseball fan as I was growing up, the metaphor of digging oneself into a hole was the more common. Whatever its name, the idea is the same.

I find that I'm more likely to apply this pattern in some parts of my life than others. In programming, I can recover from bad situations by re-compiling, re-booting, or at worst re-installing. In running, I can lose all the races I want, or come up short in a training session as often as I want -- so long as I don't put my body at undue risk. The rest of life, the parts that deal with other people, requires more care. It's hard to create an environment in which I can screw up my interpersonal relationships just so that I can learn how to get out of the mess. There's a different metaphor for such behavior -- burning bridges -- that connotes its irreversibility. Besides, it's not right to treat people as props in my game. I suppose that this is a place in which role play can help, though artificial situations can go only so far.

Where games, machines, and tools are concerned, though, digging a deep hole just for the challenge of getting out of it can be a powerful way to learn. Pretty soon, you can find yourself as master of the machine.

----

(*) Yikes, how old does that make me sound?


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

April 05, 2007 8:57 PM

Feats of Association

An idea is a feat of association.
-- Robert Frost

Yesterday I went to a talk by Roy Behrens, whose earlier talk I enjoyed very much and blogged about. That time he talked about teaching as a "subversive inactivity"; this time he spoke more on the topic of his scholarly interest: creativity and design, ideas and metaphors, similarities and differences, even camouflage! Given that these are his scholarly interests, I wasn't surprised that this talk touched on some of the same concepts as his teaching talk. There are natural connections between how ideas are formed at the nexus of similarity and difference and how one can best help people to learn. I found this talk energizing and challenging in a different sort of way.

In the preceding paragraph, I first wrote that Roy "spoke directly on the topic of his scholarly interest", but there was little direct about this talk. Instead, Roy gave us parallel streams of written passages and images from a variety of sources. This talk felt much like an issue of his commonplace book/journal Ballast Quarterly Review, which I have blogged about before. The effect was mesmerizing, and it served its intended purpose, illustrating his point: that the human mind is a connection-making machine, an almost unwilling creator of ideas that grow out of the stimuli it encounters. We all left the talk with new ideas forming.

I don't have a coherent, focused essay on this talk yet, but I do have a collection of thoughts that are in various stages of forming. I'll share what I have now, as much for my own benefit as for what value that may have to you.

Similarity and difference, the keys to metaphor, matter in the creation of software. James Coplien has written an important book that explicates the roles of commonality analysis and variability analysis in the design of software that can separate domain concerns into appropriate modules and evolve gracefully as domain requirements change. Commonality and variability; similarity and difference. As one of Roy's texts pointed out, the ability to recognize similarity and difference is common to all practical arts -- and to scientists, creators, and inventors.

The idea of metaphor in software isn't entirely metaphorical. See this paper by Noble, Biddle, and Tempero that considers how metaphor and metonymy relate to object-oriented design patterns. These creative fellows have explored the application of several ideas from the creative arts to computing, including deconstruction and postmodernism.

To close, Roy showed us the 1952 short film Blacktop: A Story of the Washing of a School Play Yard. And that's the story it told, "with beautiful slow camera strides, the washing of a blacktop with water and soap as it moves across the asphalt's painted lines". This film is an example of how to make something fabulous out of... nothing. I think the more usual term he used was "making the familiar strange". Earlier in his talk he had read the last sentence of this passage from Maslow (emphasis added):

For instance, one woman, uneducated, poor, a full-time housewife and mother, did none of these conventionally creative things and yet was a marvelous cook, mother, wife, and home-maker. With little money, her home was somehow always beautiful. She was a perfect hostess. Her meals were banquets, her taste in linens, silver, glass, crockery and furniture was impeccable. She was in all these areas original, novel, ingenious, unexpected, inventive. I learned from her and others like her that a first-rate soup is more creative than a second-rate painting, and that, generally, cooking or parenthood or making a home could be creative while poetry need not be; it could be uncreative.

Humble acts and humble materials can give birth to unimagined creativity. This is something of a theme for me in the design patterns world, where I tell people that even novices engage in creative design when they write the simplest of programs and where so-called elementary patterns are just as likely to give rise to creative programs as Factory or Decorator.

Behrens's talk touched on two other themes that run through my daily thoughts about software, design, and teaching. One dealt with tool-making, and the other with craft and limitations.

At one point during the Q-n-A after the talk, he reminisced about Let's Pretend, a radio show from his youth which told stories. The value to him as a young listener lay in forcing -- no, allowing -- him to create the world of the story in his own mind. Most of us these days are conditioned to approach an entertainment venue looking for something that has already been assembled for us, for the express purpose of entertaining ourselves. Creativity is lost when our minds never have the opportunity to create, and when our minds' ability to create atrophies from disuse. One of Roy's goals in teaching graphic design students is to help students see that they have the tools they need to create, to entertain.

This is true for artists, but in a very important sense it is true for computer science students, too. We can create. We can build our own tools--our own compilers, our own IDEs, our own applications, our own languages... anything we need! That is one of the great powers of learning computer science. We are in a new and powerful way masters of our own universe. That's one of the reasons I so enjoy teaching Programming Languages and compilers: because they confront CS students directly with the notion that their tools are programs just like any other. You never have to settle for less.

Finally, my favorite passage from Roy's talk plays right into my weakness for the relationship between writing and programming, and for the indispensable role of limitation in creativity and in learning how to create. From Anthony Burgess:

Art begins with craft, and there is no art until craft has been mastered. You can't create until you're willing to subordinate the creative impulses to the constriction of a form. But the learning of craft takes a long time, and we all think we're entitled to shortcuts.... Art is rare and sacred and hard work, and there ought to be a wall of fire around it.

One of my favorite of my blog posts is from March 2005, when I wrote a piece called Patterns as a Source of Freedom. Only in looking back now do I realize that I quoted Burgess there, too -- but only the sentence about willing subordination! I'm glad that Roy gave the context around that sentence yesterday, because it takes the quote beyond constriction of form to the notion of art growing out of craft. It then closes with that soaring allusion. Anyone who has felt even the slightest sense of creating something knows what Burgess means. We computer scientists may not like to admit that what we do is sometimes art, and that said art is rare and sacred, but that doesn't change reality.

Good talk -- worth much more in associations and ideas than the lunch hour it cost. My university is lucky to have Roy Behrens, and other thinkers like him, on our faculty.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 29, 2007 4:20 PM

It Seemed Like a Good Idea at the Time

At SIGCSE a couple of weeks ago, I attended an interesting pair of complementary panel sessions. I wrote about one, Ten Things I Wish They Would Have Told Me..., in near-real time. Its complement was a panel called "It Seemed Like a Good Idea at the Time". Here, courageous instructors got up in front of a large room full of their peers to do what for many is unthinkable: tell everyone about an idea they had that failed in practice. When the currency of your profession is creating good ideas, telling everyone about one of your bad ideas is unusual. Telling everyone that you implemented your bad idea and watched it explode, well, that's where the courage comes in.

My favorite story from the panel was from the poor guy who turned his students loose writing Java mail agents -- on his small college's production network. He even showed a short video of one of the college's IT guys describing the effects of the experiment, in a perfect deadpan. Hilarious.

We all like good examples that we can imitate. That's why we are all drawn to panels such as "Ten Things..." -- for material to steal. But other than the macabre humor we see in watching someone else's train wreck, what's the value in a panel full of bad examples?

The most immediate answer is that we may have had the same idea, and we can learn from someone else's bad example. We may decide to pitch the idea entirely, or to tweak our idea based on the bad results of our panelist. This is useful, but the space of ideas -- good and bad -- is large. There are lots of ways to tweak a bad idea, and not all of them result in a good idea. And knowing that an idea is bad leaves us with the real question unanswered: Just what should we do?

(The risk that the cocky among us face is the attitude that says, "Well, I can make that work. Just watch." This is the source of great material for the next "It Seemed Like a Good Idea at the Time" panel!)

All this said, I think that failed attempts are invaluable -- if we examine them in the right way. Someone at SIGCSE pointed out that negative examples help us to create a framework in which to validate hypotheses. This is how science learns from failed experiments. The idea isn't news to those of us who like to traffic in patterns. Bad attempts put us on the road to a pattern. We discover the pattern by using the negative example to identify the context in which our problem lies and the forces that drive a solution. Sometimes a bad idea really was a good idea -- had it only been applied in the proper context, where the forces at play would have resolved themselves differently. We usually only see these patterns after looking at many, many examples, both good and bad, and figuring out what makes them tick.

A lot of CS instruction aims to expose students to lots of examples, in class and in programming assignments. Too often, though, we leave students to discover context and forces on their own, or to learn them implicitly. This is one of the motivations of my work on elementary patterns: to help guide students in the process of finding patterns in their own and other people's experiences.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

March 22, 2007 6:53 PM

Patterns in Space and Sound -- Merce Cunningham

A couple of nights ago I was able to see a performance by the Merce Cunningham Dance Company here on campus. This was my first exposure to Cunningham, who is known for his exploration of patterns in space and sound. My knowledge of the dance world is limited, but I would call this "abstract dance". My wife, who has some background in dance, might call it something else, but not "classical"!

The company performed two pieces for us. The first was called eyeSpace, and it seemed the more avant garde of the two. The second, called Split Sides, exemplifies Cunningham's experimental mindset quite well. From the company's web site:

Split Sides is a work for the full company of fourteen dancers. Each design element was made in two parts, by one or two artists, or, in the case of the music, by two bands. The order in which each element is presented is determined by chance procedure at the time of the performance. Mathematically, there are thirty-two different possible versions of Split Sides.

And a mathematical chance it was. At intermission, the performing arts center's director came out on stage with five people, most local dancers, and a stand on which to roll a die. Each of the five assistants in turn rolled the die, to select the order of the five design elements in question: the pieces, the music, the costumes, the backgrounds, and a fifth element that I've forgotten. This ritual heightened the suspense for the audience, even though most of us probably had never seen Split Sides before, and must have added a little spice for the dancers, who do this piece on tour over and over.

In the end, I preferred the second dance and the second piece of music (by Radiohead), but I don't know to what extent this enjoyment derived from one of the elements or the two together. Overall, I enjoyed the whole show quite a bit.

Not being educated in dance, my take on this sort of performance is often different from the take of someone who is. In practice, I find that I enjoy abstract dance even more than classical. Perhaps this comes down to me being a computer scientist, an abstract thinker who enjoys getting lost in the patterns I see and hear on stage. A lot of fun comes in watching the symmetries being broken as the dance progresses and new patterns emerge.

Folks trained in music may sometimes feel differently, if only because the patterns we see in abstract dance are not the patterns they might expect to see!

Seeing the Merce company perform reminded me of a quote about musician Philip Glass, which I ran across in the newspaper while in Carefree for ChiliPLoP:

... repetition makes the music difficult to play.

"As a musician, you look at a Philip Glass score and it looks like absolutely nothing," says Mark Dix, violist with the Phoenix Symphony, who has played Glass music, including his Third String Quartet.

"It looks like it requires no technique, nothing demanding. However, in rehearsal, we immediately discovered the difficulty of playing something so repetitive over so long a time. There is a lot of room for error, just in counting. It's very easy to get lost, so your concentration level has to be very high to perform his music."

When we work in the common patterns of our discipline -- whether in dance, music, or software -- we free our attention to focus on the rest of the details of our task. When we work outside those patterns, we are forced to attend to details that we have likely forgotten even existed. That may make us uncomfortable, enough so that we return to the structure of the pattern language we know. That's not necessarily a bad thing, for it allows us to be productive in our work.

But there can be good in the discomfort of the broken pattern. One certainly learns to appreciate the patterns when they are gone. The experience can remind us why they are useful, and worth whatever effort they may require. The experience can also help us to see the boundaries of their usefulness, and maybe consider a combination, or see a new pattern.

Another possible benefit of working without the usual patterns is hidden in Dix's comments above. Without the patterns, we have to concentrate. This gives us a mechanism for attending to details and honing our concentration. I think it also allows us to focus on a new technique. Programming at the extremes, without an if-statement, say, forces you to exercise the other techniques you know. The result may be that you are a better user of polymorphism even after you return to the familiar patterns that include imperative selection.
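
To make that concrete, here is a tiny sketch of my own in Scheme (not from Dix, and certainly not from the Cunningham program!): represent each shape as a procedure, and the "selection" happens by invocation rather than by conditional.

;; Each constructor closes over its data and carries its own behavior.
(define (make-circle radius)
  (lambda () (* 3.14159 radius radius)))

(define (make-square side)
  (lambda () (* side side)))

;; No if-statement needed: applying the object selects the right code.
(define (area shape) (shape))

> (area (make-circle 2.0))
12.56636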

And I can still enjoy abstract dance and music as an outsider.

There is another, more direct connection between Cunningham's appearance and software. He has worked with developers to create a new kind of choreography software called DanceForms 1.0. While his troupe was in town, they asked the university to try to arrange visits with computer science classes to discuss their work. We had originally planned for them to visit our User Interface Design course and our CS I course (which has a media computation theme), but schedule changes on our end prevented that. I had looked forward to hearing Cunningham discuss what makes his product special, and to see how they had created "palettes of dance movement" that could be composed into dances. That sounds like a language, even if it doesn't have any curly braces.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 15, 2007 4:55 PM

Writing about Doing

Elvis Costello

Last week I ran across this quote by noted rocker Elvis Costello:

Writing about music is like dancing about architecture -- it's really a stupid thing to want to do.

My immediate reaction was an intense no. I'm not a dancer, so my reaction was almost exclusively to the idea of writing about music or, by extension, other creative activities. Writing is the residue of thinking, an outward manifestation of the mind exploring the world. It is also, we hope, occasionally a sign of the mind growing, and those who read can share in the growth.

I don't imagine that dancing is at all like writing in this respect.

Perhaps Costello meant specifically writing about music and other pure arts. But I did study architecture for a while, and so I know that architecture is not a pure art. It blends the artistic and purely creative with an unrelenting practical element: human livability. People have to be able to use the spaces that architects create. This duality means that there are two levels at which one can comment on architecture, the artistic and the functional. Costello might not think much of people writing about the former, but he may allow for the value in people writing about the latter.

I may be overthinking this short quote, but I think it might have made more sense for Costello to have made this analogy: "Writing about music is like designing a house about dancing ...". But that doesn't have any of the zip of his original!

I can think of one way in which Costello's original makes some sense. Perhaps it is taken out of context, and implicit in the context is the notion of only writing about music. When someone is only a critic of an art form, and not a doer of the art form, there is a real danger of becoming disconnected from what practitioners think, feel, and do. When the critic is disconnected from the reality of the domain, the writing loses some or all of its value. I still think it is possible for an especially able mind to write about an art without doing it, but that is a rare mind indeed.

What does all this have to do with a blog about software and teaching? I find great value in many people's writing about software and about teaching. I've learned a lot about how to build more expressive, more concise, and more powerful software from people who have shared their experiences writing such software. The whole software patterns movement is founded upon the idea that we should share our experiences of what works, when and why. The pedagogical patterns community and the SIGCSE community do the same for teachers. Patterns really do have to be founded in experience, so "only" writing patterns without practicing the craft turns out to be a hollow exercise for both the reader and the writer, but writing about the craft is an essential way for us to share knowledge. I think we can share knowledge both of the practical, functional parts of software and teaching and of the artistic element -- what it is to make software that people want to read and reuse, to make courses that people want to take. In these arts, beauty affects functionality in a way that we often forget.

I don't yet have an appetite for dancing about software, but my mind is open on the subject.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 21, 2007 5:50 PM

ChiliPLoP 2007 Redux

Our working group had its most productive ChiliPLoP in recent memory this year. The work we did isn't ready for public consumption, so I can't post a link just yet, but I am hopeful that we will be able to share our results with interested educators soon enough. For now, a summary.

This year, we made substantial progress toward producing a well-documented resource for instructors who want to teach Java and OOP. As our working paper begins:

The following set of exercises builds up a simple application over a number of iterations. The purpose is to demonstrate, in a small program, most of the key features of object-oriented programming in Java within a period of two to three weeks. The course can then delve more deeply into each of the topics introduced, in whatever order the instructor deems appropriate.

An instructor can use this example to lay a thin but complete foundation in object-oriented Java for an intro course within the first few weeks of the semester. By introducing many different ideas in a simple way early, the later elements of the course can be ordered at the instructor's discretion. So many other approaches to teaching CS 1 create strict dependencies between topics and language constructs, which limits the instructor's approach over the course of the whole semester. The result is that most instructors won't adopt a new approach, because they either cannot or do not want to be tied down for the whole semester. We hope that our example enables instructors to do OO early while freeing them to build the rest of their course in a way that fits their style, their strengths and interests, and their institution's curriculum.

Our longer-term goal is that this resource serve as a good example for ourselves and for others who would like to document teaching modules and share them with others. By reducing external dependencies to a minimum, such modules should assist instructors in assembling courses that use good exercises, code, and OO programming practice.

... but where are the patterns? Isn't a PLoP conference about patterns? Yes, indeed, and that is one reason that I'm more excited about the work we did this week than I have been in a while. By starting with a very simple little exercise, growing progressively into an interesting simulation via short, simple steps, we have assembled both a paradigmatic OO CS 1 program and the sequence of changes necessary to grow it. To me, this is an essential step in identifying the pattern language that generates the program. I may be a bit premature, but I feel as if we are very close to having documented a pattern sequence in the Alexandrian sense. Such a pattern sequence is an essential part of a pattern-oriented approach to design, and one that only a few people -- Neil Harrison and Jim Coplien -- have written much about. And, like Alexander's idea of pattern diagnosis, pattern sequences will, I think, play a valuable role in how we teach pattern-directed design. My self-assigned task is to explore this extension of our ChiliPLoP work while the group works on filling in some details and completing our public presentation.

One interesting socio-technical experiment we ran this week was to write collaboratively using a Google doc. I'm still not much of a fan of browser-based apps, especially word processors, but this worked out reasonably well for us. It was fast, performed autosaves in small increments, and did a great job handling the few edit conflicts we caused in two-plus days. We'll probably continue to work in this doc for a few more weeks, before we consider migrating the document to a web page that we can edit and style directly.

Two weeks from today, the whole crew of us will be off to SIGCSE 2007, an unusual opportunity to follow up on our ChiliPLoP work and hold ourselves accountable for not losing momentum. Of course, two weeks back home would ordinarily be enough to wipe my mind clear of any momentum I have built up, so I will need to be on guard!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 06, 2007 10:31 PM

Basic Concepts: The Unity of Data and Program

I remember vividly a particular moment of understanding that I experienced in graduate school. As I mentioned last time, I was studying knowledge-based systems, and one of the classic papers we read was William Clancey's Heuristic Classification. This paper described an abstract decomposition for classification programs, the basis of diagnostic systems, that was what we today would call a pattern. It gave us the prototype against which we could pattern our own analysis of problem-solving types.

In this paper, Clancey discussed how configuration and planning are two points of view on the same problem, design. A configuration system produces as output an artifact capable of producing state changes in some context; a planning system takes such an artifact as input. A configuration system takes as input a sequence of desired state changes, to be produced by the configured system; a planning system produces a sequence of operations that produces the desired state changes in the given artifact. Thus, the same kind of system could produce either a thing -- an artifact -- or a process that operates on an artifact. In a certain sense, things and processes were the same kind of entity.

Wow. Design and planning systems could be characterized by similar software patterns. I felt like I'd been shown a new world.

Later I learned that this interplay between thing and process ran much deeper. Consider this Lisp (or Scheme) "thing", a data value known as a list:

(a 1 2)

If I replace the symbol "a" with the symbol "+", I still have a Lisp list of size 3:

(+ 1 2)

But this Lisp list is also the Lisp program for computing the sum of 1 and 2! If I give this program to a Lisp interpreter, I will see the result:

> (+ 1 2)
3

In Lisp, there is no distinction between data and program. Indeed, this is true for C, Java, or any other programming language. But the syntax of Lisp (and especially Scheme) is so simple and uniform that the unity of data and program stands out starkly. It also makes Scheme a natural language to use in a course on the principles of programming languages. The syntax and semantics of Lisp programs are so uniform that one can write a Lisp interpreter in about a page of Lisp code. (If you'd like, take a look at my implementation of John McCarthy's Lisp-in-Lisp, in Scheme, based on Paul Graham's essay The Roots of Lisp. If you haven't read that paper, please do soon.)
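
To give a taste of how such an interpreter begins -- this toy is mine, a far cry from McCarthy's eval -- consider an evaluator for just the arithmetic subset above:

;; A toy evaluator for lists like (+ 1 (* 2 3)). Real Lisp-in-Lisp
;; handles symbols, quote, cond, lambda, and so on; this does not.
(define (tiny-eval expr)
  (cond ((number? expr) expr)
        ((pair? expr)
         (let ((op   (car expr))
               (args (map tiny-eval (cdr expr))))
           (case op
             ((+) (apply + args))
             ((*) (apply * args))
             (else (error "unknown operator" op)))))
        (else (error "unknown expression" expr))))

> (tiny-eval '(+ 1 (* 2 3)))
7

The program is data right up to the moment we choose to run it.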

There is no distinction between data and program. This is one of the truly beautiful ideas in computer science. It runs through everything that we do, from von Neumann's stored-program computer itself to the implementation of a virtual machine that lets Java run inside a web browser.

A related idea is the notion that programs can exist at arbitrary levels of abstraction. For each level at which a program is data to another program, there is yet another program whose behavior is to produce that data.

  • An assembler produces machine language from assembly language.

  • A compiler produces assembly language from a high-level program.

  • The program-processing programs themselves can be produced. From a language grammar, lex and yacc produce components of the compiler.

  • One computer can pretend to be another. Windows can emulate DOS programs. Mac OS X can run old-style OS 9 programs in its Classic mode. Intel-based Macs can run programs compiled for PowerPC-based Macs in Rosetta emulation mode.

One of the lessons of computer science is that "machine" is an abstract idea. Everything can be interpreted by someone -- or something -- else.

I don't know enough of the history of mathematics or natural philosophy to say to what extent these ideas are computer science's contributions to our body of knowledge. On the one hand, I'm sure that deep thinkers throughout history at least had reason and resource to make some of the connections between thing and process, between design and planning. On the other, I imagine that before we had digital computers at our disposal, we probably didn't have sufficient vocabulary or the circumstances needed to explore issues of natural language to the level of program versus data, or of languages being processed from abstract to concrete, down to the details of a particular piece of hardware. Church. Turing. Chomsky. McCarthy. These are the men who discovered the fundamental truths of language, data, and program, and who laid the foundations of our discipline.

At first, I wondered why I hadn't learned this small set of ideas as an undergraduate student. In retrospect, I'm not surprised. My alma mater's CS program was aimed at applications programming, taught a standard survey-style programming languages course, and didn't offer a compilers course. Whatever our students here learn about the practical skills of building software, I hope that they also have the chance to learn about some of the beautiful ideas that make computer science an essential discipline in the science of the 21st century.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns

February 05, 2007 8:46 PM

Programming Patterns and "The Conciseness Conjecture"

For most of my research career, I have been studying patterns in software. In the beginning I didn't think of it in these terms. I was a graduate student in AI doing work in knowledge-based systems, and our lab worked on so-called generic tasks, little molecules of task-specific design that composed into systems with expert behavior. My first uses of the term "pattern" were home-grown, motivated by a desire to help novice programmers recognize and implement the basic structures that made up their Pascal, Basic, and C programs. In the mid-1990s I came into contact with the work of Christopher Alexander and the software patterns community, and I began to write up patterns of both sorts, including Sponsor-Selector, loops, Structured Matcher, and even elementary patterns of Scheme programming for recursion. My interest turned to the patterns at the interface between different programming styles, such as object-oriented and functional programming. It seemed to me that many object-oriented design patterns implemented constructs that were available more immediately in functional languages, and I wondered whether patterns in any one style would reflect the linguistic shortcomings of that style, implementing ideas available directly in another style.

My work in this area has always been more phenomenological than mathematical, despite occasional short side excursions into promising mathematics like group and category theory. I recall a 6- to 9-month period five or six years ago when a graduate student and I looked at group theory and symmetries as a possible theoretical foundation for characterizing relationships among patterns. I think that this work still holds promise, but I have not had a chance to take it any further.

Only recently did I finally read Matthias Felleisen's 1991 paper On the Expressive Power of Programming Languages. I should have read it sooner! This paper develops a framework for comparing languages on the basis of their expressive power and then applies it to many of the issues relevant to language research of the day. One section in particular speaks to my interest in programming patterns and shows how Felleisen's framework can help us to discern the relative merits of more expressive languages and the patterns that they embody. This section is called "The Conciseness Conjecture". Here is the conjecture itself:

Programs in more expressive programming languages that use the additional features in a sensible manner contain fewer programming patterns than equivalent programs in less expressive languages.

Felleisen gives a couple of examples to illustrate his conjecture, including one in which Scheme with assignment statements realizes the implementation of a stateful object more concisely, more clearly, and with less convention than a purely functional subset of Scheme. This is just the sort of example that led me to wonder whether functional programming's patterns, like OOP's patterns, embody ideas that are directly expressible in another style's languages -- a Scheme extended with a simple object-oriented facility would make the implementation of Felleisen's transaction manager even clearer than the stateful lambda expression that switches on transaction types.
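
Here is the flavor of that example, reconstructed from memory rather than quoted from the paper: with set!, a single closure holds the state and switches on the transaction type.

;; A stateful "object" as a closure over its balance.
(define (make-account balance)
  (lambda (transaction amount)
    (cond ((eq? transaction 'deposit)
           (set! balance (+ balance amount))
           balance)
          ((eq? transaction 'withdraw)
           (set! balance (- balance amount))
           balance)
          (else (error "unknown transaction" transaction)))))

> (define acct (make-account 100))
> (acct 'deposit 50)
150

Without assignment, the balance would have to be threaded explicitly through every call -- a programming pattern, in exactly Felleisen's sense, that the more expressive language absorbs.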

Stated as starkly as it is, I am not certain I believe the conjecture. Well, that's not quite true, because in one sense it is obviously true. A more expressive language allows us to write more concise code, and less code almost always means fewer patterns. This is true, of course, because the patterns reside in the code. I say "almost always" because there is an alternative to fewer patterns in smaller code: the same number of patterns, or more, in denser code!

If we qualify "fewer programming patterns" as "fewer lower-level programming patterns", then I most certainly believe Felleisen's conjecture. I think that this paper makes an important contribution to the study of software patterns by giving us a vocabulary and mechanism for talking about languages in terms of the trade-off between expressiveness and patterns. I doubt that Felleisen intended this, because his section on the Conciseness Conjecture confirms his uneasiness with pattern-driven programming. "The most disturbing consequence" of programming patterns, he writes, "is that they are an obstacle to understanding of programs for both human readers and program-processing programs." For him, an important result of his paper is to formalize how the most important use of expressive languages "seems to be the ability to abstract from programming patterns with simple statements and to state the purpose of a program in the concisest possible manner."

This brings me back to the notions of "concise" and "dense". I appreciate the goal of using the most abstract language possible to write programs, in order to state as unambiguously and with as little text as possible the purpose and behavior of a program. I love to show my students how, after learning the basics of recursive programming, they can implement a higher-order operation such as fold to eliminate the explicit recursion from their programs entirely. What power! All because they are using a language expressive enough to allow higher-order procedures. Once you understand the abstraction of folding, you can write much more concise code.
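
In Scheme, the progression looks something like this (my usual classroom sketch, with fold defined from scratch rather than taken from a library):

;; Explicit recursion, the way students first write it:
(define (sum lst)
  (if (null? lst)
      0
      (+ (car lst) (sum (cdr lst)))))

;; Name the recursion pattern once, as a higher-order procedure...
(define (fold combine init lst)
  (if (null? lst)
      init
      (combine (car lst) (fold combine init (cdr lst)))))

;; ...and the explicit recursion disappears from the client code.
;; These definitions replace the sum above.
(define (sum     lst) (fold + 0 lst))
(define (product lst) (fold * 1 lst))

> (sum '(1 2 3 4))
10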

Where is the down side? Increasing concision ultimately leads to a trade-off with understandability. Felleisen points to the danger that dispersion poses for code readers: in the worst case, it requires a global understanding of the program to see how the program embodies a pattern. But at the other end of the spectrum is the danger posed by concision: density. In the worst case, the code is so dense as to overwhelm the reader. If density were an unadulterated good, we would all be programming in a descendant of APL! The density of the abstractions one finds in the code is often the reason "practical" programmers cite for not embracing functional programming. It is arrogant of us to imply that those who do not embrace Scheme and Haskell are simply not bright enough to be programmers. Our first responsibility is to develop better ways to teach programmers these skills, a challenge that Felleisen and his TeachScheme! brigade have accepted with great energy. The second is to consider the trade-off between concision and dispersion in a program's understandability.

Until we reach the purest sense of declarative programming, all programs will have patterns. These patterns are the recurring structures that programmers build within their chosen style and language to implement behaviors not directly supported by the language. The patterns literature describes what to build, in a given set of circumstances, along with some idea of how to build the what in a way that makes the most of the circumstances.

I will be studying "On the Expressive Power of Programming Languages" again in the next few months. I think it has a lot more to teach me.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

January 25, 2007 4:52 PM

It's Not About Me

Our dean is relatively new in his job, and one of the tasks he has been learning about is fundraising. It seems that this is one of the primary roles that deans and other top-level administrators have to fill in these days of falling state funding and rising costs.

This morning he gave his department heads a short excerpt from the book Asking: A 59-Minute Guide to Everything Board Members, Volunteers, and Staff Must Know to Secure the Gift. Despite the massive title, apparently each chapter of the book is but a few pages, focused on a key lesson. The excerpt we received is built on this kernel: Donors give to the magic of an idea. Donors don't give to you because you need money. Everybody needs money. Donors give because there is something you can do.

For some reason, this struck me as a lesson I have learned over the last few years in a number of different forms, in a number of different contexts. I might summarize the pattern as "It's not about me." Donors don't give because I need; they give because I can do something, something that matters out there. In the realm of interpersonal communication, the hearer is the final determinant of what is communicated. Listeners don't hear what I say; they hear what they understand me to have said. The blog Creating Passionate Users often talks about how selling my book or software is about empowering my users -- not about me, or any of the technical things that matter to me. The same applies to teachers. While in an important sense my Programming Languages course is about the content we want students to learn, in a very practical sense the course is about my students: what they need and why, how they learn, and what motivates them. Attending only to my interests or to the antiseptic interests of the "curriculum" is a recipe for a course almost guaranteed not to succeed.

Let's try this on. "Students don't learn because you think they need something. They learn because there is something they can do with their new knowledge." They learn because of the magic of an idea.

That sounds about right to me. Like any pattern taken too far in a new context, this one fails if I overreach, but it does a pretty good job of capturing a basic truth about teaching and learning. Given the many forms this idea seems to take in so many contexts, I think it is just the sort of thing we mean by a pattern.


Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Patterns, Teaching and Learning

October 26, 2006 4:28 PM

OOPSLA Day 2: Jim Waldo "On System Design"

Last year, OOPSLA introduced another new track called Essays. This track shared a motivation with another recent introduction, Onward!, in providing an avenue for people to write and present important ideas that cannot find a home in the research-oriented technical program of the usual academic conference. But not all advances come in the form of novelty, of results from narrowly-defined scientific experiments. Some empirical results are the fruit of experience, reflection, and writing. The Essays track offers an avenue for sharing this sort of learning. The author writes an essay in the spirit of Montaigne and Bacon, using the writing to work out an understanding of his experience. He then presents the essay to the OOPSLA audience, followed by a thoughtful response from a knowledgeable person who has read the essay and thought about the ideas. (Essays crossed my path in a different arena last week, when I blogged a bit on the idea of blog as essay.)

Jim Waldo, a distinguished engineer at Sun, presented the first essay of OOPSLA 2006, titled "On System Design". He reflected on his many years as a software developer and lead, trying to get a handle on what he now believes about the design of software.

What is a "system"? To Waldo, it is not "just a program" in the sense he thinks is meant by the postmodern programming crowd, but a collection of programs. It exists at many levels of scale that must be created and related; hence the need for us to define and manage abstractions.

Software design is a craft, in the classical sense. Many of Waldo's engineer friends are appalled at what software engineers think of as engineering (essentially the application of patterns to roll out product in a reliable, replicable way) because what engineers really do involves a lot of intuition and craft.

Software design is about technique, not facts. Learning technique takes time, because it requires trial and error and criticism. It requires patience -- and faith.

Waldo paraphrased Grady Booch as having said that the best benefit of the Rational toolset was that it gave developers "a way to look like they were doing something while they had time to think".

The traditional way to learn system design is via apprenticeship. Waldo usually asks designers he respects who they apprenticed with. At first he feared that at least a few would look at him oddly, not understanding the question. But he was surprised to find that every person answered without batting an eye. They all not only understood the question but had an immediate answer. He was also surprised to hear the same few names over and over. This may reflect Waldo moving in particular circles -- or the existence of only a small set of master software developers out there!

In recent years, Waldo has despaired of the lack of time, patience, and faith shown in industry for developing developers. Is all lost? No. In reflecting on this topic and discussing with readers of his early drafts, Waldo sees hope in two parts of the software world: open source and extreme programming.

Consider open source. It has a built-in meritocracy, with masters at the top of the pyramid, controlling the growth and design of their systems. New developers learn from example -- the full source of the system being built. Developers face real criticism and have the time and opportunity to learn and improve.

Consider extreme programming. Waldo is not a fan of the agile approaches and doesn't think that the features on which they are sold are where they offer the most. It isn't the illusion of short-term cycles or of the incrementalism that grows a big ball of mud that gives him hope. In reality, the agile approaches are based in a communal process that builds systems over time, giving people the time to think and share, mentor and learn. Criticism is built into the process. The system is an open, growing example.

Waldo concludes that we can't teach system design in a class. As an adjunct professor, he believes that system design skills aren't a curriculum component but a curricular outcome. Brian Marick, the discussant on Waldo's essay, took a cynical turn: No one should be allowed to teach students system design if they haven't been a committer to a large open-source project. (Presumably, having had experience building "real" big systems in another context would suffice.) More seriously, Marick suggested that it is only for recent historical reasons that we would turn to academia to solve the problem of producing software designers.

I've long been a proponent of apprenticeship as a way of learning to program, but Waldo is right that doing this as a part of the typical university structure is hard, if not impossible. We heard about a short-lived attempt to do this at last year's OOPSLA, but a lot of work remains. Perhaps if more people like Waldo, not just the more provocative folks at OOPSLA, start talking openly about this we might be able to make some progress.

Bonus reading reference: Waldo is trained more broadly as a philosopher, and made perhaps a surprising recommendation for a great document on the act of designing a new sort of large system: The Federalist Papers. This recommendation is a beautiful example of the value in a broad education. The Federalist Papers are often taught in American political science courses, but from a different perspective. A computer scientist or other thinker about the design of systems can open a whole new vista on this sort of document. Here's a neat idea: a system design course team taught by a computer scientist and a political scientist, with The Federalist Papers as a major reading!

Now, how to make system design as a skill an inextricable element of all our courses, so that an outcome of our major is that students know the technique? ("Know how", not "know that".)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

October 26, 2006 3:46 PM

OOPSLA Day 2: Guy Steele on Fortress

The first two sentences of Guy Steele's OOPSLA 2006 keynote this morning were eerily reminiscent of his justly famous OOPSLA 1998 invited talk, Growing a Language. I'm pretty sure that, for those few seconds, he used only one-syllable words!

Guy Steele

This talk, A Growable Language, was related to that talk, but in a more practical sense. It applied the spirit and ideas expressed in that talk to a new language. His team at Sun is designing Fortress, a "growable language" with the motto "To Do for Fortran What Java Did for C".

The aim of Fortress is to support high-performance scientific and engineering programming without carrying forward the historical accidents of Fortran. Among the additions to Fortran will be extensive libraries, including networking, a security model, type safety, dynamic compilation (to enable the optimization of a running program), multithreading, and platform independence. The project is being funded by DARPA with the goal of improving programmer productivity for writing scientific and engineering applications -- to reduce the time between when a programmer receives a problem and when the programmer delivers the answer, rather than focusing solely on the speed of the compiler or executable. Those are important, too, but we are shortsighted in thinking that they are the only forms of speed that matter. (Other DARPA projects in this vein are the X10 language from IBM and the Chapel language from Cray.)

Given that even desktop computers are moving toward multicore chips, this sort of project offers potential value beyond the scientific programming community.

The key ideas behind the Fortress project are threefold:

  • Don't build a language; grow it piecemeal.
  • Create a programming notation that is more like the mathematical notation that this programmer community uses.
  • Make parallelism the default way of thinking.

Steele reminded his audience of the motivation for growing a language from his first talk: If you plan for a language, and then design it, then build it, you will probably miss the optimal window of opportunity for the language. One of the guiding questions of the Fortress project is, Will designing for the growth of a language and its user community change the technical decisions the team makes or, more importantly, the way it makes them?

One of the first technical questions the team faced was what set of primitive data types to build into the language. Integer and float -- but what sizes? Data aggregates? Quaternions, octonions? Physical units such as meter and kilogram? He "might say 'yes' to all of them, but he must say no to some of them." Which ones -- and why?

The strategy of the team is to, wherever possible, add a desired feature via a library -- and to give library designers substantial control over both the semantics and the syntax of the library. The result is a two-level language design: a set of features to support library designers and a set of features to support application programmers. The former have turned out to be quite object-oriented, while the latter are not object-oriented at all -- something of a surprise to the team.

At this time, the language defines some very cool types in libraries: lists, vectors, sets, maps (with a better, more math-like notation), matrices and multidimensional vectors, and units of measurement. The language also offers as a feature mathematical typography, using a wiki-style mark-up to denote Unicode characters beyond what's available on the ASCII keyboard.

In the old model for designing a language, the designers

  • study applications
  • add language features to support application developers

In the new model, though, designers

  • study applications
  • add language features to support library designers in creating the desired features
  • let library designers create a library that supports application developers
At a strategic level, the Fortress team wants to avoid creating a monolithic "standard library", even when taking into account the user-defined libraries created by a single team or by many. Their idea is instead to treat libraries as replaceable components, perhaps with different versions. Steele says that Fortress effectively has make and svn built into its toolset!

I can just hear some of my old-school colleagues decrying this "obvious bloat", which must surely degrade the language's performance. Steele and his colleagues have worked hard to make abstraction efficient in a way that surpasses many of today's languages, via aggressive static and dynamic optimization. We OO programmers have come to accept that our environments can offer decent efficiency while still having features that make us more productive. The challenge facing Fortress is to sell this mindset to C and Fortran programmers with habits built over 10, 20, even 40 years of thinking that you have to avoid procedure calls and abstract data types in order to ensure optimal performance.

The first category of features in Fortress is intended to support library developers. The list is an impressive integration of ideas from many corners of the programming language community, including first-class types, traits and trait descriptors (where, comprise, exclude), multiple inheritance of code but not fields, and type contracts. In this part of the language definition, knowledge that used to be buried in the compiler is brought out explicitly into code, where it can be examined by programmers, reasoned over by the type inference system, and used by library designers. But these features are not intended for use by application programmers.

In order to support application developers, Steele and his team watched (scientific) programmers scribble on their white boards and then tried to convert as much of what they saw as possible into their language. For example, Fortress takes advantage of subtle whitespace cues, as in phrases such as

{ |x| | x ← S, 3 | x }

Note the four different uses of the vertical bar, disambiguated in part by the whitespace in the expression.

The wired-in syntax of Fortress consists of some standard notation from programming and math:

  • () for grouping
  • , for separating values in a tuple
  • ; for separating statements on a line
  • . for selecting fields and methods
  • conservative, traditional rules of precedence

Any other operator can be defined as infix, prefix, or postfix. For example, ! is defined as a postfix operator for factorial. Similarly, juxtaposition is a binary operator, one which can be defined by the library designer for her own types. Even nicer for the scientific programmer, the compiler knows that the juxtaposition of functions is itself a function (composition!).
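
Scheme has no juxtaposition operator, but a two-line sketch (mine, not Fortress's) shows the reading the Fortress compiler applies automatically when two functions sit side by side:

  ;; "f g" read as composition: apply g, then f.
  (define (compose f g)
    (lambda (x) (f (g x))))

  (define (square x) (* x x))
  ((compose square abs) -3)   ; => 9, i.e., (square (abs -3))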

The syntax of Fortress is rich and consistent with how scientific programmers think. But they don't think much about "data types", and Fortress supports that, too. The goal is for library designers to think about types a lot, but application programmers should be able to do their thing with type inference filling in most of the type information.

Finally, scientists, engineers, and mathematicians use particular textual conventions -- fonts, characters, layout -- to communicate. Fortress allows programmers to post-process their code into a beautiful mathematical presentation. Of course, this idea and even its implementation are not new, but the question for the Fortress team was what it would be like if a language were designed with this downstream presentation as the primary mode of presentation?

The last section of Steele's talk looked a bit more like a POPL or ICFP paper, as he explained the theoretical foundations underlying Fortress's biggest challenge: mediating the language's abstractions down to efficient executable code for parallel scientific computation. Steele asserted that parallel programming is not a feature but rather a pragmatic compromise. Programmers do not think naturally in parallel and so need language support. Fortress is an experiment in making parallelism the default mode of computation.

Steele's example focused on the loop, which in most languages conflates two ideas: "do this statement (or block) multiple times" and "do things in this order". In Fortress, the loop itself means only the former; by default its iterations can be parallelized. In order to force sequencing, the programmer modifies the interval of the loop (the range of values that "counts" the loop) with the seq operator. So, rather than annotating code to get parallel compilation, we must annotate to get sequential compilation.
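
Here is a toy model of the idea in Scheme -- my own illustration, not Fortress semantics. The range, not the loop, decides iteration order: an unmarked range may be visited in any order the implementation likes (reversed here, just to make the point), while seq pins the order down:

  (define (range lo hi)                  ; the values that "count" the loop
    (if (> lo hi) '() (cons lo (range (+ lo 1) hi))))

  (define (seq r) (cons 'seq r))         ; annotation: iterate in order

  (define (for r body)
    (if (and (pair? r) (eq? (car r) 'seq))
        (for-each body (cdr r))          ; sequential: order guaranteed
        (for-each body (reverse r))))    ; stand-in for "any order at all"

  (for (range 1 3) display)              ; may print 321 -- order unspecified
  (for (seq (range 1 3)) display)        ; prints 123 -- order demanded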

Fortress uses the idea of generators and reducers -- functions that produce and manipulate, respectively, data structures like sequences and trees -- as the basis of the program transformations from Fortress source code down to executable code. There are many implementations of these generators and reducers, some sequential and some not.
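
A sketch of the generator/reducer idea, again in Scheme and again my own simplification: a generator builds structure, a reducer combines it with an operator, and the operator's associativity is what would let an implementation reduce the two halves of each node in parallel:

  (define (gen lo hi)                    ; generate lo..hi as a balanced tree
    (if (= lo hi)
        lo
        (let ((mid (quotient (+ lo hi) 2)))
          (cons (gen lo mid) (gen (+ mid 1) hi)))))

  (define (reduce op tree)               ; the two halves are independent work
    (if (pair? tree)
        (op (reduce op (car tree)) (reduce op (cdr tree)))
        tree))

  (reduce + (gen 1 100))   ; => 5050, by a sequential or a parallel walk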

From here Steele made a "deep dive" into how generators and reducers are used to implement parallelism efficiently. That discussion is well beyond what I can write here. Besides, I will have to study the transformations more closely before I can explain them well.

As Steele wrapped up, he reiterated The Big Idea that guides the Fortress team: to expose algorithm and design decisions in libraries rather than bury them in the compiler -- but to bury them in the libraries rather than expose them in application code. It's an experiment that many of us are eager to see run.

One question from the crowd zeroed in on the danger of dialect. When library designers are able to create such powerful extensions, with different syntactic notations, isn't there a danger that different libraries will implement similar ideas (or different ideas) incompatibly? Yes, Steele acknowledged, that is a real danger. He hopes that the Fortress community will grow with library designers thinking of themselves as language designers, exercising restraint in the extensions they make and working together to create community standards.

I also learned about a new idea that I need to read about... the BOOM hierarchy. My memory is vague, but the discussion involved considering whether a particular operation -- which one, I can't remember -- is associative, commutative, and idempotent. There are, of course, eight possible combinations of these properties, four of which are meaningful (tree, list, bag/multiset, and set). One corresponds to an idea that Steele termed a "mobile", and the rest are, in his terms, "weird". I gotta read more!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

October 24, 2006 11:31 PM

OOPSLA Day 1: Brenda Laurel on Designed Animism

Brenda Laurel

Brenda Laurel is well known in computing, especially the computer-human interaction community, for her books The Art of Human-Computer Interface Design and the iconic Computers as Theatre. I have felt a personal connection to her for a few years, since an OOPSLA a few years ago when I bought Brenda's Utopian Entrepreneur, which describes her part in starting Purple Moon, a software company to produce empowering computer games for young girls. That sense of connection grew this morning as I prepared this article, when I learned that Brenda's mom is from Middletown, Indiana, less than an hour from my birthplace, and just off the road I drove so many times between my last Hoosier hometown and my university.

Laurel opened this year's OOPSLA as the Onward! keynote speaker, with a talk titled, "designed animism: poetics for a new world". Like many OOPSLA keynotes, this one covered a lot of ground that was new to me, and I can remember only a bit -- plus what I wrote down in real time.

These days, Laurel's interests lie in pervasive, ambient computing. (She recently gave a talk much like this one at UbiComp 2006.) Unlike most folks in that community, her goal is not ubiquitous computing as primarily utilitarian, with its issues of centralized control, privacy, and trust. Her interest is in pleasure. She self-effacingly attributed this move to the design tactic of "finding the void", the less populated portion of the design space, but she need not apologize; creating artifacts and spaces for human enjoyment is a noble goal -- a necessary part of our charter -- in its own right. In particular, Brenda is interested in the design of games in which real people are characters at play.

Dutch Windmill Near Amsterdam, Owen Merton, 1919

(Aside: One of Brenda's earliest slides showed this painting, "Dutch Windmill Near Amsterdam" by Owen Merton (1919). In finding the image I learned that Merton was the father of Thomas Merton, the Buddhist-inspired Catholic monk whom I have quoted here before. Small world!)

Laurel has long considered how we might extend Aristotle's poetics to understand and create interactive form. In the Poetics, Aristotle "set down ... an understanding of narrative forms, based upon notions of the nature and intricate relations of various elements of structure and causation. Drama relied upon performance to represent action." Interactive systems complicate matters relative to Greek drama, and ubiquitous computing "for pleasure" is yet another matter altogether.

To start, drawing on Aristotle, I think, Brenda listed the four kinds of cause of a created thing (at this point, we were thinking drama):

  • the end cause -- its intended purpose
  • the formal cause -- the platonic ideal in the mind of the creator that shaped the creation
  • the efficient cause -- the designer herself
  • the material cause -- the stuff out of which it is made, which constrains and defines the thing

In an important sense, the material and formal causes work in opposite directions with respect to dramatic design. The effects of the material cause work bottom-up from material to pattern on up to the abstract sense of the thing, while the effects of the formal cause work top-down from the ideal to components on to the materials we use.

Next, Brenda talked about plot structure and the "shape of experience". The typical shape is a triangle, a sequence of complications that build tension followed by a sequence of resolutions that return us to our balance point. But if we look at the plots of most interesting stories at a finer resolution, we see local structures and local subplots, other little triangles of complication and resolution.

(This part of the talk reminded me of a talk I saw Kurt Vonnegut give at UNI almost a decade ago, in which he talked about some work he had done as a master's student in sociology at the University of Chicago, playfully documenting the small number of patterns that account for almost all of the stories we tell. I don't recall Vonnegut speaking of Aristotle, but I do recall the humor in his own story. Laurel's presentation blended bits of humor with two disparate elements: an academic's analysis and attention to detail, and a child's excitement at something that clearly still lights up her days.)

One of the big lessons that Laurel ultimately reaches is this: There is pleasure in the pattern of action. Finding these patterns is essential to telling stories that give pleasure. Another was that by using natural materials (the material cause in our creation), we get pleasing patterns for free, because these patterns grow organically in the world.

I learned something from one of her examples, Johannes Kepler's Harmonices Mundi, an attempt to "explain the harmony of the world" by finding rules common to music and planetary motion within the solar system. As Kepler wrote, he hoped "to erect the magnificent edifice of the harmonic system of the musical scale ... as God, the Creator Himself, has expressed it in harmonizing the heavenly motions." In more recent times, composers such as Stravinsky, Debussy, and Ravel have tried to capture patterns from the visual world in their music, seeking more universal patterns of pleasure.

This led to another of Laurel's key lessons: throughout history, artists have often captured patterns in the world on the basis of purely phenomenological evidence, patterns that were later reified by science. Impressionism was one example; the discovery of fractal patterns in Jackson Pollock's drip trajectories was another.

The last part of Laurel's talk moved on to current research with sensors in the ubiquitous computing community, the idea of distributed sensor networks that help us to do a new sort of science. As this science exposes new kinds of patterns in the world, Laurel hopes for us to capitalize on the flip side of the art examples above: to be able to move from science, to math, and then on to intuition. She would like to use what we learn to inform the creation of new dramatic structures, of interactive drama and computer games that improve the human condition -- and give us pleasure.

The question-and-answer session offered a couple of fun moments. Guy Steele asked Brenda to react to Marvin Minsky's claim that happiness is bad for you, because once you experience it you don't want to work any more. Brenda laughed and said, "Marvin is a performance artist." She said that he was posing with this claim, and told some stories of her own experiences with Marvin and Timothy Leary (!). And she even declared a verdict in my old discipline of AI: Rod Brooks and his subsumption architecture are right, and Minsky and the rest of symbolic AI are wrong. Given her views and interests in computing, I was not surprised by her verdict.

Another question asked whether she had seen the tape of Christopher Alexander's OOPSLA keynote in San Jose. She hadn't, but she expressed a kinship in his mission and message. She, too, is a utopian and admitted to trying to affect our values with her talk. She said that her research through the 1980s especially had taught her how she could sell cosmetics right into the insecurities of teenage girls -- but instead she chose to create an "emotional rehearsal space" for them to grow and overcome those insecurities. That is what Purple Moon was all about!

As usual, the opening keynote was well worth our time and energy. As a big vision for the future, as a reminder of our moral center, it hit the spot. I'm still left to think how these ideas might affect my daily work as teacher and department leader.

(I'm also left to track down Ted Nelson's Computer Lib/Dream Machines, a visionary, perhaps revolutionary book-pair that Laurel mentioned. I may need the librarian's help for this one.)


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

September 21, 2006 5:58 PM

An Audience of One

What if they threw a talk and no one came?

I went to a talk on teaching here today and was the only member of the audience. The speaker came, of course, and the organizer of the talk, too. But then there was just me. The speaker honored me by giving his talk anyway. During the session, a fourth person arrived, and he was half-audience and half-expert.

The talk was on Small Group Instructional Diagnosis (SGID), a technique that helps faculty receive information on how well a course is going. The technique resembles the writers' workshop used in the creative writing world and the software patterns community. In the course of a class period, a moderator -- a person with no connection to the students or course, and preferably not in a power relationship with the instructor -- poses three or four questions to the students and then works with them via group discussion to arrive at a consensus about what is working in the course and what could stand improvement. The moderator requires certain skills at guiding discussion and framing the points of consensus. The author -- the instructor -- is not present to hear the discussions; instead, the moderator meets with the instructor soon after the diagnosis to present the feedback and to discuss the course. Much like a PLoP workshop group, instructors often serve in round-robin as moderators of SGIDs for one another. SGIDs are usually done during the semester, after students have enough time to know the course and instructor but early enough that the professor can use the information to improve the course content, structure, delivery, etc.

Many instructors might think of this as useful only for "bad teachers" who need to get better. But I think that even the best instructors can get better. Getting feedback and using it to inform one's practice seems like a good idea for any instructor. The colleague who gave this presentation, a math professor, is widely recognized as one of the best teachers at my institution, and he has used SGIDs in his own courses. I can imagine having a SGID done in one of my courses, and I can also imagine offering this tool as a possibility to a faculty member who came to me looking for ways to improve their teaching. I can even imagine using the tool to diagnose a particular instance of a course -- not because I think that there is something intrinsically wrong in my approach, but because the particular mix of me, the course, and the students in it doesn't seem to be working.

The similarity between the SGID and a writer's workshop seemed strong. I'm thinking about how one might augment the process I was shown using ideas from PLoP workshops, such as the summary the workshop group does before moving on to "things we like" and "ways we think the author could improve the work".

Also much like the PLoP experience, this process requires that a teacher take the risk to have students discuss their work openly in front of a third party and be willing to listen to feedback and fold it back into the work. Many writers are uncomfortable with this idea, and I know that many, many university professors are uncomfortable with this idea. But getting better usually requires an element of risk, if only by allowing honest discussion to take place.

I'm glad I was the one person who showed up for the talk. I learned from the presentation, of course, but the discussion that took place afterward, in which the half-and-half latecomer described his teaching career and the role an SGID had in helping him earn tenure, was even more illuminating. He was a good storyteller.

By the way, there is still plenty of time to register for PLoP 2006, which is collocated in Portland this year with OOPSLA. I'm looking forward to both conferences, though I'm sad that I won't be able to run at Allerton Park before, during, and after PLoP!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

May 09, 2006 9:19 AM

A Weekend in Portland

OOPSLA 2006 logo

I was in Portland this weekend for the spring meeting of the OOPSLA 2006 conference committee. This is the meeting where we assemble the program for the conference, from technical papers to lightning talks to invited and keynote talks, from Onward! to the Educators' Symposium to DesignFest, and from workshops to my area of responsibility this year, tutorials. It looks like we will have 57 tutorials this year, covering a range of topics in the OOP community and out in the industrial software development community. It's a tough job to assemble the elements of such a program, which ranges over five days and encompasses affiliated events like GPCE and, for the first time ever this year, PLoP. Trying to schedule events in such a way as to minimize the number of times conference attendees say, "Rats! There are two things I want to see right now. In the session before lunch, there were none!" is a challenge in itself. I suppose that, in some way, we'd be happy if every session created a conflict for attendees, but I'm not sure the attendees would like it so much!

As I've done in the past when chairing the 2004 and 2005 Educators' Symposia, I owe a great debt to my program committee of seven OOPSLA veterans. They did most of the heavy lifting in reading and evaluating all of the submissions we received. I had to make some tough calls at the end, but their input made that doable.

Some highlights from the weekend:

PDX, the Portland International Airport, has free wireless -- with better coverage than promised. Hurray!

Why is Onward! a must-see? So that you can "cool your pinkies in the mud of the future". Maybe it's a must-see because the chairs of Onward! are the kind of people who say such things. I'm especially looking forward to the Onward! films, which I had to step out of last year.

I am not sure that my students are ready for a web page about me that looks like this pictorial biography of past OOPSLA chair Douglas Schmidt. My favorite is this take on the old cartoon standard about the evolution of man:

the evolution of a programmer

You may recall me commenting on a particular sign I saw while running in Portland back at the fall meeting. Doesn't it always rain in Portland? Maybe not -- but it rained again both nights and mornings I was there this time. It was nice enough when I arrived Friday evening, if cool, and the sun had poked through the clouds Monday afternoon -- as we left.

At least it didn't rain on me while I ran. Unfortunately, I was only able to run my first morning in town. It was my first 12-miler in eight weeks or so, and it felt pretty good. But, just as I did on the second day at SIGCSE and the second day at ChiliPLoP, I came down with some sort of respiratory thing that sapped all of my energy. So I took today off, and probably will tomorrow, too, just to get back to normal. I have a feeling that I won't be bragging about my mileage this year like I did at the end of 2005... I'm beginning to wonder about the pattern and what I can do to make air travel workable again for me. I won't have a chance to test any hypothesis I develop until October, when I go to PLoP and OOPSLA.

Finally, on a less frivolous note, we spent a few minutes on Monday morning to plan a memorial for John Vlissides, whose passing I memorialized last winter. We want this memorial to be a celebration of all the ways John touched the lives of everyone he met. In an e-mail conversation last week, 2006 Educators' Symposium chair Rick Mercer pointed out a picture of John and me that I didn't know about from John's wiki, courtesy of Dragos Manolescu.

John Vlissides and Eugene Wallingford at the 2001 post-OOPSLA Hillside meeting

I remember that moment clearly. It has an OOPSLA connection, too, because it was taken at the annual fall meeting of the Hillside Group, which traditionally takes place the evening and morning after OOPSLA. (I've missed the last two, because the night OOPSLA ends is the traditional celebration dinner for the conference committee, and I've been eager to get home to see my family after a week on the road.)

John and I were part of a break-out group at that Hillside meeting on the topic of how to create a larger world in which to publish some of the work coming out of the PLoPs. Most academic conferences are places to publish novel work, and most pattern work is by definition not all that new -- it documents patterns we see in many existing code bases. The pattern as a literary work and teaching tool is itself novel, but that's just not what academic conference program committees are looking for.

Anyway, John and I were brainstorming. I don't remember what we produced in that session, but my clueless expression indicates that at that particular moment John was the one producing. That is not too surprising. Yet he made me feel like an equal partner in the work. I guess I didn't hurt his impression of me too much. When he became general chair of OOPSLA 2004, he asked me to get involved with the conference committee for the first time, as his Educators' Symposium chair. Good memories. Thanks for the pointer to the photo, Rick. And thanks, Dragos, for taking the photo and sharing it.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

February 20, 2006 6:48 PM

Changing How People Think

Pascal Van Cauwenberghe writes a bit about agile development, lean production, and other views of software engineering. He recently quoted the Toyota Way Fieldbook as inspiration for how to introduce lean manufacturing as change. I think that educators can learn from Pascal's excerpt, too.

... we're more likely to change what people think by changing what they do, rather than changing what people do by changing what they think.

I can teach students about object-oriented programming, functional programming, or agile software development. I can teach vocabulary, definitions, and even practices and methodologies. But this content does not change learners' "deeply held values and assumptions". When they get back into the trenches, under the pressure of new problems and time, old habits of thought take over. No one should be surprised that this is true for people who are not looking to change, and that is most people. But even when programmers want to practice the new skill, their old habits kick in with regularity and unconscious force.

The Toyota Way folks use this truth as motivation to "remake the structure and processes of organizations", with changes in thought becoming a result, not a cause. This can work in a software development firm, and maybe across a CS department's curriculum, but within a single classroom this truth tells us something more: how to orient our instruction. As an old pragmatist, I believe that knowledge is habit of thought, and that the best way to create new knowledge is to create new habits. This means that we need to construct learning environments in which people change what they do in practical, repeatable ways. Once students develop habits of practice, they have at their disposal experiences that enable them to think differently about problems. The ideas are no longer abstract and demanded by an outside agent; they are real, grounded in concrete experiences. People are more open to change when it is driven from within than from without, so this model increases the chance that learners will seriously entertain the new ideas we would like them to learn.

In my experience, folks who try XP practices -- writing and running tests all the time, refactoring, sharing code, all supported by pair programming and a shared culture of tools and communication -- are more likely to "get" the agile methods than are folks to whom the wonderfulness of agile methods is explained. In the end, I think that this is true for nearly all learning.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 08, 2006 2:23 PM

Functional Programming Moments

I've been having a few Functional Programming Moments lately. In my Translation of Programming Languages course, over half of the students have chosen to write their compiler programs in Scheme. This brought back fond memories of a previous course in which one group chose to build a content management system in Scheme, rather than one of the languages they study and use more in their other courses. I've also been buoyed by reports from professors in courses such as Operating Systems that some students are opting to do their assignments in Scheme. These students seem to have really latched onto the simplicity of a powerful language.

I've also run across a couple of web articles worth noting. Shannon Behrens wrote the provocatively titled Everything Your Professor Failed to Tell You About Functional Programming. I plead guilty on only one of the two charges. This paper starts off talking about the seemingly inscrutable concept of monads, but ultimately turns to the question of why anyone should bother learning such unusual ideas and, by extension, functional programming itself. I'm guilty on the count of not teaching monads well, because I've never taught them at all. But I do attempt to make a reasonable case for the value of learning functional programming.

His discussion of monads is quite nice, using an analogy that folks in his reading audience can appreciate:

Somewhere, somebody is going to hate me for saying this, but if I were to try to explain monads to a Java programmer unfamiliar with functional programming, I would say: "Monad is a design pattern that is useful in purely functional languages such as Haskell."

I'm sure that some folks in the functional programming community will object to this characterization, in ways that Behrens anticipates. To some, "design patterns" are a lame crutch for object-oriented programmers who use weak languages; functional programming doesn't need them. I like Behrens's response to such a charge (emphasis added):

I've occasionally heard Lisp programmers such as Paul Graham bash the concept of design patterns. To such readers I'd like to suggest that the concept of designing a domain-specific language to solve a problem and then solving that problem in that domain-specific language is itself a design pattern that makes a lot of sense in languages such as Lisp. Just because design patterns that make sense in Java don't often make sense in Lisp doesn't detract from the utility of giving certain patterns names and documenting them for the benefit of ... less experienced programmers.
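
To make the "monad as design pattern" claim concrete, here is a sketch in Scheme -- all names are mine, and it is simplified in that #f doubles as "no value", which a real Maybe type would not allow. The point is that bind writes the failure check once, instead of at every call site:

  (define (bind m f)                     ; thread a possibly-failed value onward
    (if m (f m) #f))

  (define (safe-div x y)
    (if (zero? y) #f (/ x y)))

  (define (pipeline x)                   ; 100/x, then 12 divided by that
    (bind (safe-div 100 x)
          (lambda (q) (safe-div 12 q))))

  (pipeline 5)   ; => 3/5
  (pipeline 0)   ; => #f -- the failure propagates with no explicit check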

His discussion of why anyone should bother to do the sometimes hard work needed to learn functional programming is pretty good, too. My favorite part addressed the common question of why someone should willingly take on the constraints of programming without side effects when the freedom to compute both ways seems preferable. I have written on this topic before, in an entry titled Patterns as a Source of Freedom. Behrens gives some examples of self-imposed constraints, such as encapsulation, and how breaking the rules ultimately makes your life harder. You soon realize:

What seemed like freedom is really slavery.

Throw off the shackles of deceptive freedom! Use Scheme.

The second article turns the seductiveness angle upside down. Lisp is Sin, by Sriram Krishnan, tells a tale of being drawn to Lisp the siren, only to have his boat dashed on the rocks of complexity and non-standard libraries again and again. But on the whole he speaks favorably of ideas from functional programming and how they enter his own professional work.

I certainly second his praise of Peter Norvig's classic text Paradigms of AI Programming.

I took advantage of a long weekend to curl up with a book which has been called the best book on programming ever -- Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. I have read SICP but the 300 or so pages I've read of Norvig's book have left a greater impression on me than SICP. Norvig's book is definitely one of those 'stay awake all night thinking about it' books.

I have never heard anyone call Norvig's book the best of all programming books, but I have heard many folks say that about SICP -- Structure and Interpretation of Computer Programs, by Abelson and Sussman. I myself have praised Norvig's book as "one of my favorite books on programming", and it teaches a whole lot more than just AI programming or just Lisp programming. If you haven't studied it, put it at or near the top of your list, and do so soon. You'll be glad you did.

In speaking of his growth as a Lisp programmer, Krishnan repeats an old saw about the progression of a Lisp programmer that captures some of the magic of functional programming:

... the newbie realizes that the difference between code and data is trivial. The expert realizes that all code is data. And the true master realizes that all data is code.

I'm always heartened when a student takes that last step, or shows that they've already been there. One example comes to mind immediately: The last time I taught compilers, students built the parsing tables for their compilers by hand. One student looked at the table, thought about the effort involved in translating the table into C, and chose instead to write a program that could interpret the table directly. Very nice.
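
A small reconstruction of that student's insight, with the details invented for illustration: keep the table as data and interpret it directly, rather than hand-translating each entry into code.

  (define table                          ; the table is data...
    (list (cons 'inc (lambda (x) (+ x 1)))
          (cons 'dec (lambda (x) (- x 1)))
          (cons 'sqr (lambda (x) (* x x)))))

  (define (interpret op x)               ; ...until we run it
    (let ((entry (assq op table)))
      (if entry
          ((cdr entry) x)
          (error "unknown op:" op))))

  (interpret 'sqr 7)   ; => 49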

Krishnan's article closes with some discussion of how Lisp doesn't -- can't? -- appeal to all programmers. I found his take interesting enough, especially the Microsoft-y characterization of programmers as one of "Mort, Elvis, and Einstein". I am still undecided just where I stand on claims of the sort that Lisp and its ilk are too difficult for "average programmers" and thus will never be adopted by a large population. Clearly, not every person on this planet is bright enough to do everything that everyone else does. I've learned that about myself many, many times over the years! But I am left wondering how much of this is a matter of ability and how much is a matter of needing different and better ways to teach. The monad article I discuss above is a great example. Monads have been busting the chops of programmers for a long time now, but I'm betting that Behrens has explained them in a way that "the average Java programmer" can understand -- and maybe even have a chance of mastering Haskell. I've long been told by colleagues that Scheme is too abstract, too different, to become a staple of our students, but some are now choosing to use it in their courses.

Dick Gabriel once said that talent does not determine how good you can get, only how fast you get there. Maybe when it comes to functional programming, most of us just take too long to get there. Then again, maybe we teachers of FP can find ways to help accelerate the students who want to get good.

Finally, Krishnan closes with a cute but "politically incorrect analogy" that plays off his title:

Lisp is like the villainesses present in the Bond movies. It seduces you with its sheer beauty and its allure is irresistible. A fleeting encounter plays on your mind for a long, long time. However, it may not be the best choice if you're looking for a long term commitment. But in the short term, it sure is fun! In that way, Lisp is...sin.

Forego the demon temptations of Scheme! Use Perl.

Not.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

December 21, 2005 5:07 PM

Experiments in Art and Software

Double Planet, by Pyracantha

Electron Blue recently wrote about some of her experiments in art. As an amateur student of physics, she knows that these experiments are different from the experiments that scientists most often perform. She doesn't always start with a "hypothesis", and when she gets done it can be difficult to tell if the experiment was a "success" or not. Her experiments are opportunities to try ideas, to see whether a new technique works out. Sometimes, that's easy to see, as when the paint of a base image dries with a grainy texture that doesn't fit the image or her next stage. Other times, it comes down to her judgment about balance or harmony.

This is quite unlike many science experiments, but I think it has more in common with science than may at first appear. And I think it is very much like what programmers and software developers do all the time.

Many scientific advances have resulted from what amounts to "trying things out", even without a fixed goal in mind. On my office wall, I have a wonderful little news brief called "Don't leave research to chance", taken from some Michigan State publication in the early 1990s. The article is about some work by Robert Root-Bernstein, an MSU science professor who in the 1980s spent time as a MacArthur Prize fellow studying creativity in the sciences. In particular, it lists ten ways to increase one's chances of serendipitously encountering valuable new ideas. Many of these are strict matters of technique, such as removing background "noise" that everyone else accepts or varying experimental conditions or control groups more widely than usual. But others fit the art experiment mold, such as running a reaction backward, amplifying a side reaction, or doing something else "unthinkable" just to see what happens. The world of science isn't always as neat as it appears from the outside.

And certainly we software developers explore and play in a way that an artist would recognize -- at least we do when we have the time and freedom to do so. When I am learning a new technique or language or framework, I frequently invoke the Three Bears Pattern that I first learned from Kent Beck via one of the earliest pedagogical patterns workshops. One of the instantiations of this pattern is to use the new idea everywhere, as often and as much as you can. By ignoring boundaries, conventional wisdom, and pat textbook descriptions of when the technique is useful, the developer really learns the technique's strengths and weaknesses.

I have a directory called software/playground/ that I visit when I just want to try something out. This folder is a living museum of some of the experiments I've tried. Some are as mundane as learning some hidden feature of Java interfaces, while others are more ambitious attempts to see just how far I can take the Model-View-Controller pattern before the resulting pain exceeds the benefits. Just opportunities to try an idea, to see how a new technique works out.

My own experience is filled with many other examples. A grad student and I learned pair programming by giving it a whirl for a while to see how it felt. And just a couple of weeks ago, on the plane to Portland for the OOPSLA 2006 fall planning meeting, I whipped up a native Ook! interpreter in Scheme -- just because. (There is still a bug in it somewhere... )

Finally, I suspect that web designers experiment in much the way that artists do when they have ideas about layout, design, and usability. The best way to evaluate the idea is often to implement it and see what real users think! This even fits Electron Blue's ultimate test of her experiments: How do people react to the work? Do they like it enough to buy it? Software developers know all about this, or should.

One of the things I love most about programming is that I have the power to write the code -- to make my ideas come alive, to watch them in animated bits on the screen, to watch them interacting with other people's data and ideas.

As different as artists and scientists and software developers are, we all have some things in common, and playful experimentation is one.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

November 28, 2005 12:01 PM

The Passing of a Friend

When I first attended PLoP, I was a rank novice with patterns. But like most everyone else, I had read Design Patterns -- or at least put it on my bookshelf, like everyone else -- and was just a bit in awe of the Gang of Four. Pretty soon I learned that Ralph Johnson and John Vlissides were two of the nicest and most helpful people around. I also met Erich Gamma a time or two and found him to be a good guy, though I never interacted all that much with him. I've never had the pleasure of meeting Richard Helm.

John Vlissides

Within a couple of years, John asked me to chair PLoP 2000. I like to have a decent understanding of something like this before I get started, and John sat with me to patiently answer questions and teach me some software patterns history. He also gave me a lot of encouragement. The conference turned out well.

Then a couple of years later, John approached me with another request: to chair the Educators Symposium at OOPSLA 2004. Again, I had been attending OOPSLA and the Educators Symposium for a few years, even helping on a few symposium committees, but I had never considered taking on the chair, which seemed like much more work and responsibility. Again, John offered me a lot of encouragement and offered to help me in any way he could in his capacity as conference chair -- including giving me his complimentary registration to OOPSLA 2003, so that I could attend the 2004 kick-off planning meeting and begin working with my colleagues to assemble the next year's program. 2003 was a tight money year for my department and me, and John's kindness made it possible for me to attend.

The 2004 Educators Symposium went pretty well, I think we can say. I've certainly talked about it enough here, beginning with this entry. I owed much of its success to John's encouragement and support. When I floated the idea of asking Alan Kay to keynote at the symposium, John said, "Dream big. I'll bet you can do it." Then, when Alan won the Turing Award, John worked to bring the Turing Award lecture to OOPSLA -- but all the while protecting my "coup" at having persuaded Alan to speak at OOPSLA 2004 in the first place. I'd've been happy to have Alan speak at OOPSLA under any circumstances, but I appreciated how John just assumed that I deserved some attention for my efforts and that, under his care, I would receive it.

John was always like that. He was a quiet leader, a person who treated each person with dignity and respect and who, as a result, could make things happen through the team he built.

He was also always a very good writer and scholar. The articles that ended up in his Pattern Hatching went a long way toward making design patterns more accessible to folks who had been a bit swamped by the GoF book. They also taught us a bit about how a good programmer thinks when he writes code.

I am filled with a deep sadness to know that John died at his home on Thanksgiving day, after more than a year and a half battling a brain tumor. He battled silently, with strength and courage derived from his abiding faith in God. My prayers are with his family, especially his wife and children.

As someone said in an announcement of John's death, the essence of John's greatness lay not in his technical accomplishments but in his humanity. He was a friend, a teacher, a colleague, and a mentor to everyone with whom he came into contact. He made everyone around him a better scholar and a better person.

Though I never did anything to deserve it, John was a friend and a mentor to me. I will miss him.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

November 18, 2005 9:37 PM

Teaching as Subversive Inactivity

Two enjoyable talks in one week -- a treat!

On Tuesday, I went to the 2005 CHFA Faculty Excellence Award lecture. (CHFA is UNI's College of Humanities and Fine Arts.) The winner of the award was Roy Behrens, whose name long-time readers of this blog may recognize from past entries on non-software patterns of design and 13 Books. Roy is a graphic arts professor at my university, and his talk reflected both his expertise in design and the style that contributed to his winning an award for faculty excellence. He didn't use PowerPoint in that stultifying bullet-point way that has afflicted science and technology talks for the last decade or more... His slides used high-resolution images of creative works and audio to create a show that amplified his words. They also demonstrated a wonderful sense of "visual wit".

The title of the talk was teaching as a SUBVERSIVE INACTIVITY: a miscellany, in homage to Neil Postman's famous book. When he was asked to give this talk, he wondered what he should talk about -- how to teach for 34 years without burning out? He decided to share how his teaching style has evolved away from being the center of attention in the classroom toward giving students the chance to learn.

The talk opened with pivotal selections from works that contributed to his view on teaching. My favorite from the bunch came from "The Cult of the Fact" by Liam Hudson, a British psychologist: The goal of the teacher is to

... transmit an intellectual tradition of gusto, and instill loyalty to it, ...

Behrens characterized his approach to teaching in terms of Csikszentmihalyi's model of Flow: creativity and productivity happen when the students' skills are within just the right range of the challenges given to them.

(After seeing this talk, I'm almost afraid to use my homely line art in a discussion of it. Roy's images were so much better!)

He called this his "Goldilocks Model", the attempt to create an environment for students that maximizes their chance to get into flow.

What followed was a collage of images and ideas. I enjoyed them all. Here are three key points about teaching and learning from the talk.

Aesthetic and Anesthetic

What Csikszentmihalyi calls flow is roughly comparable to what we have historically called "aesthetic". And in its etymological roots, the antonym of 'aesthetic' is anesthetic. What an interesting juxtaposition in our modern language!

In what ways can the atmosphere of our classrooms be anesthetic?

extreme similarity ... HUMDRUM ... monotony
extreme difference ... HODGEPODGE ... mayhem

We often think of boredom as a teaching anesthetic, but it's useful to trace this back to the possibility that the boredom results from a lack of challenge. Even more important is to remember that too much challenge, too much activity -- what amounts to too much distraction -- also serves as an anesthetic. People tend to tune out when they are overstimulated, as a coping mechanism. I am guessing that when I bore students the most, it's more likely to be from a mayhem of ideas than a monotony. ("Let's sneak in one more idea...")

Behrens is a scholar of patterns, and he has found it useful to teach students patterns -- linguistic and visual -- suitable to their level of development, and then turn them loose in the world. Knowing the patterns changes our vision; we see the world in a new way, as an interplay of patterns.

Through patterns, students see style and begin to develop their own. 'Style' is often maligned these days as superficial, but the idea of style is essential to understanding designs and thinking about creating. That said, style doesn't determine quality. One can find quality in every genre of music, of painting. There is something deeper than style. Teaching our principles of programming languages course this semester as I am, I hope that my students are coming to understand this. We can appreciate beautiful object-oriented programs, beautiful functional programs, beautiful logic programs, and beautiful procedural programs.

Creativity as Postmodern

Behrens didn't use "postmodern", but that's my shorthand description of his idea, in reference to ideas like the scrapheap challenge.

During the talk, Behrens several times quoted Arthur Koestler's The Act of Creation. Here's one:

The creative process is an "unlikely marriage of cabbages and kings -- of previously unrelated frames of reference or universes of discourse -- whose union will solve the previously insoluble problem." -- Koestler

Koestler's "cabbages and kings" is an allusion to a nonsense poem in Alice in Wonderland. (Remember what Alan Perlis said about "Alice": The best book on programming for the layman ...; but that's because it's the best book on anything for the layman.") Koestler uses the phrase because Carroll's nonsense poem is just the sort of collage of mismatched ideas that can, in his view, give rise to creativity.

Humans don't create anything new. They assemble ideas from different contexts to make something different. Creativity is a bisociation, a "sort-crossing", as opposed to analytic intelligence, which is an association, a "sort-matching".

We have to give students the raw material they need to mix and match, to explore new combinations. That is why computer science students should learn lots of different programming languages -- the more different, the better! They should study lots of different ideas, even in courses that are not their primary interest: databases, operating systems, compilers, theory, AI, ... That's how we create the rich sea of possibilities from which new ideas are born.

Problems, Not Solutions

If we train students merely to respond to given problems, what happens when the problem giver goes away? Students need to learn to find and create problems!

In his early years, Behrens avoided giving students examples of what he wanted, for fear of "limiting" their creativity to what they had seen. But examples are critical, because they, too, give students the raw material they need to create.

His approach now is to give students interesting and open problems, themes on which to work. Critique their products and ideas, frequently and openly. But don't sit by their sides while they do things. Let them explore. Sometimes we in CS tend to hold students' hands too much, and the result is often to turn what is fun and creative into tedious drudgery.

I'm beginning to think that one of the insidious ingredients in students' flagging interest in CS and programming is that we have taken the intellectual challenge out of learning to program and replaced it with lots of explanation, lots of text talking about technical details. Maybe our reasons for doing so seemed on the mark at the time -- I mean, C++ and Java are pretty complex -- but the unintended side effects have been disastrous.

----

I greatly enjoyed this talk. One other good thing came out of the evening: after 13 years working on the same campus, I finally met Roy, and we had a nice chat about some ideas at the intersection of our interests. This won't be the last time we cross paths this year; I hope to present a paper at his conference, Camouflage: Art, Science and Popular Culture, on the topic of steganography.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

November 15, 2005 8:51 PM

Popularizing Science through Writing and Teaching

I have an interest in writing, both in general as a means for communication and in particular as it relates to the process of programming. So I headed over to the Earth Science department yesterday for a talk on popular science writing called "Words, Maps, Rocks: One Geologist's Path". The speaker was Marcia Bjornerud of Lawrence University, who recently published the popular geology book Reading the Rocks: The Autobiography of the Earth. The Earth Science faculty is using Reading the Rocks as a reader in one of their courses, and they asked Dr. Bjornerud to speak on how she came to be a geologist and a popularizer of science.

Bjornerud took a roundabout way into science. As a child, she had no desire to be a scientist. Her first loves were words and maps. She loved the history of words, tracing the etymology of cool words back to their origin in European languages, then Latin or Greek, and ultimately back to the source of their roots. The history of a word was like a map through time, and the word itself was this rich structure of now and then. She also loved real maps and delighted in the political, geographical, and temporal markings that populated them. Bjornerud told an engaging story about a day in grade school when snow created a vacation day. She remembers studying the time zones on the map and learning that at least two places had no official time zone: Antarctica and Svalbard, Norway.

These reminiscences probably strike a chord in many scientists. I know that I have spent many hours poring over maps, just looking at cities and open spaces and geopolitical divisions, populations and latitudes and relative sizes. I remember passing time in an undergraduate marketing class by studying a large wall map of the US and first realizing just how much bigger Iowa (a state I had never visited but would one day call home) was than my home state of Indiana (the smallest state west of the Appalachian Mountains!). I especially love looking at maps of the same place over time, say, a map of the US in 1500, 1650, 1750, 1800, and so on. Cities grow and die; population moves inexorably into the available space, occasionally slowing at natural impediments but eventually triumphing. And words -- well, words were why I was at this talk in the first place.

Bjornerud loved math in high school and took physics at the suggestion of friends who pointed out that the calculus had been invented in large part in order to create modern physics. She loved the math but hated the physics course; it was taught by someone with no training in the area who acknowledged his own inability to teach the course well.

It wasn't until she took an introductory college geology course that science clicked for her. At first she was drawn to the words: esker, alluvium, pahoehoe, ... But soon she felt drawn to what the words name. Those concepts were interesting in their own right, and told their own story of the earth. She was hooked.

We scientists can often relate to this story. It may apply to us; some of us were drawn to scientific ideas young. But we certainly see it in our friends and family members and people we meet. They are interested in nature, in how the world works, but they "don't like science". Why? Where do our schools go wrong? Where do we as scientists go wrong? The schools are a big issue, but I will claim that we as scientists contribute to the problem by doing a poor job of communicating to the public why we are in science. We don't share the thrill of doing science.

A few years ago, Bjornerud decided to devote some of her professional energy to systematic public outreach, from teaching Elderhostel classes to working with grade schoolers, from writing short essays for consumption by the lay public to her book, which tells the story of the earth through its geological record.

To write for the public, scientists usually have to choose a plot device to make technical ideas accessible to non-scientists. (We agile software developers might think of this as the much-maligned metaphor from XP.)

Bjornerud used two themes to organize her book. The central theme is "rocks as text", reading rocks like manuscripts to reveal the hidden history of the earth. More specifically, she treats a rock as a palimpsest, a parchment on which a text was written and then scraped off, to be written on again. What a wonderful literary metaphor! It can captivate readers in a day when the intrigue of forensic science permeates popular culture.

Her second theme, polarities, aims more at the micro-structure of her presentation. She had an ulterior motive: to challenge the modern tendency to see dichotomy everywhere. The world is really a tangled mix of competing concepts in tension. Among the polarities Bjornerud explores are innovation versus conservation (sound familiar?) and strength versus weakness.

Related to this motive is a desire -- a need -- to instill in the media and the public at large an appetite for subtlety. People need to know that they can and sometimes must hold two competing ideas in their minds simultaneously. Science is a halting journey toward always-tentative conclusions.

These themes transfer well to the world of software. The tension between competing forces is a central theme driving the literature of software patterns. Focusing on a dichotomy usually leads to a sub-optimal program; a pattern that resolves the dichotomy can improve it. And the notion of "program as text" is a recurring idea. I've written occasionally about the value in having students read programs as they learn to write them, and I'm certainly not the first person to suggest this. For example, Owen Astrachan once wrote quite a bit on apprenticeship learning through reading master code (see, for example, this SIGCSE paper). Recently, Grady Booch blogged On Writing, in which he suggested "a technical course in selected readings of software source code".

Bjornerud talked a bit about the process of writing, revising, finding a publisher, and marketing a book. Only one idea stood out for me here... Her publisher proposed a book cover that used a photo of the Grand Canyon. But Bjornerud didn't want the Grand Canyon on her cover; the Grand Canyon is a visual cliche, particularly in the world of rocks. And a visual cliche detracts from the wonder of doing geology; readers tune out when they see yet another picture of the Canyon. We are all taught to avoid linguistic cliches like the plague, but how many of us think about cliches in our other media? This seemed like an important insight.

Which programs are the cliches of software education? "Hello, World", certainly, but it is so cliche that it has crossed over into the realm of essential kitsch. Even folks pitching über-modern Ruby show us puts "Hello, World." Bank account. Sigh, but it's so convenient; I used it today in a lecture on closures in Scheme. In the intro Java world, Ball World is the new cliche. These trite examples provide a comfortable way to share a new idea, but they also risk losing readers whose minds switch off when they see yet another boring example they've seen before.

In the question-and-answer session that followed the talk, Bjornerud offered some partial explanations for where we go wrong teaching science in school. Many of us start with the premise that science is inherently interesting, so what's the problem?

  • Many science teachers don't like or even know science. They have never really done science and felt its beauty in their bones.

    This is one reason that, all other things being equal, an active scholar in a discipline will make a better teacher than someone else. It's also one of the reasons I favor schools of education that require majors in the content area to be taught (Michigan State) or that at least teach the science education program out of the content discipline's college (math and science education at UNI).

  • We tend to explain the magic away in a pedantic way. We should let students discover ideas! If we tell students "this is all there is to it", we hide the beauty we ourselves see.

  • Bjornerud stressed the need for us to help students make a personal connection between science and their lives. She even admitted that we might help our students make a spiritual connection to science.

  • Finally, she suggested that we consider the "aesthetic" of our classrooms. A science room should be a good place to be, a fun place to engage ideas. I think we can take this one step further, to the aesthetic of our instructional materials -- our code, our lecture notes, our handouts and slides.

The thought I had as I left the lecture is that too often we don't teach science; we teach about science. At that point, science becomes a list of facts and names, not the ideas that underlie them. (We can probably say the same thing about history and literature in school, too.)

Finally, we talked a bit about learning. Can children learn about science? Certainly! Children learn by repetition, by seeing ideas over and over again at increasing degrees of subtlety as their cognitive maturity and knowledge level grow. Alan Kay has often said the same thing about children and language. He uses this idea as a motivation for a programming language like Smalltalk, which enables the learner to work in the same language as masters and grow in understanding while unfolding more of the language as she goes. His group's work on eToys seeks to extend the analogy to even younger children.

Most college students and professionals learn in this way, too. See the Spiral pedagogical pattern for an example of this idea. Bjornerud tentatively offered that any topic -- even string theory! -- can be learned at almost any level. There may be some limits to what we can teach young children, and even college students, based on their level of cognitive development, their ability to handle abstractions. But for most topics most of the time -- and certainly for the basic ideas of science and math -- we can introduce even children to the topic in a way they can appreciate. We just have to find the right way to pitch the idea.

This reminds me, too, of Owen Astrachan and his work on apprenticeship mentioned above. Owen has since backed off a bit from his claim that students should read master code, but not from the idea of reading code itself. When he tried his apprenticeship through reading master code, he found that students generally didn't "get it". The problem was that they didn't yet have the tools to appreciate the code's structures, its conventions and its exceptions, its patterns. They need to read code that is closer to their own level of programming. Students need to grow into an appreciation of master code.

Talks like this end up touching on many disparate issues. But a common thread runs through Bjornerud's message. Science is exciting, and we scientists have a responsibility to share this with the world. We must do so in how we teach our students, and in how we teach the teachers of our children. We must do so by writing for the public, engaging current issues and helping the citizenry to understand how science and technology are changing the world in which we live, and by helping others who write for the public to appreciate the subtleties of science and to share the same through their writing.

I concur. But it's a tall order for a busy scientist and academic. We have to choose to make time to meet this responsibility, or we won't. For me, one of the primary distractions is my own curiosity -- the very thing that makes us scientists in the first place drives us to push farther and deeper, to devote our energies to the science and not to the popularizing of it. Perhaps we are doomed to G. H. Hardy's conclusion in his wonderful yet sad A Mathematician's Apology: Only after a great mind has outlived its ability to contribute to the state of our collective knowledge can -- should? will? -- it turn to explaining. (If you haven't read this book, do so soon! It's a quick read, small and compact, and it really is both wonderful and sad.)

But I do not think we are so doomed. Good scientists can do both. It's a matter of priorities and choice.

And, as in all things, writing matters. Writing well can succeed where other writing fails.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Software Development, Teaching and Learning

November 08, 2005 3:00 PM

An Index to the OOPSLA Diaries

I have now published the last of my entries intended to describe the goings-on at OOPSLA 2005. As you can see from both the number and the length of the entries I wrote, the conference provided a lot of worthwhile events and stimulated a fair amount of thinking. Because I wrote about single days across several different entries, sometimes weeks apart, I thought that some readers might appreciate a better-organized index into my notes. Here it is.

Of course, many other folks have blogged on the OOPSLA'05 experience, and my own notes are necessarily limited by my ability to be in only one place at a time and my own limited insight. I suggest that you read far and wide to get a more complete picture. First stop is the OOPSLA 2005 wiki. Follow the link to "Blogs following OOPSLA" and the conference broadsheet, the Post-Obvious Double Dispatch. In particular, be sure to check out Brian Foote's excellent color commentary, especially his insightful take on the software devolution in evidence at this year's conference.

Now, for the index:

Day 1

Day 2

Day 3

Day 4

Day 5

This and That

I hope that this helps folks navigate my various meanderings on what was a very satisfying OOPSLA.

Finally, thanks to all of you who have sent me notes to comment on these postings. I appreciate the details you provide and the questions you ask...

Now, get ready for OOPSLA 2006.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Patterns, Software Development, Teaching and Learning

November 07, 2005 7:30 PM

OOPSLA Day 2: A Panel on the Direction of CS Education

The final item on the Educators Symposium program this year was a panel discussion on the future of computer science education. It was called Are We Doomed? Reframing the Discussion, in partial jest, following the panel debate at the last SIGCSE, Resolved: "Objects Early" Has Failed. After that session, Owen Astrachan commented, "We're doomed." What did he mean by this? Despite our own declarations that this is The Age of Information and that computer science is a fundamental discipline for helping the world to navigate and use the massive amount of information now being collected, we still teach CS courses in essentially the same way we always have. We still use toy examples in toy domains that don't really matter to anyone, least of all our students. We still teach our introductory courses as 100% old-style programming, with barely a nod to users, let alone to the fact that sophisticated consumers of computing grow increasingly independent of us and our inward focus.

Owen Astrachan

This summer, Owen said it this way:

We have the human genome project, we have Google, we have social networks, we have contributions to many disciplines, and our own discipline, but we argue amongst ourselves about whether emacs is better than vi, Eclipse better than IDEA, C++ better than Java or Scheme, or what objects-first really means.

I'm sorry, but if we don't change what we talk about amongst ourselves, we are doomed to a niche market while the biologists, the economists, the political scientists, etc., start teaching their own computational/modeling/programming/informatics courses. Nearly everyone turns to math for calculus, nearly everyone turns to math/stats for statistics. These are nearly universally acknowledged as basic and foundational for *lots* of disciplines. In the age of information nearly no discipline at a large scale requires computer science, certainly not programming as we teach it.

Owen isn't as pessimistic as the provocative "We're doomed" sounds; he simply wants to cause us to think about this sea change and begin to make a change ourselves.

I decided that this would make a great closing for my second Educators Symposium. Last year, my first symposium opened with Alan Kay challenging us all to set a higher bar for ourselves -- in computer science, and in computer science education. This year, my second symposium would close with a challenge to reinvent what we do as a discipline.

As in so many things, the panel did not turn out quite the way I had planned. First of all, Owen wasn't able to be at OOPSLA after all, so we were without our intellectual driving force. Then, when the panel went live, the discussion went in a different direction than I had planned. But it had its good points nonetheless. The panel consisted of Robert Biddle, Alistair Cockburn, Brian Marick, and Alan O'Callaghan. I owe special thanks to Alistair and Alan, who joined us on relatively short notice.

As moderator, I had hoped to pen a wonderfully humorous introduction for each of the panelists, to loosen things up before we dropped the gloves and got serious about changing the face of computer science. Perhaps I should have commissioned a master to ghostwrite, for in my own merely mortal hands my dream went unfulfilled. I did have a couple of good lines to use. I planned to introduce Robert as the post-modern conscience of the Educators Symposium, maybe with a visual bow to one of his previous Onward! presentations. For Brian, my tag line was to be "the panelist most likely to quote Heidegger -- and make you love him anyway". But I came up short for Alistair and Alan. Alistair's paper on software development as cooperative game playing was one possible source of inspiration. For Alan, all I could think was, "This guy has a higher depth/words ratio than most everyone I know". In the end, I played it straight and we got down to business rather quickly.

I won't try to report the whole panel discussion, as I got sucked into it and didn't take elaborate notes. In general, rather than focusing on how CS is being reinvented and how CS education ought to be reinvented, it took a turn toward metaphors for CS education. I will describe what was for me the highlight of the session and then add a couple of key points I remember.

Robert Biddle

The highlight for me was Robert's presentation, titled "Deprogramming Programming". It drew heavily on the themes that he and James Noble have been pitching at recent Onward! performances, in particular that much of what we take as accepted wisdom in software development these days is created by us and is, all too often, just wrong.

He started with a quote from Rem Koolhaas and Bruce Mau's "S, M, L, XL":

Sous le pavé, la plage.
(Under the paving stone, the beach.)

There is something beneath what we have built as a discipline. We do not program only our computers... We've programmed ourselves, in many wrong ways, and it's time to undo the program.

Narcissus

The idea that there is a software crisis is a fraud. There isn't one now, and there wasn't one when the term 'software engineering' was coined and became a seemingly unavoidable part of our collective psyche. Robert pointed out that in Greek mythology Narcissus fell in love not with himself but with his reflection. He believes that the field of software engineering has done the same, fallen in love with an image of itself that it has created. We in CS education are often guilty of the same offense.

Robert then boldly asserted that he loves his job as a teacher of computing and software development. If we look under the pavement, we will see that we have developed a lot of useful, effective techniques for teaching students to build software: study groups, role play, and especially case studies and studios. I have written before about my own strong belief in the value of case studies and software studios, so at this point I nearly applauded.

Finally:

The ultimate goal of computer science is the program.

This quote is in some ways antithetical to the idea Owen and I were basing the panel on (which is one reason I wanted Robert to be on the panel!), but it also captures what many folks believe about computing. I am one of them.

That certainly doesn't do justice to the stark imagery and text that constituted Robert's slides, nor to the distinctive verbal oratory that Robert delivers. But it does capture some of the ideas that stuck with me.

The rest of the panel presentations were good, and the discussion afterward ranged far and wide, with a recurring theme of how we might adopt a different model for teaching software development. Here are a few points that stood out:

  • Brian offered two ideas of interest: demonstration, à la the famed on-line demo of using Ruby on Rails to build a blog engine in fifteen minutes, and education patterned on what his wife received and now dishes out as a doctor of veterinary medicine.

  • Alan said that we in CS tend to teach the anatomy of a language, not how to use a language. He and I have discussed this idea before, and we both believe that patterns -- elementary or otherwise -- are a key to changing this tendency.

  • Dave West chimed in from the audience with a new statement of computer science's effect on the world reminiscent of his morning presentation: "We are redefining the world in which all of us work and live."

  • Two ideas that should be a bigger part of how we teach software development are a focus on useful things and study of existing work. Various people have been using these ideas in various forms for a while now, and we have uncovered some of the problems hidden behind their promise. For example, students aren't often ready to read programs that are *too* good very quickly; they simply don't appreciate their goodness until they have developed a better sense of taste. But if we framed more of our teaching efforts around these ideas and worked to compensate for their shortcomings, we would probably be better off than doing The Same Old Thing.

All in all, the panel did not go where I had intended for it to go. Of course, Big Design Up Front can be that way. Sometimes you have to take into account what your stakeholders want. In my case, the stakeholders were the panelists and the audience, with the audience playing the role of pseudo-customer. Judging from the symposium evaluations, many folks enjoyed the panel, so maybe it worked out all right after all.

Of course, what I had hoped for the panel was to challenge folks in the audience to feel uneasy about the direction of the discipline, to dare to think Big Thoughts about our discipline. I don't think we accomplished that. There will be more opportunities in the future.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 05, 2005 2:14 PM

Beautiful Hacks Live On

As PragDave recently wrote, the Ruby Extensions Project contains an "absolutely wonderful hack". This hack brought back memories and also reminded me of a similar but more ambitious project going on in the Ruby world.

First of all, here's the hack... One of Ruby's nice features is its collections' internal iterators. Instead of writing a for-loop to process every item in a collection, you send the collection a map message with a one-argument block as its argument. PragDave's example is to convert all the strings in an array to uppercase, and the standard Ruby code is

names.map { |name| name.upcase }

The map method applies the block -- effectively a nameless method -- to every item in names.

This is a very nice feature, but after you program in Ruby for a while, you want more... Isn't writing that block more work than I should have to do? Why can't I just pass upcase itself as the argument to map? Because the closest thing I can pass, :upcase, isn't a method or a procedure; it's a symbol that names a method.

As PragDave reports, the Ruby Extensions hack adds a method to the Symbol class that allows Symbols to be coerced to a procedure object at run-time. The result is that we can now write the following:

names.map(&:upcase)

That's more succinct, and it saves us from writing a throwaway block by hand.
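
The implementation behind this magic is tiny. Here is a sketch of the hack in its usual form -- my reconstruction, not necessarily the exact code in the Ruby Extensions Project:

class Symbol
  # Coerce this symbol into a procedure that sends the symbol,
  # as a message, to its first argument.
  def to_proc
    proc { |obj, *args| obj.send(self, *args) }
  end
end

When Ruby sees the & prefix on an argument, it calls to_proc on the object if it is not already a procedure, so names.map(&:upcase) ends up sending upcase to each name in turn.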

Now the memory... I distinctly remember the afternoon I learned the same hack in Smalltalk, back in the summer of 1988. You see, Smalltalk's collections also offer an array of internal iterators that eliminate the need to write most for-loops, and these iterators take blocks as arguments.

I was just learning Smalltalk and loved this idea -- it was my first exposure to object-oriented programming and the power of blocks, which are like functional programming's higher-order procedures. I was working on some code to sort a collection. In Smalltalk, collections also respond to a sort: message, whose argument is a two-argument block for comparing the values. So, to sort a collection of strings in ascending order, I could write:

names sort: [ :x :y | x < y ]

But soon I wanted more... Why can't I just pass < to sort:? For the same reason I can't do it in Ruby: what I can pass is #<, a symbol that names the method, not a block.

While figuring this out, I learned how sort: works. It turns around and sends the block a value:value: message with pairs of values from the collection as the two arguments. Hmm, what if a Symbol could respond to value:value: and do the right thing? I added a one-line method to the Symbol class, and it could:

value: anObject value: otherObject
      ^anObject perform: self with: otherObject

(perform: behaves something like Scheme's eval procedure.)

And now I could write the wonderfully compact

names sort: #<

I soon added the one-argument form to Symbol, too, and began to write new code with abandon.

I felt pretty smug at the time, for having created something so beautiful. But I soon learned that I had merely rediscovered a hack that every Smalltalk programmer either reinvents or sees in someone else's code. That didn't diminish the beauty of my idea, but it did extinguish my smugness.

It turns out that this new hack is not only more compact and more convenient to write, but it runs faster than the block alternative, too. The callback on the symbol doesn't require the creation of a block on the fly, which requires memory allocation and the whole bit.

Finally, for the more ambitious project... This cool hack points out that Ruby and Smalltalk method names are symbols, and without some coercion they are not evaluated to their method values in ordinary use. What if messages were first-class objects in these languages? If so, then we could write message sends that take other messages as arguments, thus creating higher-order messages. If you are interested in this idea, check out Nat Pryce's recent efforts to add higher-order messaging to Ruby, if you haven't already seen it. Nat gives a link to the paper that laid the foundation for this idea and steps you through the value of higher-order messages and their implementation in Ruby. It's another absolutely wonderful hack.
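
To give a flavor of the idea, here is a minimal sketch of one higher-order message -- the names and details are my own illustration, not Nat's actual API. A where message returns a proxy that captures the next message sent to it and uses that message as a filter predicate:

class WhereProxy
  def initialize(collection)
    @collection = collection
  end

  # Whatever message arrives next becomes a predicate that is
  # sent to every item in the collection.
  def method_missing(message, *args)
    @collection.select { |item| item.send(message, *args) }
  end
end

module Enumerable
  def where
    WhereProxy.new(self)
  end
end

[1, 2, 3, 4].where.odd?     # => [1, 3]

The message odd? is, in effect, an argument to where -- a message taking a message.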


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

November 04, 2005 8:34 AM

OOPSLA Day 2: Ward Cunningham on Seeking and Exploiting Simplicity

Ward Cunningham

The keynote address for this year's Educators' Symposium was given by Ward Cunningham, one of the software folks I admire most. Of course, he's admired by many folks, including, not surprisingly, the people who organized the Wiki Symposium launched by and co-located with OOPSLA this year. As a result, Ward's keynote to our group had to be moved from its traditional first slot of the day to the slot immediately after lunch. This gave the keynote a different feel, because we all had a morning's worth of activities behind us before we heard Ward's words. (You can read a summary of Ward's Wiki Symposium keynote, titled "The Crucible of Creativity".)

I introduced Ward by telling the audience how I first encountered his work. At AAAI 1995 in Montreal, I was discussing some of my ideas on teaching elementary patterns with a colleague from Wales. He said, "I have a book for you to review..." You see, he was one of the editors of the journal Expert Systems, and he had received a book for review that he didn't know what to do with. It was about software patterns, and he figured that I'd be an okay reviewer. He was probably encouraged to think this by the fact that none of his other AI friends seemed to be interested.

The book was Pattern Languages of Program Design, and it changed my life. I had never been exposed to the nascent software patterns community, and this book introduced me to a lot of ideas and, ultimately, people who have played an important role in my academic work since.

One of the papers in PLoPD-1 affected me immediately. It was about the CHECKS pattern language, and it was written by Ward Cunningham. These patterns were so simple, so obvious, yet as I read them I learned something about maintaining the integrity of the data in my programs. As I looked for more of Ward's work, I soon learned that what attracted me to CHECKS was a hallmark of Ward's: the ability to recognize simple ideas that offered unexpected depth and to explore that depth, patiently and humbly. So many of Ward's contributions to standard practice have been the result of a similar process: CRC cards, patterns, wiki, extreme programming, test-first design, and FIT among them.

I asked Ward to keynote the Educators Symposium not only because I admired his work but also because I hoped that he could teach us educators -- especially me -- a little bit of his secret gift. Maybe I could nurture the gift in myself, and maybe even pass on something to my students.

Ward opened his talk with a reminiscence, too. As an electrical engineering major in college, he learned about Maxwell's equations from a Professor Simpson. The professor taught the equations with passion, and he expected his students to appreciate their beauty. Much of their beauty lay in their simplicity, in how they brought so many important facets together into such a small number of straightforward equations. Professor Simpson also loved the work of Richard Feynman, who himself appreciated simplicity and wrote with humanity about what he learned.

One of Ward's first slides was The Big Slide, the take-home point of his talk. He characterized his way of working as:

Ward's big slide: familiar + simplicity -> something new

Immerse yourself in a pool of ideas. Come to know them. Make them your own. Then look for some humble idea lying around, something that we all understand or might. Play with this idea to see whether it gives rise to something unexpected, some new ability that changes how we see or work with the pool of familiar ideas you are swimming in.

I have experienced the dangers inherent in looking for a breakthrough idea in the usual ways. When we look for big ideas, too often we look for Big Ideas. But those are hard to find. Maybe we can't recognize them from our current vantage point; maybe they are out of scale with the ideas we have right now.

Ward pointed out another danger: Too often, we look for complete ideas when a simple but incomplete idea will be useful enough. (Sounds like You Aren't Gonna Need It!)

Sometimes, we reach a useful destination by ignoring things we aren't supposed to ignore. Don't worry about all the things you are supposed to do, like taking your idea through to its logical conclusion right away, or polishing it up so that everyone can see how smart you are. Keep the ideas simple, and develop just what you need to move forward.

Ward pointed out that one thing we educators do is the antithesis of his approach: the project due date. It forces students to "get done", to polish up the "final version", and to miss opportunities to explore. This is one of the good things behind longer-term projects and undergraduate research -- they allow students more time to explore before packaging everything up in a neat little box.

How can we honor the simple in what we do? How can we develop patience? How can we develop the ability to recognize the simple idea that offers more?

Ward mentioned Kary Mullis's Dancing Naked in the Mind Field as a nice description of the mindset that he tries to cultivate. (You may recall my discussion of Mullis's book last year.) Mullis was passionate, and he had an immediate drive to solve a problem that no one thought mattered much. When he showed his idea to his colleagues, they all said, "Yeah, that'd work, but so what?". So Mullis gave in to the prevailing view and put his idea on the shelf for a few months. But he couldn't shake the thought that his simple little not-much of an idea could lead to big things, and eventually he returned to the idea and tinkered a little more... and won a Nobel Prize.

Extreme Programming grew out of the humble practices of programmers who were trying to learn how to work in the brave new image of Smalltalk. Ward is happy that XP creates a work environment that is safe for the kind of exploration he advocates. You explore. When you learn something, you refactor. XP says to do the simplest thing that could possibly work, because you aren't gonna need it.

Many people have misunderstood this advice to mean do something simplistic, something stupid. But it really means that you don't have to wait until you understand everything before you do anything. You can do useful work by taking small steps. These principles encourage programmers to seriously consider just what is the simplest thing that could possibly work. If you don't understand everything about your domain and your task, at least you can do this simplest thing now, to move forward. When you learn more later, you won't have over-invested in ideas you have to undo. But you will have been making progress in the meantime.

(I don't think that Ward actually said all of these words. They are my reconstruction of what he taught me during his talk. If I have misrepresented his ideas in any way, the fault is mine.)

Ward recalled first getting Smalltalk, which he viewed as a "sketchpad for the thinking person to write spike solutions". Have an idea? Try to program it! Everything you need or could want to change is there before you, written in the same language you are using. He and Kent realized that they now had a powerful machine and that they should "program in a powerful way". Whenever in the midst of programming they slowed down, he would ask Kent, "What is the simplest thing that could possibly work?", and they would do that. It got them over the hump, let them regain their power and keep on learning.

This practice specializes his Big Slide from above to the task of programming:

Ward's big slide: familiar + small step -> learn something

Remember: You can't do it all at once. Ride small steps forward.

This approach to programming did not always follow the most efficient path to a solution, but it always made progress -- and that's more than they could say by trying to stay on the perfect path. The surprise was that the simple thing usually turned out to be all they needed.

Ward then outlined a few more tips for nurturing simple ideas and practices:

Practice that which is hard.

... rather than avoiding it. Do it every day, not just once. An example from software development is schema evolution. It's too hard to design perfect schema up front, so design them all the time.

Hang around after the fact.

After other people have explored an area heavily and the conventional wisdom is that all of the interesting stuff has been done, hang around. Tread in well-worn tracks, looking for opportunities to make easier what is hard. He felt that his work on objects was a good example.

Connect with lower level abstractions.

That's the beauty of Smalltalk -- so many levels of abstraction, all written in the same language, all connected in a common way.

Seek a compact expression.

In the design of programming languages, we often resist math's greatest strength -- the ability to create compact expressions -- in favor of uniformity. Smalltalk is two to four times more compact than Java, and Ward likes that -- yet he knows that it could be more compact. What would Smalltalk with fewer tokens feel like?

Reverse prevailing wisdom.

Think the opposite of what everyone says is the right way, or the only way. You may not completely reverse how you do what you do, but you will be able to think different thoughts. (Deconstruction reaches the technical world!)

From the audience, Joe Bergin pointed out his personal favorite variant of this wisdom (which, I think, ironically turns Ward's advice back on itself): Take a good idea and do it to the hilt. This is, of course, how Kent Beck initially described XP -- "Turn the knobs up to 10."

Simplicity is not universal but personal, because it builds upon a person's own pool of familiar ideas. For example, if you don't know mathematics, then you won't be able to avail yourself of the simplicities it offers. (Or want, or know to want to.)

At this point, Ward ended his formal talk and spent almost an hour demonstrating some of these ideas in the testbed of his current hobby project, the building of simple computers to drive an equally simple little CRT. I can't do justice to all he showed us during this phase of his keynote, but it was remarkable... He created his own assembly language and then built a little simulator for it in Java. He created macros. He played with simple patterns on the screen, and simple patterns in the wiring of his little computer board. He hasn't done anything to completion -- his language and simulator enable him to do only what he has done so far. But that has been good enough to learn some neat ideas about the machines themselves and about the programs he's written.

While I'm not sure anything concrete came out of this portion of his talk, I could see Ward's joy for exploration and his fondness for simple questions and just-good-enough solutions while playing.

We probably should all have paid close attention to what Ward was doing because, if the recent past has taught us anything, it is that in five years we will all be doing what Ward is playing with today.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

October 31, 2005 7:19 PM

"Mechanistic"

In my recent post on Gerry Sussman's talk at OOPSLA, I quoted Gerry Sussman quoting concert pianist James Boyk, and then commented:

A work of art is a machine with an aesthetic purpose.

(I am uncomfortable with the impression these quotes give, that artistic expression is mechanistic, though I believe that artistic work depends deeply on craft skills and unromantic practice.)

Thanks to the wonders of the web, James came across my post and responded to my parenthetical:

You may be amused to learn that fear of such comments is the reason I never said this to anyone except my wife, until I said it to Gerry! Nevertheless, my remark is true. It's just that word "machine" that rings dissonant bells for many people.

I was amused... I mean, I am a computer scientist and an old AI researcher. The idea of a program, a machine, being beautiful or even creating beauty has been one of the key ideas running through my entire professional life. Yet even for me the word "machine" conjured up a sense that devalued art. This was only my initial reaction to Sussman's sentiment, though. I also felt an almost immediate need to temper my discomfort with a disclaimer about the less romantic side of creation, in craft and repetition. I must be conflicted.

James then explained the intention underlying his use of the mechanistic reference in a way that struck close to home for me:

I find the "machine" idea useful because it leads the musician to look for, and expect to find, understandable structures and processes in works of music. This is productive in itself, and at the same time, it highlights the existence and importance of those elements of the music that are beyond this kind of understanding.

This is an excellent point, and it sheds light on other domains of creation, including software development. Knowing and applying programming patterns helps programmers both to seek and recognize understandable structures in large programs and to recognize the presence and importance of the code that lies outside of the patterns. This is true even -- especially!? -- for novice programmers, who are just beginning to understand programs and their structure, and the process of reading and writing them. Much of the motivation for work on the use of elementary patterns in instruction comes from this, as we try to help students learn to comprehend masses of code that at first glance may seem but a jumble but which in fact bear a lot of structure within them. Recognizing code that is and isn't part of a recurring structure, and understanding the role both play, is an essential skill for the novice programmer to learn.

Folks like Gerry Sussman and Dick Gabriel do us all a service by helping us to overcome our discomfort when thinking of machines and beauty. We can learn something about science and about art.

Thanks to James for following up on my post with his underlying insight!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

October 27, 2005 7:56 PM

OOPSLA Day 5: Grady Booch on Software Architecture Preservation

Grady Booch, free radical

Grady Booch refers to himself as an "IBM fellow and free radical". I don't know if 'free radical' is part of his job description or only a self-appellation, but it certainly fits his roving personality. He is a guy with many deep interests and a passion for exploring new lands.

His latest passion is the Handbook of Software Architecture, a project that many folks think is among the most important strategic efforts for the history and future of software development.

Booch opened his invited talk at OOPSLA by reminding everyone that "classical science advances via the dance between quantitative observation and theoretical construction." The former is deliberate and intentional; the latter is creative and testable. Computer science is full of empirical observation and the construction of theories, but in the world of software we often spend all of our time building artifacts and not enough time doing science. We have our share of theories, about process and tools, but much of that work is based on anecdote and personal experience, not the hard, dispassionate data that reflects good empirical work.

Booch reminisced about a discussion he had with Ralph Johnson at the Computer Museum a few years ago. They did a back-of-the-envelope calculation that estimated the software industry had produced approximately 1 trillion lines of code in high-level languages since the 1950s -- yet little systematic empirical study had been done of this work. What might we learn from digging through all that code? One thing I feel pretty confident of: we'd find surprises.

In discussing the legacy of OOPSLA, Booch mentioned one element of the software world launched at OOPSLA that has taken seriously the attempt to understand real systems: the software patterns community, of which Booch was a founding father. He hailed patterns as "the most important contribution of the last 10-12 years" in the software world, and I imagine that his fond evaluation rests largely on the patterns community's empirical contribution -- a fundamental concern for the structure of real software in the face of real constraints, not the cleaned-up structures and constraints of traditional computer science.

We have done relatively little introspection into the architecture of large software systems. We have no common language for describing architectures, no discipline that studies software in a historical sense. Occasionally, people publish papers that advance the area -- one that comes to mind immediately is Butler Lampson's Hints for Computer System Design -- but these are individual efforts, or ad hoc group efforts.

The other thing this brought to my mind was my time in an undergraduate architecture program. After the first-year course, every archie took courses in the History of Housing, in which students learned about existing architecture, both as a historical matter and to inform current practice. My friends became immersed in what had been done, and that certainly gave them a context in which to develop their own outlook on design. (As I look at the program's current curriculum, I see that the courses have been renamed History of Architecture, which to me trades the rich flavor of houses for the more generic 'architecture', even if it more accurately reflects the breadth of the courses.)

Booch spent the next part of his talk comparing software architecture to civil architecture. I can't do justice to this part of his talk; you should read the growing volume of content on his web site. One of his primary distinctions, though, involved the different levels of understanding we have about the materials we use. The transformation from vision to execution in civil systems is not different in principle from that in software, but we understand more about the physical materials that a civil architect uses than we do about a software developer's raw material. Hence the need to study existing software systems more deeply.

Civil architecture has made tremendous progress over the years in its understanding of materials, but the scale of its creations has not grown commensurately beyond what the ancients built. But the discipline has a legacy of studying the works of the masters.

Finally he listed a number of books that document patterns in physical and civil systems, including The Elements of Style -- not the Strunk and White version -- and the books of Christopher Alexander, the godfather of the software patterns movement.

Booch's goal is for the software community to document software architectures in equally great detail, both for history's sake and for the patterns that will help us create more and more beautiful systems. His project is one man's beginning, and an inspirational one at that. In addition to documenting classic systems such as MacPaint, he aims to preserve our classic software as well. That will enable us to study and appreciate it in new ways as our understanding of computing and software grows.

He closed his talk with inspiration but also a note of warning... He told the story of contacting Edsger Dijkstra to tell him about the handbook project and seek his aid in the form of code and papers and other materials from Dijkstra's personal collection. Dijkstra supported the project enthusiastically and pledged materials from his personal library -- only to die before the details had been formalized. Now, Booch must work through the Dijkstra estate in hopes of collecting any of the material pledged.

We are a young discipline, relatively speaking, but time is not on our side.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development

October 20, 2005 6:46 PM

OOPSLA This and That, Part 2

As always, OOPSLA has been a constant font of ideas. But this year's OOPSLA seems to have triggered even more than its usual share. I think that is a direct result of the leadership of Dick Gabriel and Ralph Johnson, who have been working for eighteen months to create a coherent and focused program. As much as I have already written this week -- and I know that some of my articles have been quite long; sorry for getting carried away... -- I have plenty of raw material to keep me busy for a while, including the rest of the Educators Symposium, Gerald Sussman's invited talk, my favorite neologism of the week, and the event starting as I type this: Grady Booch's conference-closing talk on his ambitious project to build a handbook of software architecture.

For now, I'd like to share just a few ideas from the two panels I attended in the middle part of this fine last day of OOPSLA.

The Echoes panel was aimed at exploring the echoes of the structured design movement of the late 1970s. It wasn't as entertaining or as earth-shaking as it might have been given its line-up of panelists, but I took away two key points:

  • Kent Beck said that he recently re-read Structured Design and was amazed how much of the stuff we talk about today is explained in that book. I remember reading that book for the first time back in 1985, after reading Structured Analysis and System Specification in my software engineering senior sequence. They shaped how I thought about software construction.

    I plan to re-read both books in the next year.

  • Grady Booch said that no one reads code, not like folks in other disciplines read the literature of their disciplines. And code is in many ways the real literature that we are creating. I agree with Grady and have long thought about how the CS courses we teach could encourage students to read real programs -- say, in operating systems, where students can read Linux line by line if they want. Certainly, I do this in my compilers course, but not with a non-trivial program. (Well, my programming languages students usually read a non-trivial interpreter, a Scheme interpreter written in Scheme modeled on McCarthy's original Lisp interpreter. That program is small but not trivial. It is the Maxwell equations of computing.)

    I am going to think about how to work this idea more concretely into the courses I teach in the next year.

Like Echoes, the Yoshimi Battles the Pink Robots panel -- on the culture war between programmers and users -- didn't knock my socks off, but Brian Foote was in classic form. I don't think that he was cast in the role of Yoshimi, but he defended the role of the programmer qua programmer.

  • His position statement quoted Howard Roark, the architect in Ayn Rand's The Fountainhead: "I don't build in order to have clients. I have clients in order to build."

    I immediately thought of a couple of plays on the theme of this quote:

    I don't teach to have students. I have students to teach.
    I don't blog to have readers. I have readers to blog.

  • Brian played on words, too, but not Howard Roark's. He read, in his best upper-crust British voice, the lyrics of "I Write the Code", with no apologies at all to Barry Manilow.
    I am hacker, and I write the code.

  • And this one was Just Plain Brian:
    You know the thing that is most annoying about users is that they have no appreciation for the glory, the grandeur, or the majesty of living inside the code. It is my cathedral.

Oh: on his way into the lecture to give his big talk, Grady Booch walked by, glanced at my iBook, and said, "Hey, another Apple guy. Cool."

It's been a good day.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

October 20, 2005 3:57 PM

OOPSLA Day 5: Martin Fowler on Finding Good Design

Sadly... The last day of OOPSLA is here. It will be a good day, including Onward! Films (or Echoes -- could I possibly skip a panel with Kent and so many Big Names?) (later note: I didn't), Lightning Talks (or the Programmers versus Users panel) (later note: I am at this panel now), and finally, of course, Grady Booch. But then it's over. And I'll be sad.

On to Martin or, as he would say, Mah-tin. Ralph Johnson quoted Brian Foote's epigrammatic characterization of Martin: an intellectual jackal with good taste in carrion. Soon, Ralph began to read his official introduction of Martin, written by ... Brian. It was a play on the location of the conference, in Fashion Valley. In the fashion world, good design is essential, and we know our favorite designers by one name: Versace, Halston, Kent, Ward -- and Mah-tin. Martin lives a life full of beautiful models, and he is "a pretty good writer, for an Englishman".

On came Martin. He roamed the full stage, and then sat down in a cushy chair.

When asked to keynote at a conference like OOPSLA, one is flush with pride. Then you say 'yes', which is your first mistake. What can you say? Keynoters are folks who make great ideas, who invent important stuff. Guy's talk was the best. But I'm not a creator or deep thinker, says Martin; I'm that guy who goes to a fashion show, steals the ideas he sees, knocks off the clothing, and then mass-markets them at the mall. I just want to know where we are; predicting the future is beyond me.

To shake things up, he called on George Platts to the stage. Those of you who have attended a PLoP conference know George as a "lateral thinking consultant" and games leader. Earlier, Martin had asked him to create a game for the whole crowd, designed to turn us into intellectual jackals. For this game, George drew his inspiration from the magnificent book Life of Pi. (Eugene says: If you have not read this book, do it. Now.) George read a list of animal sounds from one page in the book and told each of us to choose one. He then told each of us to warm up by saying our own name in that fashion. (I grunted "Eugene"). He then asked everyone whose first name started with A to stand and do it for the crowd. Then I and R; D, M, V; G, P, Y; ... and finally E, N, and W. Finally, the whole room stood and hissed, grunted, growled "OOPSLA".

Back came Martin to the forefront, for the talk itself.

Martin's career is aimed at answering the question, "What is good design?" He seeks an answer that is more than just fuzzy words and fuzzy ideas.

At the beginning of his career, Martin was an EE. In that discipline, a designer draws a diagram that delineates a product and then passes it on to the person who constructs the thing. When Martin moved on to software engineering, he found that it had adopted this same approach. But soon he came to reject it. Coding as construction fails. He found that designs -- other people's designs and his own -- always turned out to be wrong. Eventually, he met folks he recognized as good designers who design and code at the same time. Martin came to see that the only way to design well is to program at the same time, and the only way to program well is to design at the same time.

That's the philosophy underlying all of Martin's work.

How do we characterize good designs? A set of principles.

One principle central to all good designs is to eliminate duplication. He remembers when Kent told him this, but at the time Martin dismissed it as not very deep. But then he saw that there is much more to this than he first thought. It turns out that when you eliminate duplication, your designs end up good, almost irrespective of anything else. He noted that this has been a pattern in his life: see an important idea, dismiss it, and eventually come around. (And then write a book about it. :-)

Another principle: orthogonality, the notion that different parts of a system should do different kinds of things.

Another principle: separate the user interface from the guts of the program. One way to approach this is to imagine you have to add a new UI to your system. Any code you would have to duplicate is in the wrong place.
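
To make the principle concrete, here is a tiny sketch in Scheme -- my own toy example, not one of Martin's. The computation lives in one function, and each UI is a thin wrapper around it:

    (define (average-score scores)                ; core logic: no I/O here
      (/ (apply + scores) (length scores)))

    (define (display-average scores)              ; console UI
      (display "Average: ")
      (display (average-score scores))
      (newline))

    (define (average->html scores)                ; a second UI, added later
      (string-append "<p>Average: "
                     (number->string (average-score scores))
                     "</p>"))

If the averaging had been tangled into display-average, adding the HTML view would have forced us to duplicate it -- exactly Martin's test for code in the wrong place.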

Philosophy. Principles. What is the next P?

Patterns -- the finding of recurring structures. Martin turned to Ralph Johnson, who has likened pattern authors to Victorian biologists, who went off to investigate a single something in great detail -- cataloging and identifying and dissecting and classifying. You learn a lot about the one thing, but you also learn something bigger. Consider Darwin, who ultimately created the theory of natural selection.

The patterns community is about finding that core something which keeps popping up in a slightly different way all over the place. Martin does this in his work. He then did a cool little human demonstration of one of the patterns from his Patterns of Enterprise Application Architecture, an event queue. In identifying and naming the pattern, he found so many nice results, including some unexpected ones, like the ability to rewind and replay transactions.
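
I didn't capture the details of Martin's demonstration, but the flavor of the idea is easy to sketch in Scheme (my toy example, not Martin's): if every change to the system flows through a log of event records, then replaying the log against an initial state reconstructs any state you like.

    (define event-log '())                        ; the queue of events, oldest first

    (define (record-event! event)
      (set! event-log (append event-log (list event))))

    (define (apply-event state event)             ; an event is a list: (kind amount)
      (case (car event)
        ((deposit)  (+ state (cadr event)))
        ((withdraw) (- state (cadr event)))
        (else state)))

    (define (replay events state)                 ; fold the log over a starting state
      (if (null? events)
          state
          (replay (cdr events) (apply-event state (car events)))))

After recording '(deposit 100) and '(withdraw 30), (replay event-log 0) rebuilds the balance, 70 -- and replaying only a prefix of the log "rewinds" to an earlier state.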

Patterns people "surface" core ideas, chunk them up into the right scale, and identify when and when not to use them.

Why isn't the patterns community bigger? Martin thinks it should be bigger, and in particular should be a standard part of every academic's life! We academics should go out, look at big old systems for a couple of months, and try to make sense of them.

At the end of the session, James Noble pointed out two reasons why academic programmers don't do this: they do not receive academic credit for such activity, and they do not have access to the large commercial systems that they really need to study. On the first point, Martin agreed. This requires a shift in the academic culture, which is hard to do. (This issue of academic culture, publishing, and so on came up again at the Echoes panel later in the morning.) Martin volunteered to bellow in that direction whenever and wherever asked. On the second point, he answered with an emphatic 'yes!' Open source software opens a new avenue for this sort of work...

Philosophy. Principles. Patterns. What is the last P?

Practices. His favorite is, of course, refactoring. He remembers watching Kent Beck editing a piece of Martin's own code. Kent kept making these small, trivial changes to the code, but after each step Martin found himself saying, "Hmm, that is better." The lightbulb went off again.

He thanked John Brant and Don Roberts, Bill Opdyke and Ralph Johnson, and Ward and Kent. It's easy to write good books when other people have done the deep thinking.

He then pointed to Mary Beth Rosson's project that asks students to fix code as a way to learn. (Eugene thinks: But that's not refactoring, right?) Refactoring is a good practice for students as they learn. "Here is some code. Smell it. Come back when it's better."

My students had better get ready...

Another practice that Martin lives and loves is test-driven design. Of course, it embodies the philosophy he began this talk with: You design and program at the same time.

And thus endeth the lesson.

In addition to the comment on academics and patterns, James Noble asked another question. (Martin lamented, "First I was Footed, and now I am Nobled." If you know Foote and Noble, you can imagine why this prospect would put a speaker a bit on edge.) James played it straight, except for a bit of camera play with the mic and stage, and pointed out that separating the UI causes an increase in the total amount of code -- he claimed 50%. How is that better? Martin: I don't know. Folks have certainly questioned this principle. But then, we often increase duplication in order to make it go away; maybe there is a similarity here?

Or, as Martin closed, maybe he is just wrong. If so, he'll learn that soon enough.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

October 19, 2005 8:17 PM

More on Safety and Freedom in the Extreme

giving up liberty for safety

In my entry on Robert Hass's keynote address, I discussed the juxtaposition of 'conservative' and 'creative', the tension between the desire to be safe and the desire to be free, between memory and change. Hass warned us against the danger inherent in seeking safety, in preserving memory, to an extreme: blindness to current reality. But he never addressed the danger inherent in seeking freedom and change to the exclusion of all else. I wrote:

There is a danger in safety, as it can blind us to the value of change, can make us fear change. This was one of the moments in which Hass surrendered to a cheap political point, but I began to think about the dangers inherent in the other side of the equation, freedom. What sort of blindness does freedom lead us to?

giving up safety for liberty

During a conversation about the talk with Ryan Dixon, it hit me. The danger inherent in seeking freedom and change to an extreme is untethered idealism. Instead of "Ah, the good old days!", we have, "The world would be great if only...". When we don't show proper respect to memory and safety, we become blind in a different way -- to the fact that the world can't be the way it is in our dreams, that reality somehow precludes our vision.

That doesn't sound so bad, but people sometimes forget to leave other people out of their ideal view. We sometimes become so convinced by our own idealism that we feel a need to project it onto others, regardless of their own desires. This sort of blindness begins to look in practice an awful lot like the blindness of overemphasizing safety and memory.

Of course, when discussing creative habits, we need to be careful not to censor ourselves prematurely. As we discussed at Extravagaria, most people tend toward one extreme. They need encouragement to overcome their fears of failure and inadequacy. But that doesn't mean that we can divorce ourselves from reality, from human nature, from the limits of the world. Creativity, as Hass himself told us, thrives when it bumps into boundaries.

Being creative means balancing our desire for safety and freedom. Straying too far in either direction may work in the short term, but after too long in either land we lose something essential to the creative process.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

October 18, 2005 4:04 PM

OOPSLA Day 3: Robert Hass on Creativity

Robert Hass, former poet laureate of the US

With Dick Gabriel and Ralph Johnson leading OOPSLA this year, none of us were surprised that the themes of the conference were creativity and discovery. This theme presented itself immediately in the conference's opening keynote speaker, former poet laureate Robert Hass. He gave a marvelous talk on creativity.

Hass began his presentation by reading a poem (whose name I missed) from Dick's new chapbook, Drive On. Bob was one of Dick's early teachers, and he clearly reveled in the lyricism, the rhythm of the poem. Teachers often form close bonds with their students, however long or short the teaching relationship. I know the feeling from both sides of the phenomenon.

He then described his initial panic at the thought of introducing the topic of creativity to a thousand people who develop software -- who create, but in a domain so far from his expertise. But a scholar can find ways to understand and transmit ideas of value wherever they live, and Hass is not only a poet but a first-rate scholar.

Charles Dickens burst on the scene with the publication of The Pickwick Papers. With this novel, Dickens essentially invented the genre of the magazine-serialized novel. When asked how he created a new genre of literature, he said simply, "I thought of Pickwick."

I was immediately reminded of something John Gribble said in his talk at Extravagaria on Sunday: Inspiration comes to those already involved in the work.

Creativity seems to happen almost without cause. Hass consulted with friends who have created interesting results. One solved a math problem thought unsolvable by reading the literature and "seeing" the answer. Another claimed to have resolved the two toughest puzzles in his professional career by going to sleep and waking up with the answer.

So Hass offered his first suggestion for how to be creative: Go to sleep.

Human beings were the first animals to trade instinct for learning. The first major product of our learning was our tools. We made tools that reflected what we learned about solving immediate problems we faced in life. These tools embodied the patterns we observed in our universe.

We then moved on to broader forms of patterns: story, song, and dance. These were, according to Hass, the original forms of information storage and retrieval, the first memory technologies. Eventually, though, we created a new tool, the printing press, that made these fundamentally less essential -- less important!? And now the folks in this room contribute to the ultimate tool, the computer, which in many ways obsoletes human memory technology. As a result, advances in human memory tech have slowed, nearly ceased.

The bulk of Hass's presentation explored the interplay between the conservative in us (the desire to preserve in memory) and the creative in us (the desire to create anew). This juxtaposition of 'conservative' and 'creative' begets a temptation for cheap political shots, to which even Hass himself surrendered at least twice. But the juxtaposition is essential, and Hass's presentation repeatedly showed the value and human imperative for both.

Evolutionary creativity depends on the presence of a constant part and a variable part, for example, the mix of same and different in an animal's body, in the environment. The simultaneous presence of constant and variable is the basis of man's physical life. It is also the basis of our psychic life. We all want security and freedom, in an unending cycle. Indeed, I believe that most of us want both all the time, at the same time. Conservative and creative, constant and variable -- we want and need both.

Humans have a fundamental desire for individuation, even while still feeling a oneness with our mothers, our mentors, the sources of our lives. Inspiration, in a way, is how a person comes to be herself -- is in part a process of separation.

"Once upon a time" is linguistic symbol, the first step of the our separation from the immediate action of reading into a created world.

At the same time, we want to be safe and close, free and faraway. Union and individuation. Remembering and changing.

Most of us think that most everyone else is more creative than we are. This is a form of the fear John Gribble spoke about on Sunday, one of the blocks we must learn to eliminate from our minds -- or at least fool ourselves into ignoring. (I am reminded of John Nash choosing to ignore the people his mind fabricates around him in A Beautiful Mind.)

Hass then told a story about the siren song from The Odyssey. It turns out that most of the stories in Homer's epics are based in "bear stories" much older than Homer. Anyway, Odysseus's encounter with the sirens is part of a story of innovation and return, freedom on the journey followed by a return to restore safety at home. Odysseus exhibits the creativity of an epic hero: he ties himself to the mast so that he can hear the sirens' song without having to take his ship so close to the rocks.

According to Hass, in some versions of the siren story, the sirens couldn't sing -- the song was only a sailors' legend. But the sailors desired to hear the beautiful song, if it existed. Odysseus took a path that allowed him both safety and freedom, without giving up his desire.

In preparing for this talk, Hass asked himself, "Why should I talk to you about creativity? Why think about it at all?" He identified at least four very good reasons, the desire to answer these questions:

  • How can we cultivate creativity in ourselves?
  • How can we cultivate creativity in our children?
  • How can we identify creative people?
  • How can we create environments that foster creativity?

So he went off to study what we know about creativity. A scholar does research.

Creativity research in the US began when academic psychologists began trying to measure mental characteristics. Much of this work was done at the request of the military. As time went by, the number of characteristics grew, perhaps in correlation with the research grants awarded by the government. Creativity is, perhaps, correlated with salesmanship. :-) Eventually, researchers found several important results, including that there is little or no correlation between IQ and creativity. Creativity is not the province of the intellectually gifted.

Hass cited the research of Howard Gardner and Mihaly Csikszentmihalyi (remember him?), both of whom worked to identify key features of the moment of a creative change, say, when Dickens thought to publish a novel in serial form. The key seems to be immersion in a domain, a fascination with the domain and its problems and possibilities. The creative person learns the language of the domain and sees something new. Creative people are not problem solvers but problem finders.

I am not surprised to find language at the center of creativity! I am also not surprised to know that creative people find problems. I think we can say something even stronger: creative people often create their own problems to solve. This is one of the characteristics that biases me away from creativity: I am a solver more than a finder. But thinking explicitly about this may enable me to seek ways to find and create problems.

That is, as Hass pointed out earlier, one of the reasons for thinking about creativity: ways to make ourselves more creative. But we can use the same ideas to help our children learn the creative habit, and to help create institutions that foster the creative act. He mentioned OOPSLA as a social construct in the domain of software that excels at fostering creativity. It's why we all keep coming back. How can we repeat the process?

Hass spoke more about important features of domains. For instance, it seems to matter how clear the rules of the domain are at the point that a person enters it. Darwin is a great example. He embarked on his studies at a time when the rules of his domain had just become fuzzy again. Geology had recently expanded European science's understanding of the timeline of the earth; Linnaeus had recently invented his taxonomy of organisms. So, some of the knowledge Darwin needed was in place, but other parts of the domain were wide open.

The technology of memory is a technology of safety. What are the technologies of freedom?

Hass read us a funny poem on story telling. The story teller was relating a myth of his people. When his listener questioned an inconsistency in his story, the story teller said, "You know, when I was a child, I used to wonder that..." Later, the listener asked the same question again, and again, and each time the story teller said, "You know, when I was a child, I used to wonder that..." When he was a child, he questioned the stories, but as he grew older -- and presumably wiser -- he came to accept the stories as they were, to retell them without question.

We continue to tell our stories for their comfort. They make us feel safe.

There is a danger in safety, as it can blind us to the value of change, can make us fear change. This was one of the moments in which Hass surrendered to a cheap political point, but I began to think about the dangers inherent in the other side of the equation, freedom. What sort of blindness does freedom lead us to?

Software people and poets have something in common, in the realm of creativity: We both fall in love with patterns, with the interplay between the constant and the variable, with infinite permutation. In computing, we have the variable and the value, the function and the parameter, the framework and the plug-in. We extend and refactor, exposing the constant and the variable in our problem domains.

Hass repeated an old joke, "Spit straight up and learn something." We laugh, a mockery of people stuck in the same old patterns. This hit me right where I live. Yesterday at the closing panel of the Educators' Symposium, Joe Bergin said something that I wrote about a while back: CS educators are an extremely conservative lot. I have something to say about that panel, soon...

First safety, then freedom -- and with it the power to innovate.

Of course, extreme danger, pressure, insecurity can also be the necessity that leads to the creative act. As is often the case, opposites turn out to be true. As Thomas Mann said,

A great truth is a truth whose opposite is also a great truth.

Hass reminds us that there is agony in creativity -- a pain at stuckness, found in engagement with the world. Pain is unlike pleasure, which is homeostatic ("a beer and a ballgame"). Agony is dynamic, a refusal to cling to a safe position. There is always an element of anxiety, consciousness heightened at the moment of insight, gestalt in the face of an incomplete pattern.

The audience asked a couple of questions:

  • Did he consult only men in his study of creativity? Yes, all but his wife, who is also a poet. She said, "Tell them to have their own strangest thoughts." What a great line.

  • Is creativity unlimited? Limitation is essential to creativity. If our work never hits an obstacle, then we don't know when it's over. (Sounds like test-driven development.) Creativity is always bouncing up against a limit.

I'll close my report with how Hass closed the main part of his talk. He reached "the whole point of his talk" -- a sonnet by Michelangelo -- and he didn't have it in his notes!! So Hass told us the story in paraphrase:

The pain is unbearable, paint dripping in my face, I climb down to look at it, and it's horrible, I hate it, I am no painter...

It was the ceiling of the Sistine Chapel.

~~~~~

UPDATE (10/20/05): Thanks again to Google, I have tracked down the sonnet that Hass wanted to read. I especially love the ending:

Defend my labor's cause,
good Giovanni, from all strictures:
I live in hell and paint its pictures.

-- Michelangelo Buonarroti

I have felt this way about a program before. Many times.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

October 16, 2005 9:52 PM

OOPSLA Day 1: The Morning of Extravagaria

OOPSLA 2005 logo

OOPSLA has arrived, or perhaps I have arrived at OOPSLA. I almost blew today off, for rest and a run and work in my room. Some wouldn't have blamed me after yesterday, which began at 4:42 AM with a call from Northwest Airlines that my 7:05 AM flight had been cancelled, included my airline pilot missing the runway on his first pass at the San Diego airport, and ended standing in line for two hours to register at my hotel. But I dragged myself out of my room -- in part out of a sense of obligation to having been invited to participate, and in part out of a schoolboy sense of propriety that I really ought to go to the events at my conferences and make good use of my travels.

My event for the day was an all-day workshop called Extravagaria III: Hunting Creativity. As its title reveals, this workshop was the third in a series of workshops initiated by Richard Gabriel a few years ago. Richard is motivated by the belief that computer science is in the doldrums, that what we are doing now is mostly routine and boring, and that we need a jolt of creativity to take the next Big Step. We need to learn how to write "very large-scale programs", but the way we train computer scientists, especially Ph.D. students and faculty, enforces a remarkable conservatism in problem selection and approach. The Extravagaria workshops aim to explore creativity in the arts and sciences, in an effort to understand better what we mean by creativity and perhaps better "do it" in computer science.

The workshop started with introductions, as so many do, but I liked the twist that Richard tossed in: each of us was to tell what was the first program we ever wrote out of passion. This would reveal something about each of us to one another, and also perhaps recall the same passion within each storyteller.

My first thought was of a program I wrote as a high school junior, in a BASIC programming course that was my first exposure to computers and programs. We wrote all the standard introductory programs of the day, but I was enraptured with the idea of writing a program to compute ratings for chessplayers following the Elo system. This was much more complex than the toy problems I solved in class, requiring input in the form of player ratings and a crosstable showing the results of games among the players, and output in the form of updated ratings for each player. It also introduced new sorts of issues, such as using text files to save state between runs and -- even more interesting to me -- the generation of an initial set of ratings through a mechanism of successive approximations, a process that may never quite converge unless we specified an epsilon larger than 0. I ultimately wrote a program of several hundred lines, a couple of orders of magnitude larger than anything I had written before. And I cared deeply about my program, the problem it solved, and its usefulness to real people.
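
For the curious, the heart of the Elo system fits in a few lines. Here is a sketch in Scheme rather than the BASIC I used then, with the standard constants rather than whatever my program actually used:

    (define (expected-score ra rb)                ; probability that A beats B
      (/ 1 (+ 1 (expt 10 (/ (- rb ra) 400.0)))))

    (define (update-rating ra rb score k)         ; score: 1 = win, 1/2 = draw, 0 = loss
      (+ ra (* k (- score (expected-score ra rb)))))

So (update-rating 1600 1500 1 32) nudges the winner up about 11 points. The successive-approximation step for unrated players amounts to re-running updates over the whole crosstable until no rating moves by more than epsilon.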

I enjoyed everyone else's stories, too. They reminded us all about the varied sources of passion, and how solving a problem can open our eyes to a new world for us to explore. I was pleased by the diversity of our lot, which included workshop co-organizer John Gribble, a poet friend of Richard's who has never written a program; Rebecca Rikner, the graphic artist who designed the wonderful motif for Richard's book Writers' Workshops and the Work of Making Things; and Guy Steele, one of the best computer scientists around. The rest of us were computer science and software types, including one of my favorite bloggers, Nat Pryce. Richard's first passionate program was perhaps a program to generate "made-up words" from some simple rules, to use in naming his rock-and-roll band. Guy offered three representative, if not first, programs: a Lisp interpreter written in assembly language, a free verse generator written in APL, and a flow chart generator written in RPG. This wasn't the last mention of APL today, which is often the sign of a good day.

Our morning was built around an essay written by John Gribble for the occasion, called "Permission, Pressure, and the Creative Process". John read his essay, while occasionally allowing us in the audience to comment on his remarks or previous comments. John offered as axioms two beliefs that I share with him:

  • that all people are creative, that is, possess the potential to act creatively, and
  • that there is no difference of kind between creativity in the arts and creativity in the sciences.

What the arts perhaps offer scientists is the history and culture of examining the creative process. We scientists and other analytical folks tend to focus on products, often to the detriment of how well we understand how we create them.

John quoted Stephen King from his book On Writing, that the creator's job is not to find good ideas but to recognize them when they come along. For me, this idea foreshadows Ward Cunningham's keynote address at tomorrow's Educators' Symposium. Ward will speak on "nurturing the feeble simplicity", on recognizing the seeds of great ideas despite their humility and nurturing them into greatness. As Brian Foote pointed out later in the morning, this sort of connection is what makes conferences like OOPSLA so valuable and fun -- plunk yourself down into an idea-rich environment, soak in good ideas from good minds, and your own mind has the raw material it needs to make connections. That's a big part of creativity!

John went on to assert that creativity isn't rare, but rather so common that we are oblivious to it. What is rare is for people to act on their inspirations. Why do we not act? We have so low an opinion of ourselves that we figure the inspiration isn't good enough or that we can't do it justice in our execution. Another reason: We fear to fail, or to look bad in front of our friends and colleagues. We are self-conscious, and the self gets in the way of the creative act.

Most people, John believes, need permission to act creatively. Most of us need external permission and approval to act, from friends or colleagues, peers or mentors. This struck an immediate chord with me in three different relationships: student and teacher, child and parent, and spouse and spouse. The discussion in our workshop focused on the need to receive permission, but my immediate thought was of my role as potential giver of permission. My students are creative, but most of them need me to give them permission to create. They are afraid of bad grades and of disappointing me as their instructor; they are self-conscious, as going through adolescence and our school systems tend to make them. My young daughters began life unself-conscious, but so much of their lives is about bumping into boundaries and being told "Don't do that." I suspect that children grow up most creative in an environment where they have permission to create. (Note that this is orthogonal to the issue of discipline or structure; more on that later.) Finally, just as I find myself needing my wife's permission to do and act -- not in the henpecked husband caricature, but in the sense of really caring about what she thinks -- she almost certainly feels the need for *my* permission. I don't know why this sense that I need to be a better giver of permission grew so strong so quickly today, but it seemed like a revelation. Perhaps I can change my own behavior to help those around me feel like they can create what they want and need to create. I suspect that, in loosening the restrictions I project onto others, I will probably free myself to create, too.

When author Donald Ritchie is asked how to start writing, he says, "First, pick up your pencil..." He's not being facetious. If you wait for inspiration to begin, then you'll never begin. Inspiration comes to those already involved in the work.

Creativity can be shaped by constraints. I wrote about this idea six months or so ago in an entry named Patterns as a Source of Freedom. Rebecca suggested that for her at least constraints are essential to creativity, that this is why she opted to be a graphic designer instead of a "fine artist". The framework we operate in can change, across projects or even within a project, but the framework can free us to create. Brian recalled a song by the '80s punk-pop band Devo called Freedom Of Choice:

freedom of choice is what you got
then if you got it you don't want it
seems to be the rule of thumb
don't be tricked by what you see
you got two ways to go
freedom from choice is what you want

Richard then gave a couple of examples of how some artists don't exercise their choice at the level of creating a product but rather at the level of selecting from lots of products generated less self-consciously. In one, a photographer for National Geographic put together a pictorial article containing 22 pictures selected from 40,000 photos he snapped. In another, Francis Ford Coppola shot 250 hours of film in order to create the 2-1/2 hour film Apocalypse Now.

John then told a wonderful little story about an etymological expedition he took along the trail of ideas from the word "chutzpah", which he adores, to "effrontery", "presumptuous", and finally "presumption" -- to act as if something were true. This is a great way to free oneself to create -- to presume that one can, that one will, that one should. Chutzpah.

Author William Stafford had a particular angle he took on this idea, what he termed the "path of stealth". He refused to believe in writer's block. He simply lowered his standards. This freed him to write something and, besides, there's always tomorrow to write something better. But as I noted earlier, inspiration comes to those already involved in the work, so writing anything is better than writing nothing.

As editor John Gould once told Stephen King, "Write with the door closed. Revise with the door open." Write for yourself, with no one looking over your shoulder. Revise for readers, with their understanding in mind.

Just as permission is crucial to creativity, so is time. We have to "make time", to "find time". But sometimes the work is on its own time, and will come when and at the rate it wants. Creativity demands that we allow enough time for that to happen! (That's true even for the perhaps relatively uncreative act of writing programs for a CS course... You need time, for understanding to happen and take form in code.)

Just as permission and time are crucial to creativity, John said, so is pressure. I think we all have experienced times when a deadline hanging over our heads seemed to give us the power to create something we would otherwise have procrastinated away. Maybe we need pressure to provide the energy to drive the creative act. This pressure can be external, in the form of a client, boss, or teacher, or internal.

This is one of the reasons I do not accept late work for a grade in my courses; I believe that most students benefit from that external impetus to act, to stop "thinking about it" and commit to code. Some students wait too long and reach a crossover point: the pressure grows quite high, but time is too short. Life is a series of balancing acts. The play between pressure and time is, I think, fundamental. We need pressure to produce, but we need time to revise. The first draft of a paper, talk, or lecture is rarely as good as it can be. Either I need to give myself time to create more and better drafts, or -- which works better for me -- I need to find many opportunities to deliver the work, to create multiple opportunities to create in the small through revision, variation, and natural selection. This is, I think, one of the deep and beautiful truths embedded in extreme programming's cycle "write a test, write code, and refactor".

Ultimately, a professional learns to rely more on internal pressure, pressure applied by the self for the self, to create. I'm not talking about the censoriousness of self-consciousness, discussed earlier, which tells us that what we produce isn't good enough -- that we should not act, at least in the current product. I'm talking about internal demands that we act, in a particular way or time. Accepting the constraints of a form -- say, the line and syllable restrictions of haiku, or the "no side effects" convention of functional programming style -- puts pressure on us to act in a particular way, whether it's good or bad. John gave us two other kinds of internal pressure, ones he applies to himself: the need to produce work to share at his two weekly writers' workshops, and the self-discipline of submitting work for publication every month. These pressures involve outside agents, but they are self-imposed, and require us to do something we might otherwise not.

John closed with a short inspiration. Pay attention to your wishes and dreams. They are your mind's way of telling you to do something.

We spent the rest of the morning chatting as a group on whatever we were thinking after John's talk. Several folks related an experience well-known to any teacher: someone comes to us asking for help with a problem and, in the act of explaining the problem to us, they discover the answer for themselves. Students do this with me often. Is the listener essential to this experience, or could we get the same effect just by pretending to speak to someone? I suspect that another person is essential for this to work for the learner, both because having a real person to talk to makes us explain things (pressure!) and because the listener can force us to explain the problem more clearly ("I don't understand this yet...")

A recurring theme of the morning was the essential interactivity of creativity, even when the creator works fundamentally alone. Poets need readers. Programmers need other folks to bounce ideas off of. Learners need someone to talk to, if only to figure things out for themselves. People can be sources of ideas. They can also be reflectors, bouncing our own ideas back at us, perhaps in a different form or perhaps the same, but with permission to act on them. Creativity usually comes in the novel combination of old ideas, not truly novel ideas.

This morning session was quite rewarding. Fittingly, I have covered only about half of my notes on the workshop, but this article has already gotten quite long. So I think I'll save the afternoon sessions for entries to come. These sessions were quite different from the morning, as we created things together and then examined our processes and experiences. They will make fine stand-alone articles that I can write later -- after I break away for a bite at the IBM Eclipse Technology Exchange reception and for some time to create a bit on my own for what should take over my mind for a few hours: tomorrow's Educators' Symposium, which is only twelve hours and one eight- to ten-mile San Diego run away!


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 23, 2005 7:43 PM

Proof Becky Hirta isn't Doug Schmidt

... is right here.

I'm not saying that Doug doesn't have a social life; for all I know, he's a wild and crazy guy. But he has the well-earned reputation of answering e-mail at all hours of the day. I've sent him e-mail at midnight, only to have an answer hit my box within minutes. His grad students all tell me they've had the same experience, only at even more nocturnal hours. They are in awe. I'm a mix of envious and relieved.

I don't know why this was the first thought that popped into my head when I read Becky's post just now. Maybe it has something to do with the fact that I'm sitting in my office at 7:30 PM on a Friday evening. (Shh. Don't tell my students.)


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

September 21, 2005 8:22 PM

Two Snippets, Unrelated?

First... A student related this to me today:

But after your lecture on mutual recursion yesterday, [another student] commented to me, "Is it wrong to think code is beautiful? Because that's beautiful."

It certainly isn't wrong to think code is beautiful. Code can be beautiful. Read McCarthy's original Lisp interpreter, written in Lisp itself. Study Knuth's TeX program, or Wirth's Pascal compiler. Live inside a Smalltalk image for a while.

I love to discover beautiful code. It can be professional code or amateur, open source or closed. I've even seen many beautiful programs written by students, including my own. Sometimes a strong student delivers something beautiful as expected. Sometimes, a student surprises me by writing a beautiful program seemingly beyond his or her means.

The best programmers strive to write beautiful code. Don't settle for less.

(What is mutual recursion, you ask? It is a technique used to process mutually-inductive data types. See my paper Roundabout if you'd like to read more.)
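
If you'd like a taste without the paper, here is a standard example of the idea in Scheme -- not necessarily the one in Roundabout. The two functions mirror two mutually-inductive definitions: an s-list is a list of s-expressions, and an s-expression is either a symbol or an s-list.

    (define (count-symbols-in-slist slst)         ; slst: a list of s-expressions
      (if (null? slst)
          0
          (+ (count-symbols-in-sexp  (car slst))
             (count-symbols-in-slist (cdr slst)))))

    (define (count-symbols-in-sexp sexp)          ; sexp: a symbol or an s-list
      (if (symbol? sexp)
          1
          (count-symbols-in-slist sexp)))

(count-symbols-in-slist '(a (b (c)) d)) returns 4. The shape of the data dictates the shape of the code, which is part of the beauty.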

The student who told me the quote above followed with:

That says something about the kind of students I'm associating with.

... and about the kind of students I have in class. Working as an academic has its advantages.

Second... While catching up on some blog reading this afternoon, I spent some time at Pragmatic Andy's blog. One of his essays was called What happens when t approaches 0?, where t is the time it takes to write a new application. Andy claims that this is the inevitable trend of our discipline and wonders how it will change the craft of writing software.

I immediately thought of one answer, one of those unforgettable Kent Beck one-liners. On a panel at OOPSLA 1997 in Atlanta, Kent said:

As speed of development approaches infinity, reusability becomes irrelevant.

If you can create a new application in no time flat, you would never worry about reusing yesterday's code!

----

Is there a connection between these two snippets? Because I am teaching a programming languages course this semester, and particularly a unit on functional programming right now, these snippets both call to mind the beauty in Scheme.

You may not be able to write networking software or graphical user interfaces using standard Scheme "out of the box", but you can capture some elegant patterns in only a few lines of Scheme code. And, because you can express rather complex computations in only a few lines of code, the speed of development in Scheme or any similarly powerful language approaches infinity much faster than does development in Java or C or Ada.
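
One small illustration of what I mean, using nothing beyond standard Scheme: a generic fold captures the recursive pattern of walking a list once, and other "loops" then fall out as one-liners.

    (define (fold op init lst)                    ; the recursive pattern, written once
      (if (null? lst)
          init
          (op (car lst) (fold op init (cdr lst)))))

    (define (sum lst)    (fold + 0 lst))
    (define (map* f lst) (fold (lambda (x acc) (cons (f x) acc)) '() lst))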

I do enjoy being able to surround myself with the possibility of beauty and infinity each day.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 09, 2005 12:44 PM

Missing PLoP

PLoP 2005 Banner

This being Friday, September 9, I have to come to grips with the fact that I won't be participating in PLoP 2005. This is the 12th PLoP, and it would have been my 10th, all consecutively. PLoP has long been one of my favorite conferences, both for how much it helps me to improve my papers and my writing and for how many neat ideas and people I encounter there. Last year's PLoP led directly to four blog entries, on patterns as autopsy, patterns and change, patterns and myth, and the wiki of the future, not to mention a few others indirectly later. Of course, I also wrote about the wonderful running in and around Allerton Park, the pastoral setting of PLoP. I will dearly miss doing my 20-miler there this weekend...


Sadly, events conspired against me going to PLoP this year. The deadline for submissions fell at about the same time as the deadline for applications for the chair of my department, both of which fell around when I was in San Diego for the spring planning meeting for OOPSLA 2005. Even had I submitted, I would have had a hard time doing justice to paper revisions through the summer, as I learned the ropes of my new job. And now, new job duties make this a rather bad time to hop into the car and drive to Illinois for a few days. (Not that I wasn't tempted early in the week to drive down this morning to join in for a day!)

I am not certain if other academics feel about some of the conferences they attend the way I feel about PLoP, OOPSLA, and ChiliPLoP, but I do know that I don't feel the same way about other conferences that I attend, even ones I enjoy. The PLoPs offer something that no other conference does: a profound concern for the quality of technical writing and communication more generally. PLoP Classic, in particular, has a right-brain feel unlike any conference I've attended. But don't think that this is a result of being overrun by a bunch of non-techies; the conference roster is usually dominated by very good minds from some of the best technical outfits in the world. But these folks are more than just techies.

Unfortunately, I'm not the only regular who is missing PLoP this year. The attendee list for 2005 is smaller than in recent years, as were the numbers of submissions and accepted papers. The Hillside Group is considering the future of PLoP and ChiliPLoP in light of the more mainstream role of patterns in the software world and the loss of cachet that comes when one software buzzword is replaced with another. Patterns are no longer the hot buzzword -- they are a couple of generations of buzzwords beyond that -- which changes how conferences in the area need to be run. I think, though, that it is essential for us to maintain a way for new pattern writers to enter the community and be nurtured by pattern experts. It is also essential that we continue to provide a place where we care not only about the quality of technical content but also about the quality of technical writing.

On the XP mailing list this morning, Ron Jeffries mentioned the "patterns movement":

In the patterns movement, there was the notion of "forces". The idea of applying a pattern or pattern language was to "balance" the forces.

I hope Ron's use of past tense was historical, in the sense that the patterns movement first emphasized the notion of forces "back then", rather than a comment that the patterns movement is over. The heyday is over, but I think we software folks still have a lot to learn about design from patterns.

I am missing PLoP 2005. Sigh. I guess there's always ChiliPLoP 2006. Maybe I can even swing EuroPLoP next year.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

June 29, 2005 8:40 AM

Learning from the Masters

This spring I was asked to participate on a panel at XP2005, which recently wrapped up in the UK. This panel was on agile practices in education, and as you may guess I would have enjoyed sharing some of my ideas and learning from the other panelists and from the audience. Besides, I've not yet been to any of the agile software development conferences, and this seemed like a great opportunity. Unfortunately, work and family duties kept me home for what is turning out to be a mostly at-home summer.

In lieu of attending XP2005, I've enjoyed reading blog reports of the goings-on. One of the highlights seems to have been Laurent Bossavit's Coding Dojo workshop. I can't say that I'm surprised. I've been reading Laurent's blog, Incipient(thoughts), for a while and exchanging occasional e-mail messages with him about software development, teaching, and learning. He has some neat ideas about learning how to develop software through communal practice and reflection, and he is putting those ideas into practice with his dojo.

The Coding Dojo workshop inspired Uncle Bob to write about the notion of repeated practice of simple exercises. Practice has long been a theme of my blog, going back to one of my earliest posts. In particular, I have written several times about relatively small exercises that Joe Bergin and I call etudes, after the compositions that musicians practice for their own sake, to develop technical skills. The same idea shows up in an even more obviously physical metaphor in Pragmatic Dave's programming katas.

The kata metaphor reminds us of the importance of repetition. As Dave wrote in another essay, students of the martial arts repeat basic sequences of moves for hours on end. After mastering these artificial sequences, the students move on to "kumite", or organized sparring under the supervision of a master. Kumite gives the student an opportunity to assemble sequences of basic moves into sequences that are meaningful in combat.

Repeating programming etudes can offer a similar experience to the student programmer. My re-reading of Dave's article has me thinking about the value of creating programming etudes at two levels, one that exercises "basic moves" and one that gives the student an opportunity to assemble sequences of basic moves in the context of a more open-ended problem.

But the pearl in my post-XP2005 reading hasn't been so much the katas or etudes themselves, but one of the ideas embedded in their practice: the act of emulating a master. The martial arts student imitates a master in the kata sequences; the piano student imitates a master in playing Chopin's etudes. The practice of emulating a master as a means to developing technical proficiency is ubiquitous in the art world. Renaissance painters learned their skills by emulating the masters to whom they were apprenticed. Writers often advise novices to imitate the voice or style of a writer they admire as a way to ingrain how to have a voice or follow a style. Rather than creating a mindless copycat, this practice allows the student to develop her own voice, to find or develop a style that suits her unique talents. Emulating the master constrains the student, which frees her to focus on the elements of the craft without the burden of speaking her own voice or being labeled as "derivative".

Uncle Bob writes of how this idea means just as much in the abstract world of software design:

Michael Feathers has long pondered the concept of "Design Sense". Good designers have a "sense" for design. They can convert a set of requirements into a design with little or no effort. It's as though their minds were wired to translate requirements to design. They can "feel" when a design is good or bad. They somehow intrinsically know which design alternatives to take at which point.

Perhaps the best way to acquire "Design Sense" is to find someone who has it, put your fingers on top of theirs, put your eyeballs right behind theirs, and follow along as they design something. Learning a kata may be one way of accomplishing this.

Watching someone solve a kata in a workshop can give you this sense. Participating in a workshop with a master, perhaps as programming partner, perhaps as supervisor, can, too.

The idea isn't limited to software design. Emulating a master is a great way to learn a new programming language. About a month ago, someone on the domain-driven design mailing list asked about learning a new language:

So assuming someone did want to learn to think differently, what would you go with? Ruby, Python, Smalltalk?

Ralph Johnson's answer echoed the importance of working with a master:

I prefer Smalltalk. But it doesn't matter what I prefer. You should choose a language based on who is around you. Do you know somebody who is a fan of one of these languages? Could you talk regularly with this person? Better yet, could you do a project with this person?

By far the best way to learn a language is to work with an expert in it. You should pick a language based on people who you know. One expert is all it takes, but you need one.

The best situation is where you work regularly with the expert on a project using the language, even if it is only every Thursday night. It would be almost as good if you would work on the project on your own but bring code samples to the expert when you have lunch twice a week.

It is possible to learn a language on your own, but it takes a long time to learn the spirit of a language unless you interact with experts.

Smalltalk or Scheme may be the best in some objective (or entirely subjective!) sense, but unless you can work with an expert... it may not be the right language for you, at least right now.

As a student programmer -- and aren't we all? -- find a person to whom you can "apprentice" yourself. Work on projects with your "master", and emulate his style. Imitate not only high-level design style but also those little habits that seem idiosyncratic and unimportant: name your files and variables in the same way; start your programming sessions with the same rituals. You don't have to retain all of these habits forever, and you almost certainly won't. But in emulating the master you will learn and internalize patterns of practice, patterns of thinking, and, yes, patterns of design and programming. You'll internalize them through repetition in the context of real problems and real programs, which give the patterns the richness and connectedness that make them valuable.

After lots of practice, you can begin to reflect on what you've learned and to create your own style and habits. In emulating a master first, though, you will have a chance to see deeper into the mind and actions of someone who understands, and to use what you see to begin to understand yourself better, without the pressure of needing to have a style of your own yet.

If you are a computer scientist rather than a programmer, you can do much the same thing. Grad students have been doing this as long as there have been grad students. But in these days of the open-source software revolution, any programmer with a desire to learn has ample opportunity to go beyond the Powerbook on his desk. Join an open-source project and interact with a community of experts and learners -- and their code.

And we still have open to us a more traditional avenue, in even greater abundance: literature. Seek out a writer whose books and articles can serve in an expert's stead. Knuth, Floyd, Beck, Fowler... the list goes on and on. All can teach you through their prose and their code.

Knowing and doing go hand in hand. Emulating the masters is an essential part of the path.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

June 02, 2005 1:46 PM

On "Devoid of Content"

A couple of days ago, blog reader Mike McMillan sent me a link to Stanley Fish's New York Times op-ed piece, Devoid of Content. Since then, several of my CS colleagues have recommended this article. Why all the interest in an opinion piece written by an English professor?

The composition courses that most university students take these days emphasize writing about something: current events, everyday life, or literature. But Fish's freshman composition class does something much different. He asks students to create a new language, "complete with a syntax, a lexicon, a text, rules for translating the text and strategies for teaching your language to fellow students". He argues that the best way to learn how to write is to have a deep understanding of "a single proposition: A sentence is a structure of logical relationships." His students achieve this understanding by having to design the mechanisms by which sentences represent relationships, such as tense, number, manner, mood, and agency.

Fish stakes out a position that is out of step with contemporary academia: Learning to write is about form, not content. Content is not only not the point; it is a dangerous delusion that prevents students from learning what they most need.

Content is a lure and a delusion, and it should be banished from the classroom. Form is the way.

Fish doesn't say that content isn't important, only that it should not be the focus of learning to write. Students learn content in their science courses, their social science courses, and their humanities courses -- yes, even in their literature courses.

(I, for one, am pleased to see Fish distinguish the goals of the composition courses taught in English departments from the goals of the literature courses taught there. Too many students lose interest in their comp courses when they are forced to write about, oh, a poem by Edna St. Vincent Millay. Just because a student doesn't connect with twentieth-century lyric poetry doesn't mean that he shouldn't or can't learn to write well.)

So, how is Fish's argument relevant to a technical audience? If you have read my blog much, you can probably see my interest in the article. I like to read books about writing, to explore ways of writing better programs. I've also written a little about the role form plays in evaluating technical papers and unleashing creativity. On the other side of the issue, though, I have written several times recently about the role of real problems and compelling examples in learning to program. My time at ChiliPLoP 2005 was spent working with friends to explore some compelling examples for CS1.

In the context of this ongoing discussion among CS educators, one of my friends sloganized Fish's position as "It's not the application, stupid; it's the BNF."

So, could I teach my freshman computer programming class after Fish's style? Probably not by mimicking his approach note for note, but perhaps by adopting the spirit of his approach.

We first must recognize that freshman CS students are usually in a different intellectual state from freshman comp students. When students reach the university, they may not have studied tense and mood and number in much detail, but they do have an implicit understanding of language on which the instructor can draw. Students at my university already know English in a couple of ways. First, they speak the language well enough to participate in everyday oral discourse. Second, they know enough at least to string together words in a written form, though perhaps not well enough to please Fish or me.

My first-year programming students usually know little or nothing about a programming language, either as a tool for simple communication or in terms of its underlying syntactic structures. When Fish's students walk into his classroom, he can immediately start a conversation with them, in a rich language they share. He can offer endless example sentences for his students to dissect, to rearrange, to understand in a new way. These sentences may be context-free, but they are sentences.

In a first-year programming course, we instructors typically have to spiral our dissection of programs with the learning of new language features and syntax. The more complex the language, the wider and longer the spiral must be.

Using a simple computer language might make an approach like Fish's work in a CS1 course. I think of the How to Design Programs project in these terms. Scheme is simple enough syntactically that the authors can rather quickly focus on the structure of programs, much as Fish focuses on the structure of sentences. The HtDP approach emphasizes form through its use of BNF definitions and "design recipes". However, I don't get the sense that HtDP removes content from the classroom so much as it removes it from the center of attention. Felleisen et al. still try to engage their students with examples that might interest someone.
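
To give a flavor of what that looks like -- my paraphrase, not an excerpt from the book -- an HtDP-style data definition is a quasi-BNF comment, and the function's shape then follows the data's shape:

    ;; A list-of-numbers is either
    ;;   '()  (the empty list), or
    ;;   (cons n lon), where n is a number and lon is a list-of-numbers.

    ;; sum-lon : list-of-numbers -> number
    (define (sum-lon lon)
      (cond ((null? lon) 0)                       ; one cond clause per form of the data
            (else (+ (car lon) (sum-lon (cdr lon))))))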

So, I think that we may well be able to teach introductory programming in the spirit of Fish's approach. But is it a good idea? How much of the motivation to learn how to program springs from the desire to do something particular? I do not know the answer to this question, but it lies at the center of the real problems/compelling examples discussion.

In an unexpected twist of fate, I was thumbing through Mark Guzdial's new book, Introduction to Computing and Programming with Python: A Multimedia Approach, and read the opening sentences of its preface:

Research on computing education clearly demonstrates that one doesn't just "learn to program." One learns to program something [5,20], and the motivation to do that something can make the difference between learning and not learning to program [7].

(The references are to papers on situated learning of the sort Seymour Papert has long advocated.)

I certainly find myself in the compelling problems camp these days and so am heartened by Guzdial's quote, and the idea embodied in his text. But I also feel a strong pull to find ways to emphasize the forms that will help students become solid programmers. That pull is the essence of my interest in documenting elementary programming patterns and using them to gain leverage in the classroom.

Regardless of how directly we might use Fish's approach to teach first-year courses in programming, I am intrigued by what seems to me to be a much cleaner connection between his ideas and the CS curriculum, the traditional Programming Languages course! I'll be teaching our junior/senior level course in languages this fall, and it seems that I could adopt Fish's course outline almost intact. I could walk in on Day 1 and announce that, by the end of the semester, each group of students will have created a new programming language, complete with a syntax, a set of primitive expressions, rules for translating programs, and the whole bit. Their evolving language designs would serve as the impetus for exploring the elements of language at a deeper level, touching all the traditional topics such as bindings, types, scope, control structures, subprograms, and so on. We could even implement our growing understanding in a series of increasingly complex interpreters that extract behavior from syntactic expressions.
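
For a flavor of what I mean by extracting behavior from syntax, here is the smallest seed of such an interpreter, sketched in Java rather than the Scheme we would use in class (the names and the little language are mine):

     // A tiny expression language: the syntax lives in Expr trees,
     // and eval() extracts the behavior.
     interface Expr
     {
         int eval();
     }

     class Literal implements Expr
     {
         private final int value;
         Literal(int value)  { this.value = value; }
         public int eval()   { return value; }
     }

     class Sum implements Expr
     {
         private final Expr left, right;
         Sum(Expr left, Expr right) { this.left = left; this.right = right; }
         public int eval()          { return left.eval() + right.eval(); }
     }

Evaluating new Sum(new Literal(1), new Literal(2)) yields 3, and each new feature the students add to their language becomes a new kind of Expr.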

Actually, this isn't too far from the approach that I have used in the past, based on the textbook Essentials of Programming Languages. I'll need to teach the students some functional programming in Scheme first, but I could then turn students loose to design and implement their own languages. I could still use the EOPL-based language that I call Babel as my demonstration language in class.

School's barely out for the summer, and I'm already jazzed by a new idea for my fall class. I hope I don't peak too soon. :-)

As you can see, there are lots of reasons that Fish's op-ed piece has attracted the attention of CS folks. It's about language and learning to use it, which is ultimately what much of computer science and software development are about.

Have you heard of Stanley Fish before? I first ran into him and his work when I read a blog entry by Brian Marick on the role the reader plays in how we write code and comments. Brian cited Fish's work on reader-response criticism and hypothesized an application of it to programming. You may have encountered Fish through Brian's article, too, if you've read one of my oldest blog entries. Today I finally checked out from the library the book by Fish that Brian recommended so long ago -- along with another Fish book, The Trouble with Principle, which pads my league-leading millions total. (This book is just for kicks.)


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 31, 2005 5:50 PM

Agile Moments from Primitive Obsessions

[ See what happens when I start talking? I can't shut up. :-) ]

My previous entry brought to mind two related ideas. Think of these as more Agile Moments.

The Delgado Codex

I ran across a neat little example of reflective practice on the major league baseball diamond in The Scholarly Rigor of Carlos Delgado. Carlos is the first baseman for the Florida Marlins in U.S. baseball's National League. He apparently has the agile habit of recording detailed information about every one of his at-bats in a notebook he keeps with him in the dugout. By collecting this data, he is able to derive feedback from his results and use that to improve his future at-bats.

As the article points out, most professional sports teams -- at least in America -- record all of their performances these days and then mine the film for information they can use to do better next time out. Delgado, "like a medieval Benedictine at Monte Cassino", is one of the few major leaguers who still use the ancient technologies of pencil and paper to track their own performances. The act of writing is valuable on its own, but I think just as important is the fact that Delgado reflects on his performance immediately after the event, when the physical sensations of the performance are still fresh and accessible to his conscious mind.

How is this related to my previous post? I think that we programmers can benefit from such a habit. If we recorded the smells that underlay our refactorings for a month or even a week, we would all probably have a much better sense of our own tendencies as programmers, which we could then feed back into the code we write next. And, if we shared our experiences, we might develop an even more comprehensive catalog of smells and refactorings as a community. If it's good enough for Donald Knuth, it ought to work for me, too.

Agility works for Delgado, one of the better offensive players in all of baseball. Knowing about his habit, I'm even happier to have him as a member of my rotisserie baseball team, the Ice Nine. :-)

Intentional Duplication

The Zen Refactoring thread on the XP mailing list eventually came around to the idea of deliberately creating code duplication. The idea is this: It is easier first to write code that duplicates other code and then to factor the duplication out later than it is to write clean code first or to refactor first and then add the new code.

I operate in this way most of the time. It allows me to add a new feature to my code immediately, with as little work as possible, without speculating about duplication that might occur in the future. Once I see the actual duplication, I make it go away. Copy-and-paste can be my friend.
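
In code, the rhythm looks something like this (a contrived sketch of my own, not an example from the thread):

     // Step 1: add the new feature by copy, paste, and adapt.
     double totalPrice(double[] prices)
     {
         double total = 0.0;
         for (int i = 0; i < prices.length; i++)
             total += prices[i];
         return total;
     }

     double totalWeight(double[] weights)    // pasted from totalPrice
     {
         double total = 0.0;
         for (int i = 0; i < weights.length; i++)
             total += weights[i];
         return total;
     }

     // Step 2: the duplication is now concrete, so factor it out.
     // totalPrice and totalWeight shrink to one-liners over sum().
     double sum(double[] values)
     {
         double total = 0.0;
         for (int i = 0; i < values.length; i++)
             total += values[i];
         return total;
     }

     double totalPrice(double[] prices)   { return sum(prices); }
     double totalWeight(double[] weights) { return sum(weights); }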

This technique is one way to refactor your code away from primitive data and toward suitable domain abstractions. When you find yourself handling a tolerance in multiple places, Extract Class does the trick. This isn't a foolproof approach, though, as Chris Wheeler pointed out in his article. What happens when you have references to mass-as-an-int in zillions of places and only then does the client say, "Um, we allow tolerances of 0.1g"? Good design and good refactoring tools are your friends then.
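
For Chris's mass example, the move might look like this (the Mass class and its API are my guesses at his scenario, not his code):

     // Before: int mass = 100;   // grams, compared with == everywhere
     // After: one domain class owns the representation and the tolerance.
     class Mass
     {
         private static final double TOLERANCE = 0.1;   // grams
         private final double grams;

         Mass(double grams) { this.grams = grams; }

         boolean sameAs(Mass other)
         {
             return Math.abs(this.grams - other.grams) <= TOLERANCE;
         }
     }

When the client announces the 0.1g tolerance, the change lands in one place.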

In the same thread, Ron Jeffries commented:

> I have also sometimes created the duplication, and then
> worked to make the duplicated code as similar as possible
> before removing the duplication. Does anyone else do this?

I once saw Kent Beck do that in a most amazing way, but I haven't learned the trick of making code look similar prior to removing duplication; would love to see an example.
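
I can imagine the flavor of the trick, though. A guess at a tiny instance (my code, surely not Kent's):

     // given: double total, price; boolean isMember

     // Two branches that are almost alike...
     if (isMember)
         total = total + price * 0.90;   // members get 10% off
     else
         total = total + price;

     // ...made as similar as possible...
     if (isMember)
         total = total + price * 0.90;
     else
         total = total + price * 1.00;

     // ...so that the duplication becomes textual and factors out.
     double discount = isMember ? 0.90 : 1.00;
     total = total + price * discount;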

Now I have fodder for a new blog entry: to write up a simple example of this technique that I use in my freshman-level OOP course. It's a wonderful example of how a little refactoring can make a nice design fall out naturally.

How is this related to my previous post? Duplication is one of my pet code smells, though I often create it willingly, with the intention of immediately factoring it out. Like Primitive Obsession, though, duplication requires you to learn to strike a proper balance between too little and too much. Just right is hard to find sometimes.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 31, 2005 4:58 PM

Primitive Obsession and Balance

Recently, Chris Wheeler posted a thoughtful blog entry called My Favourite Smells, which described only one smell, but one every OOP programmer knows deep in his soul: using a primitive data type instead of a domain-specific type. James Shore calls this smell Primitive Obsession, and Kevin Rutherford calls it Primitive Feature Envy.

Chris has been justifiably lauded for starting a conversation on this topic. I find this smell in my programs and even in many of the programs I read from good OOP programmers. When programmers are first learning to write object-oriented programs, the tendency to write code in terms only of the primitive data types is hard to shake. Who needs that Piece class, when a simple integer flag taking values from 1 to 4 will do just fine? (Heck, we can always use Hungarian Notation to make the semantics clear. :-) When students come to see the value of that Piece class, they begin to see the value of OOP.
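
The contrast looks something like this (a contrived sketch):

     // The primitive version: every caller decodes the flag.
     int piece = 2;                    // 1=pawn, 2=knight, 3=bishop, 4=rook
     boolean canJump = (piece == 2);

     // The object version: the meaning lives in one place.
     class Piece
     {
         private final String name;
         Piece(String name) { this.name = name; }
         boolean canJump()  { return name.equals("knight"); }
     }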

That said, we have to be careful to remember that this smell is not an absolute indicator. There is an inherent tension between the desire to create classes to model the domain more closely and the desire to do the simplest thing that could possibly work. If I try to wrap every piece of primitive data in its own class, I can very quickly create a lot of unnecessary code that makes my program worse, not better. My program looks like it has levels of abstraction that simply aren't there.

That's not the sort of thing that Chris, James, and Kevin are advocating, of course. But we need to learn how to strike the proper balance between Primitive Obsession and Class Obsession, to find the abstractions that make up our domain and implement them as such. I think that this is one of the reasons that Eric Evans' book Domain-Driven Design is so important: it goes a long way toward teaching us how to make these choices. Ultimately, though, you can only grok this idea through experience, which usually means doing it wrong many times in both directions so that your mind learns the patterns of good designs. As Chris', James', and Kevin's articles all point out, most of us start with a predilection to err on the side of primitives.

One way to push yourself to learn this balance is to use the Three Bears pattern first described by Kent Beck to create situations in which you confront the consequences of going too far one way or the other. I think that this can be another one of those programming etudes that help you become a better programmer, à la the Polymorphism Challenge that Joe Bergin and I used at SIGCSE 2005, in which we had folks rewrite some standard programs using few or zero if statements.

In order to feel the kind of effects that Chris describes in his article, you have to live with your code for a while, until the specification changes and you need that tolerance around your masses. I think that the folks running the ICFP 2005 programming contest are using a wonderful mechanism to gauge a program's ability to respond to change. They plan to collect the contestants' submissions and then, a couple of weeks later, introduce a change in the spec and require the contestants to adapt their programs on an even shorter deadline. What a wonderful idea! This might be a nice way to help students learn the value of adaptable code. Certainly, Primitive Obsession is not much of a problem if you never encounter the need for an abstraction.

Ron Jeffries posted a message to the XP discussion list earlier today confessing that one of his weaknesses as a programmer is a tendency to create small classes too quickly. As an old Smalltalker, he may be on the opposite end of the continuum from most folks. But we need a good name for the smell that permeates his code, too. My "Class Obsession" from above is rather pedestrian. Do you have a better name? Please share... Whatever the name, I suspect that Ron's recognition of his tendency makes it easier for him to combat it. At least he knows to pay attention.

Chris's article starts with the idea of favorite smells but then settles on one. It occurs to me that I should tell you my personal favorite smell. I suspect that it relates to duplication; my students certainly hear that mantra from me all the time. I will have to think a bit...


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 17, 2005 8:31 AM

A Few Good Abstractions

I just read Joel Spolsky's latest essay, Making Wrong Code Look Wrong. I'm all for Joel's goal of writing code that reveals the writer's intention, of making code transparent to the reader. And I'm sympathetic to his concerns about operator overloading in C++ and leaky abstractions. But the following paragraph exemplifies why I find his C-at-all-costs mentality somewhat uncomfortable:

If you read Simonyi's paper closely, what he was getting at was the same kind of naming convention as I used in my example above where we decided that us meant "unsafe string" and s meant "safe string". They're both of type string. The compiler won't help you if you assign one to the other and Intellisense won't tell you bupkis. But they are semantically different; they need to be interpreted differently and treated differently and some kind of conversion function will need to be called if you assign one to the other or you will have a runtime bug. If you're lucky.

Does anyone else think that Joel needs a really good abstraction here? If my program deals with two different kinds of string -- objects that are semantically different -- then I'd like to reflect that in more than a cryptic two-letter prefix. Classes may seem like overkill for this kind of thing, but enums and Ada subtypes can be wonderful aids. Then, instead of relying solely on the programmer to read and interpret the prefixes, we could let our compilers and refactoring tools help us.
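
Roughly what I have in mind, as a sketch (my types, not Joel's code):

     final class UnsafeString
     {
         final String value;
         UnsafeString(String value) { this.value = value; }
     }

     final class SafeString
     {
         final String value;
         private SafeString(String value) { this.value = value; }

         // the one blessed conversion from unsafe to safe
         static SafeString encode(UnsafeString s)
         {
             String encoded = s.value.replace("&", "&amp;")
                                     .replace("<", "&lt;")
                                     .replace(">", "&gt;");
             return new SafeString(encoded);
         }
     }

Now a write() method that accepts only SafeStrings turns Joel's runtime bug into a compile-time error, with no prefix-reading required.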

Intention-revealing names are an important part of transparent code, but we should use languages and techniques that support our efforts, too.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

April 09, 2005 3:40 PM

Patterns Books, But No Software To Be Seen

On the flight home from ChiliPLoP, I read a review of a patterns book -- though neither the book's title nor the review itself ever used the word 'patterns'. And it isn't a software book; it's on graphic design. The book is Problem Solved: A Primer for Design and Communication, by Michael Johnson.

The review begins with a quote from the book that distinguishes artists from designers:

[While] a fine art student can get away with creating his or her own problems to solve, a communications student is usually handed someone else's, with a looming deadline thrown in.

Designers create, but what they create is usually circumscribed by the needs of a client, and the act of creation itself is circumscribed by limitations of time and budget, style and material. Of course, limitations don't eliminate the need for creativity. In fact, they may enhance creativity. But working in the presence of limitations means that designers are also problem solvers.

Problem Solved addresses the needs of designer-as-problem-solver by compiling "a typology ... of kinds of problems" and identifying "trustworthy, time-saving means to address those problems." Each section of the book focuses on a particular kind of communication problem, such as how to use a historical style in a "fresh" way or how to resolve ethical dilemmas. Each problem is

summed up by a memorable heading that (consistent with the samples shown) is both surprising and suitable.

This sounds like a patterns catalog, and the headings and photo illustrations that lead its sections sound a lot like the iconic photos that Alexander uses to evoke his patterns.

I'd think that graphic design is an area ripe for mining patterns. I also wonder to what extent Alexander's work has had an effect in design disciplines outside of architecture and software. One of my favorite non-software patterns projects was a crossover: Rosemary Michelle Simpson's work on patterns of information structure in indices. (Rosemary is a master indexer and created the indices for many of the best-known patterns and Java books.)

I read this book review in the Autumn 2004 issue of Ballast Quarterly Review. The editor, publisher, and primary writer of Ballast is Roy Behrens, a design professor at the University of Northern Iowa. This little journal is a pastiche of passages from books, witty quotes, curious illustrations, and book reviews, with a bias toward the arts. Check it out -- it'll only cost you a few postage stamps.

Oh, and this issue of Ballast did review a book with the word 'pattern' in the title: DPM: Disruptive Pattern Material. DPM is an encyclopedia of camouflage in nature, the military, and culture. The title of this book is just what I want some of my own work to be... :-)


Posted by Eugene Wallingford | Permalink | Categories: Patterns

April 07, 2005 7:49 AM

Software in Negative Space

Drawing on the Right Side of the Brain

I started college as an architecture major. In the first year, architecture students took two courses each semester: Studio and Design Communications Media. The latter had a lofty title, but it focused on the most basic drawing skills needed in order to communicate ideas visually in architecture. At the time, we all considered it a freehand art class, and on the surface it was. But even then I realized that I was learning more.

The textbook for DCM was Betty Edwards's Drawing on the Right Side of the Brain. It is about more than drawing as a skill; it is also about drawing out one's creativity. Studying this book, for the first time I realized that what we do in life depends intimately on what we see.

Often, when people try to draw a common object, such as a chair, they don't really draw the chair in front of them but rather some idealized form, a Chair, seen in their mind's eye. This chair is like a Platonic ideal, an exemplar, that represents the idea of a chair but is like no chair in particular. When the mind focuses on this ideal chair, it stops seeing the real chair in front of it, and the hands go onto cruise control drawing the ideal.

The great wonder in Edwards's book was that I could learn to see the objects in front of me. And, in doing so, I could learn to see things differently.

Drawing on the Right Side of the Brain introduces a number of techniques for seeing an object, for the first time or in a new way, with exercises aimed at translating this new vision into more realistic depictions of those items on paper or canvas.

a freehand rendition of the Japanese character 'ma'

I recently found myself thinking again about one of the techniques that Edwards taught me, in the realm of software. Alan Kay has often said that we computer scientists focus so intently on the objects in our object-oriented programming that we miss something much more important: the space between the objects. He speaks of the Japanese word ma, which can refer to the "interstitial tissue" in the web of relationships that make up a complex system. On this view, the greater value in creating software lies in getting the ma right.

This reminds me of the idea of negative space discussed in Edwards's book. One of her exercises asked the student to draw a chair. But, rather than trying to draw the chair itself, the student is to draw the space around the chair. You know, that little area hemmed in between the legs of the chair and the floor; the space between the bottom of the chair's back and its seat; and the space that is the rest of the room around the chair. In focusing on these spaces, I had to actually look at the space, because I don't have an image in my brain of an idealized space between the bottom of the chair's back and its seat. I had to look at the angles, and the shading, and that flaw in the seat fabric that makes the space seem a little ragged.

In a sense, the negative space technique is merely a way to trick one's mind into paying attention to the world in a situation when it would prefer to lazily haul out a stock "kitchen chair" image from its vault and send it to the fingers for drawing. But the trick works! I found myself able to draw much more convincing likenesses than I ever could before. And, if we trick ourselves repeatedly, we soon form a new habit for paying attention to the world.

This technique applies beyond the task of drawing, though. For example, it proves quite useful in communicating more effectively. Often, what isn't said is more important than what is said. The communication lies in the idea-space around the words spoken, its meaning borne out in the phrasing and intonation.

The trigger for this line of thought was my reading an entry in Brad Appleton's blog:

... the biggest thing [about software design] that I learned from [my undergraduate experience] was the importance of what I believe Christopher Alexander calls "negative space", only for software architecture. I glibly summarized it as

There is no CODE that is more flexible than NO Code!

The "secret" to good software design wasn't in knowing what to put into the code; it was in knowing what to leave OUT! It was in recognizing where the hard-spots and soft-spots were, and knowing where to leave space/room rather than trying to cram in more design.

Software designers could use this idea in different ways. Brad looks at the level of design and code: Leave room in a design, rather than overspecifying every behavior and entity that a program may need. But this sense of negative space is about what to leave out, not what to put in. The big surprise I had when using Edwards's negative space technique was that it helped me put the right things into my drawings -- by focusing on their complement.

I often think that the Decorator design pattern embodies negative space: Rather than designing new classes for each orthogonal behavior, define objects as behaviors with receptacles into which we can plug other objects. The negative space in a Decorator is what makes the Decorator powerful; it leaves as many details uncommitted as possible while still defining a meaningful behavior. I suppose that the Strategy pattern does the same sort of thing, turned inside out.
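
A bare-bones sketch of what I mean (my own example):

     interface Text
     {
         String render();
     }

     class PlainText implements Text
     {
         private final String contents;
         PlainText(String contents) { this.contents = contents; }
         public String render()     { return contents; }
     }

     class BoldText implements Text
     {
         private final Text inner;   // the receptacle -- the negative space
         BoldText(Text inner)   { this.inner = inner; }
         public String render() { return "<b>" + inner.render() + "</b>"; }
     }

BoldText commits to one small behavior and leaves everything else uncommitted; any Text at all can plug into the hole.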

Maybe we can take this idea farther. What would it be like to design a program not as a set of objects, or a set of functions, but as Kay's ma? Rather than design actors, design the spaces in between them. Interfaces are a simple form of this, but I think that there is something deeper here. What if all we defined were an amorphous network of messages which some nebulous agents were able to snatch and answer? Blackboard architectures, once quite common in artificial intelligence, work something like this, but the focus there is still on creating knowledge sources, not their interactions. (Indeed, that was the whole point!)

Even crazier: what would it be like to design software not by focusing on the requirements that our users give us, but on the "negative space" around them? Instead of adding stuff to a concoction that we build, we could carve away the unwanted stuff from a block of software, the way a sculptor creates a statue. What would be our initial block of granite or marble? What would the carving feel like?

Whether any of these farfetched ideas bears fruit, thinking about them might be worthwhile. If Alan Kay is right, then we need to think about them.

Edwards's negative space technique makes for a powerful thinking strategy. Like any good metaphor, it helps us to ask different questions, ones that can help us to expose our preconceptions and subconscious biases.

And it could be of practical value to software designers, too. The next time you are stumped by a design problem, focus on the negative space around the thing you are building. What do you see?


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

April 04, 2005 3:58 PM

Agile Methods in the Dining Room

I've written about using agile software principles in my running. Now Brian Marick applies the idea of Big Visible Charts as a motivational technique for losing a few pounds.

The notion that practices from agile software development work outside of software should not surprise us too much. Agile practices emphasize individuals and interactions, doing things rather than talking about things, collaboration and communication, and openness to change. They reflect patterns of organization and interaction that are much bigger than the software world. (This reminds me of an idea that was hot a few years ago: design patterns occur in the real world, too.)

Oh, and good luck, Brian! I know the feeling. Two years ago, I had broken the 190-pound barrier and, despite recreational jogging, felt soft and out of shape. By exercising more and eating less (okay, exercising a lot more and eating a lot less), I returned to the healthier and happier 160-pound range. I'll keep an eye on your big visible chart.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Running, Software Development

March 30, 2005 12:45 PM

ChiliPLoP, The Missing Days

While posting my last entry, I realized that my blog gives the impression that ChiliPLoP'05 lasted only one day. Actually, it lasted from Tuesday evening through Friday's lunch. The reason I have not written more is that the conference kept me busy with the work I was there to do!

Our hot topic group had a productive three days. On Days 2 and 3, we continued with our work on the examples we began on Day 1, refining the code for use by students and discussing how to build assignments around them.

As the conference drew to its close on the last morning, our discussions turned to the broader vision of what a great new first course would be like. By the end of CS1, traditional curricula expose students to lots of complexity in terms of algorithmic patterns: booleans, selection, repetition, collections, and the like. We would like to create a course that does the same for object-oriented programming. Such a course would emphasize objects, relationships among them, and communication. Concepts like delegation, containment, and substitution would take on prominent roles early in such a course.

Current textbooks that put objects front and center fail to take this essential step. For example, they may make a Date class, but they don't seem to use Dates to then build more complex objects, such as weeks or calendars, or to let lots of Date objects collaborate to solve an interesting problem. Most of the programming focus remains in implementing methods, using the familiar algorithmic patterns.

Our desire to do objects right early requires us to find examples for which the OO style is natural and accessible to beginners. Graphical and event-driven programs make it possible, even preferable, to use interesting objects in interesting ways early on, with no explicit flow of control. But what other domains are suitable for CS1, taking into account both the students' academic level and their expectations about the world of computers?

Our desire also means that we must frame compelling examples in an object-oriented way, rather than an algorithmic way. For instance, I can imagine all sorts of interesting ways to use the Google API examples I worked on last week to teach loops and collections, including external and internal iterators. But this can easily become a focus on algorithm instead of objects. We need to find ways to inject algorithmic patterns into solutions that are fundamentally based on objects, not the other way around.
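
To make the contrast concrete, consider a sketch (the Hit and Hits types are placeholders of my own, not the real Google API):

     // import java.util.*;

     class Hit
     {
         private final String title;
         Hit(String title) { this.title = title; }
         String title()    { return title; }
     }

     // Algorithm-first: external iteration, control in the client.
     void printTitles(List hits)    // a List of Hit objects
     {
         for (Iterator i = hits.iterator(); i.hasNext(); )
             System.out.println(((Hit) i.next()).title());
     }

     // Object-first: internal iteration, control in the collection.
     interface HitAction
     {
         void run(Hit hit);
     }

     class Hits
     {
         private final List hits = new ArrayList();

         void each(HitAction action)
         {
             for (Iterator i = hits.iterator(); i.hasNext(); )
                 action.run((Hit) i.next());
         }
     }

In the second version, the students' attention stays on objects sending messages; the loop is a detail hidden inside Hits.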

The conference resulted in a strong start toward our goals. Now, our work goes on...


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

March 30, 2005 10:44 AM

Patterns as a Source of Freedom

The problem about art is not finding more freedom,
it's about finding obstacles.
-- Richard Rogers

I started teaching introductory programming back in the fall of 1992. Within a year or so, I found myself drawn to a style of teaching that would eventually be called patterns. At the time, I spoke of 'patterns' and 'templates', and it took a while for me to realize that the power of patterns lay more in understanding the forces that shape a pattern than in the individual patterns themselves. But even in their rawest, most template-like form, simple patterns such as Guarded Linear Search were of great value to my students.

Back when I first started talking about these ideas with my colleagues, one of them almost immediately expressed reservations. Teaching patterns too early, he argued, would limit students' freedom as they learned to write programs, and these limitations would stunt their development.

But my experience was exactly the opposite. Students told me that they felt more empowered, not less, by the pattern-directed style, and their code improved. One of the biggest fears students have is of the blank screen: They receive a programming assignment. They sit down in front of a computer, open a text editor, and ... then what? Most intro courses teach ways to think about problems and design solutions, but they tend to be abstract guidelines. Even when students are able to apply them, they eventually reach the moment at which they have to write a line of code ... and then what?

The space of all possible programs that students can write overwhelms many novices. The most immediate value in knowing some programming patterns is that the patterns constrain the space to something more reasonable. If the problem deals with processing items in a collection, then students need only recognize that the problem is a search problem to focus in on a few standard patterns for solving such problems. And each pattern gives them concrete advice for writing actual code to implement the pattern -- along with some things to think about as they customize the code to their specific situation.
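
For example, Guarded Linear Search in its rawest, most template-like form might read (my rendering):

     // Scan the collection; the guard decides when the search succeeds.
     int indexOfFirstNegative(int[] values)
     {
         for (int i = 0; i < values.length; i++)
             if (values[i] < 0)     // the guard
                 return i;
         return -1;                 // the search failed
     }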

So, rather than limiting novice programmers, patterns freed them. They could now proceed with some confidence in the task of writing code.

But I think that patterns do more than help programmers become competent. I think that they help already competent programmers become creative.

You can't create unless you're willing to subordinate the creative impulse to the constriction of a form.
-- Anthony Burgess

Just as too much freedom can paralyze a novice with an overabundance of possibility, too much freedom can inhibit the competent programmer from creating a truly wonderful solution. Sticking with a form requires the programmer to think about resources and relationships in a way that unconstrained design spaces do not. This certainly seems to be the case in the arts, where writers, visual artists, and musicians use form as a vehicle for channeling their creative impulses.

Because I am an academic computer scientist, writing is a big part of my job, too. I learned the value of subordinating my creative impulse to the constriction of a form when I began to write patterns in the ways advocated within the patterns community. I've written patterns in the Gang-of-Four style, the PoSA style, the Beck form, and a modified Portland Form. When I first tried my hand at writing some patterns of knowledge-based systems, I *felt* constricted. But as I became familiar with the format I was using, I began to ask myself how best to express some idea that didn't fit obviously into the form. Often, I found that the reason my idea didn't fit into the form very well was that I didn't have a clear understanding of the idea itself! Holes in my written patterns usually indicated holes in my thinking about the software pattern. The pattern form guided me on the path to a better understanding of content that I thought I already knew well.

Constraining ourselves to a form is but one instance of a more general heuristic for creating. Sometimes, the annoying little problems we encounter when doing a task can play a similar role. I love this quote from Phillip Windley, reporting some of the lessons that 37signals learned building Basecamp:

Look at the problems and come up with creative solutions. Don't sweep them under the rug. Constraints are where creative solutions happen.

Constraints are where creative solutions happen.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

March 24, 2005 11:29 AM

Day 1 at ChiliPLoP: Examples on the Web

Our first day at ChiliPLoP was a success. On our first evening in Carefree, my Hot Topic group decided to focus its energy on examples dealing with programs that interact over a network, when possible with services that provide access to large stores of current or compelling data. We alternately worked individually and in pairs on programs that access the current time from the official clock of the US, interact with Google web services, process streaming XML data, and interact with other students via Rendezvous. In just one day, we wrote some simple programs that are within reach of students learning to program in a first CS course.

We kept each other honest with continuous devil's advocacy. In particular, we are sensitive to the fact that examples of this sort run the risk of scaring many CS1 instructors. They are nontraditional but, more, they carry the impression of being complex and "cutting edge", attributes that don't usually work in CS1. If the instructors don't know much about the Google API or Rendezvous, how can we expect students to get them?

We decided that we will have to engage this possibility actively. First, we resolved not to use the term "web services" when describing any of our examples, even little-w "web services", in order not to alarm folks before they have a chance to see the simplicity of the programs we create and the value they have for working with students.

Second, we know that we will have to be quite clear that learning about Rendezvous or the Google API is not the end goal of these examples. The goal is to help students learn how to write programs in an object-oriented language. We will need to package our examples with curricular materials that identify the programming elements and design ideas they best teach. This packaging is for instructors, not students, though as a result the students will benefit by encountering examples at the right point in their development as programmers.

I am again reminded that a big part of our job in the elementary patterns community is educating computer science faculty, as much or more so than educating students!

At the end of our day yesterday, we had a nice no-technology discussion out on the veranda of the conference center about our day's work. Devil's advocacy led us into a challenging and sometimes frustrating discussion about the role of "magic" in learning. Some in our group were concerned that the Rendezvous example will look too much like magic to the students, who write very simple objects that interact with very complex objects that we supply. Others argued that students working with objects in a more realistic context is a good thing and, besides, students see magic all the time when learning a language. Java's System.out.println() is serious magic -- bad magic, many of Java-in-CS1's critics would say. But let's be honest: Pascal's writeln() was serious magic, too. Ahh, those critics often say, it's simpler magic (no System.out abstraction) and besides, it's part of the base language. Hence the frustration for me in this discussion, and in so many like it.

If too much magic is a deal breaker, then I do not think we can do compelling examples with our students. These days, compelling examples require rich context, and rich context early in the course requires some magical objects.

But I don't think magic is a deal breaker. The real issue is gratuitous magic versus essential magic. In Java,

public static void main( String[] args )

is gratuitous magic. Interacting with powerful and interesting objects is essential magic. It is what object-oriented programming is all about! It's also what can make an example compelling to students raised in an era of instant messaging and Amazon and eBay. What we as instructors need to do is to build suitable boundaries around complexity so that students can explore without fear and ultimately open up the boxes and see that what's inside isn't magic at all but simply more complex examples of the same thing that students have been learning all along: powerful objects sending messages to one another.

Right now, we are all expanding on our examples from yesterday. Later, I hope that we have a chance to mine some of the elementary patterns that show up across our examples. We are at a PLoP conference after all!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

January 24, 2005 10:01 AM

I Go To Extremes

Brian Button recently posted a long blog entry on his extreme refactoring of the video store class from Martin Fowler's Refactoring. Brian "turned [his] refactoring knob up to about 12" to see what would happen. The result was extreme, indeed: no duplication, even in loops, and methods of one line only. He concluded, "I'm not sure that I would ever go this far in real life, but it is nice to know that I could."

Turning the knobs up in this way is a great way to learn something new or to plumb for deeper understanding of something you already know. A few months ago, I wrote of a similar approach to deepening one's understanding of polymorphism and dynamic dispatch via a simple etude: Write a particular program with a budget of n if-statements or less, for some small value of n. As n approaches 0, most programmers -- especially folks for whom procedural thinking is still the first line of attack -- find this exercise increasingly challenging.

Back at ChiliPLoP 2003, we had a neat discussion along these lines. We were working with some ideas from Karel the Robot, and someone suggested that certain if-statements are next to impossible to get rid of. For example, how can we avoid this if?

     // assumes a Karel-style Robot superclass that provides
     // move() and nextSquare()
     public class RedFollowingRobot extends Robot
     {
         public void move()
         {
             if (nextSquare().isRed())
                 super.move();
         }
     }

     public class Square
     {
         public boolean isRed() { ... }
     }

But it's possible... Joe Bergin suggested a solution using the Strategy pattern, and I offered a solution using simple dynamic dispatch. You can see our solutions at this wiki page. Like Brian said in his article, I'm not sure that I would ever go this far in real life, but it is nice to know that I could. This is the essence of the powerful Three Bears pattern: Often, in taking an idea too far, we learn best the bounds of its application.
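
Here is the flavor of the dynamic-dispatch solution, re-sketched from memory (the names and details are mine, not necessarily what we posted on the wiki; it assumes the same Karel-style Robot superclass as above):

     // Push the color question into the Square hierarchy,
     // so the robot never asks an if-question at all.
     abstract class Square
     {
         abstract void admit(RedFollowingRobot robot);
     }

     class RedSquare extends Square
     {
         void admit(RedFollowingRobot robot) { robot.forceMove(); }
     }

     class PlainSquare extends Square
     {
         void admit(RedFollowingRobot robot) { }   // not red: stay put
     }

     class RedFollowingRobot extends Robot
     {
         public void move()
         {
             nextSquare().admit(this);   // dispatch replaces the if
         }

         void forceMove() { super.move(); }
     }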

Joe and I like this idea so much that we are teaching a workshop based on it, called The Polymorphism Challenge (see #33 there) at SIGCSE next month. We hope that spending a few hours doing "extreme polymorphism" will help some CS educators solidify their OO thinking skills and get away from the procedural way of thinking that is for many of them the way they see all computational problems. If you have any procedural code that you think simply must use a bunch of if-statements, send it to me, and Joe and I will see if it makes a good exercise for our workshop!


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

January 03, 2005 11:56 AM

Name It

Happy New Year!

I've run into the same idea twice in the last couple of weeks, in two different guises. I take that as a hint from the universe that I should be thinking about this as I prepare for the upcoming semester. The idea is: Name it.

First, at a public event during Advent, a friend was talking about how to raise strong, self-aware children. She said, "You know when your children do something good. Name it for them." Children should know that what they are doing has a name. Names are powerful for thinking and for communication. When an act has a name, the act *means* something.

Then I ran across a column by Ron Rolheiser that comes at the issue from the other side, where what we feel doesn't seem as good. The article is called Naming Our Restlessness and talks about how he felt after entering his religious order as a young man of 17. He was restless and wondered if that was a sign that he should be doing something else. But then someone told him that he was merely restless.

His simple, honest naming of what we were feeling introduced us to ourselves. We were still restless, but now we felt better, normal, and healthy again. A symptom suffers less when it knows where it belongs.

Often, people can better accept their condition when it has a name. Knowing that what they feel is normal, normal enough to have a name and be a part of the normal conversation, frees a person from the fear of not knowing what's wrong with them. Sometimes, there's nothing wrong with you! And even when there is, when the name tells us that something horrible is wrong, even then a name carries great power. Now we know what is wrong. If there is something we can do to fix the problem, we have to know what the problem is. And even when there is nothing we can do to fix the problem, yes, even then, a name can bring peace. "Now I know."

What does this mean for teaching? Name it when students do something good. If they discover a pattern, tell them the name. When they adopt a practice that makes them better programmers, tell them the name of the practice. If they do something that isn't so good, or when they are uneasy about something in the course, name that, too, so that they can go about the business of figuring out what to do next. And help them figure it out!

I think this is a sign for me to write some patterns this semester...


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

December 20, 2004 10:47 AM

Improving the Vocabulary of Educators through Patterns

Education blogger Jenny D. calls for a language of the practice of educators:

We have to start naming the things we do, with real names that mean the same thing to all practitioners.

...

My intent in what I do is to move the practice of teaching into an environment that resembles medicine. Where practices and procedures are shared, and the language to discuss them is also shared. Where all practices and procedures can and will be tailored to meet specific situations, even though practitioners work hard to maintain high standards and transparency in what they do.

This sounds kind of silly, I know. But what we've got now is a hodgepodge of ideas and practices, teachers working in isolation in individual classrooms, and very little way to begin to straighten out and share the best practices and language to describe it.

This doesn't sound silly to me. When I first started as a university professor, I was surprised at the extent to which educators tend to work solo. There's plenty of technical vocabulary for the content of my teaching but little or no vocabulary for the teaching itself. The education establishment seems quite content that educators, their classrooms, and their students are all so different that we cannot develop a standard way to describe or develop what we do in a classroom. I think that much of this reluctance stems from the history of teacher education, but I also suspect that now there is an element of political agenda involved.

Part of what I've done in my years as a faculty member is to work with others to develop the vocabulary we need to talk about teaching computer science. On the technical side, I have long been interested in documenting the elementary patterns that novices must learn to become effective programmers. Elementary patterns give students a vocabulary for talking about the programs they are writing, and a process for attacking problems. But they also give instructors a vocabulary for talking about the content of their courses.

On the teaching side, I have been involved in the Pedagogical Patterns community, which aims to draw on the patterns ideas of architect Christopher Alexander to document best practices in the practice of teaching and learning. Patterns such as Active Student, Spiral, and Differentiated Feedback capture those nuggets of practice that good teachers use all the time, the approaches that separate effective instruction from hit-or-miss time in the classroom. Relating these patterns in a pattern language builds a rich context around the patterns and gives instructors a way to think about designing their instruction and implementing their ideas.

Pedagogical patterns are the sort of thing that Jenny D. is looking for in educational practice more generally.

In a follow-up post, she extends her analysis of how professionals in other disciplines practice by applying skills to implement protocols. You'll notice in these blog entries the good use of metaphor I blogged about last time. Jenny uses the practices in other professions to raise the question, "Why don't educators have a vocabulary for discussing what they do?" She hasn't suggested that the vocabulary of educators should be like medicine's vocabulary or any other, at least not yet.

I wish her luck in convincing other educators of the critical value of her research program. Every teacher and every student can benefit from laying a more scientific foundation for the everyday practice of education.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

December 09, 2004 5:05 PM

Learning from Failure

I have a couple of things on my blogging radar, but this quote from Nat Pryce just launched a new train of thought. Maybe that's because I gave my last lecture of the semester an hour and a half ago (hurray!) and am in an introspective mood about teaching now that the pressure is off for a while.

One of the many interesting topics that Dave [Snowden] touched on [in his talks at XP Day 4] was how people naturally turn to raw, personal stories rather than collated research when they want to learn how to solve a problem. Furthermore, people prefer negative stories that show what to avoid, rather than positive stories that describe best or good practices.

This "struck a chord" with Nat, whose blog is called Mistaeks I Hav Made [sic]. It also struck a chord with me because I spend a lot of my time trying to help students learn to design programs and algorithms.

When I teach patterns, whether of design or code, I often try to motivate the pattern with an example that helps students see why the pattern matters. The decorator pattern is cool and all, but until you feel what not using it is like in some situations, it's sometimes hard to see the point. I think of this teaching approach as "failure driven", as the state of mind for learning the new idea arises out of the failure of using the tools already available. It's especially useful for teaching patterns, whose descriptions typically include the forces that make the pattern a Good Thing in some context. Even when the pattern description doesn't include a motivating example, the forces give you clues on how to design one.

I taught the standard junior/senior-level algorithms course for the first time in a long, long time this semester, and I tried to bring a patterns perspective to algorithm design. This was helped in no small part by discussions with David Ginat, who is interested in documenting algorithm patterns. (One of my favorites of his is the Sliding Delta pattern.) But I think some of the most effective work in the algorithms class this semester -- and also some of the most unstructured -- was done by giving the students a game to play or a puzzle to solve. After they tried to figure out the nugget at the heart of the winning strategy, we'd discuss. We had lots of failures and ultimately, I think, some success at seeing invariants and finding efficient solutions. The students were sometimes bummed by their inability to see solutions immediately, but I assured them that the experience of trying and failing was what would give rise to the ability to solve problems.

(I still have more work to do finding the patterns in the algorithms we designed this semester, and then writing them up. Busy, busy!)

Negative stories -- especially the student's own stories -- do seem to be the primary catalyst for learning. But I have to be careful to throw in some opportunities to create positive stories, too, or students can become discouraged. The "after" picture of the pattern isn't enough; they need to face a new problem and feel good because they can solve it. I suppose that one of the big challenges we teachers face is striking the right balance between the failures that drive learners forward and the successes that keep them wanting to be on the road.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

December 03, 2004 10:11 AM

The Theory of Patches

Recently, I ran across an interesting article by David Roundy called The Theory of Patches, which is actually part of the manual for Darcs, a version-control alternative to CVS and the like. Darcs uses an inductive definition to define the state of a software artifact in the repository:

So how does one define a tree, or the context of a patch? The simplest way to define a tree is as the result of a series of patches applied to the empty tree. Thus, the context of a patch consists of the set of patches that precede it.

Starting with this definition, Roundy goes on to develop a set of light theorems about patches, their composition, and their merging. Apparently these theorems guide the design and implementation of Darcs. (I haven't read that far.)
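
The inductive definition is easy to render in code. A sketch (mine, in Java rather than Darcs's Haskell):

     import java.util.ArrayList;
     import java.util.List;

     // A tree is the result of applying a sequence of patches to the
     // empty tree; the context of a patch is the patches before it.
     class Tree
     {
         static final Tree EMPTY = new Tree();

         private final List patches = new ArrayList();   // of Patch

         Tree apply(Patch p)
         {
             Tree result = new Tree();
             result.patches.addAll(this.patches);
             result.patches.add(p);
             return result;
         }

         List contextOf(Patch p)
         {
             return patches.subList(0, patches.indexOf(p));
         }
     }

     interface Patch { }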

The reason I found this interesting was the parallel between repeated application of patches to an empty tree and

  • the repeated application of patterns to a design, leading to piecemeal growth of a system, and
  • the agile methods' cyclic process of extension and refactoring.

This led me to think about how an IDE such as Eclipse might have version control built right into it. Each time the programmer adds new behavior or applies a refactoring, the change can be recorded as a patch, a diff to the previous system. The theory of patches needed for a version control system like Darcs would be directly useful in implementing such a system.

I used to program a lot in Smalltalk, and Smalltalk's .changes file served as a lightweight, if incomplete, version control system. I suppose that a Smalltalker could implement something more complete rather easily. Someone probably already has!

Thanks to Chad Fowler's bookmarks for the link.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

September 21, 2004 6:28 PM

A Big Anniversary

We are coming upon a major anniversary in the worlds of object-oriented programming, patterns, and CS publications... Design Patterns first appeared at OOPSLA 1994. Most CS books, especially ones that appeal to a wide popular audience, have a pretty short shelf life. Occasionally, a new classic comes along that accompanies a change in how we work -- or ushers in the change. Design Patterns is such a book.

It came out at a time when industry was embracing the idea of object-oriented programming in C++, but many programmers just didn't know much about OOP. Where was the flexibility it offered? How could we achieve that flexibility in C++, a powerful but rigid language? Design Patterns showed us.

Design Patterns has taught a few generations of programmers about OOP and the common patterns that occur in flexible, extensible OO designs. It's still going strong these days. Amazon lists it at #788 on its bestsellers list, higher even than more recent classics such as Refactoring and not all that far behind much more recent soon-to-be classics such as Code Complete, Second Edition. Even as the world has moved from C++ to Java, Design Patterns remains essential reading for OO programmers.

To what can we attribute its success and durability? There has been much discussion in the software patterns community about the book's shortcomings (it isn't a pattern language, its pattern format is hard to write in, ...), but we can't ignore the fact that it teaches well material that is essential to programmers. I learn a little something new every time I read it.

This book gave visibility to the then-nascent software patterns community, which has led to many wonderful books and papers that teach us the principles of topics as diverse as configuration management, use case development, assembly language programming for microcontrollers, organizational management, and user interface design.

It also spawned an eponym for the book's authors, the Gang of Four, and a corresponding new TLA, GoF.

OOPSLA 2004 isn't going to let this anniversary slip by without a tribute, and a little fun to boot. The GoF 10th Anniversary Commemorative will feature a recent entry in the GoF-inspired library, Design Dating Patterns, by Solveig Haugland. That's a real book, all right -- check out the book's web site for more. The OOPSLA web site lists this session as a "social event", which makes some sense, given the book's content. But I expect that a few of the techies who show up will do so harboring a secret wish to learn something new that they can use. Heck, I'm married, and I plan to. If these design patterns teach their content as well as Design Patterns, then we will all learn something. And Ms. Haugland should be hailed for her achievement!


Posted by Eugene Wallingford | Permalink | Categories: Patterns

September 10, 2004 11:03 PM

And Now Starring...

I was in a skit tonight. I don't think I've had a role in a skit not created by one of my daughters since 1985 or so, when I teamed with one of my best friends and the cutest girl I went to college with on a skit in our Business Law class.

The skit was put on by Mary Lynn Manns and Linda Rising to introduce their forthcoming book, Fearless Change: Patterns for Introducing New Ideas. They wrote a short play as a vehicle for presenting a subset of their pattern language at XP 2004, which they reprised here. I had two lines as an Early Adopter, a careful, judicious decision maker who needs to see concrete results before adopting a technology, but who is basically open to change. Wearing a construction paper hat, I pulled off my lines without even looking at my script. Look for the video at a Mr. Movies near you.

My favorite part of the skit that didn't involve me was this Dilbert comic that opened the show.

PLoP makes you do crazy things.

Our writers' workshop was excellent. Ralph Johnson and I had two papers on our elementary patterns work at ChiliPLoP. They are the first concrete steps toward a CS1 textbook driven by patterns and an agile approach from the beginning. The papers were well-received, and we gathered a lot of feedback on what worked well and what could stand some improvement. Now we're talking about how to give the writing project a boost of energy. I'll post links to the papers soon.


Posted by Eugene Wallingford | Permalink | Categories: General, Patterns

September 09, 2004 5:36 PM

Pattern as Autopsy

PLoP opened with a wonderful session led by Ward Cunningham and Norm Kerth on the history of the software patterns community. I've heard many of the community's creation stories, yet the collaborative telling was great fun. I even contributed a bit on my first PLoP in 1996. And I picked up a few new tidbits of interest:

  • The first pattern from A Pattern Language used by the Hillside Group in the famous afternoon design session that gave rise to its name: Circulation Realms.

  • Tom DeMarco used to give out copies of Alexander's Notes on the Synthesis of Form to students in his Structured Design tutorials and short courses. Ward said that DeMarco recognized the importance of Alexander's work earlier than most software people; he just never took his recognition to the next level, which was patterns.

  • Alexander said that, to find the essential patterns of cities, neighborhoods, and towns, he had to look to the cities of Europe, which embodied centuries of wisdom about living spaces. What was the equivalent source for patterns of object-oriented programs in the early 1990s? Why, the Smalltalk image!

However, I think that the most important idea that I left the session with is one that I'm still thinking on. One of the participants commented that The Nature of Order was Alexander's effort to get beyond a fundamental error underlying his work on patterns. Paraphrased:

Patterns are an autopsy of a successful system. An autopsy can do many things of value, but it doesn't tell us how to build a living thing.

That's what a pattern language strives to do, of course. But Alexander's experience applying his pattern language in Mexicali was only a mixed success. The language generated livable spaces that were more "funky" than beautiful. The Nature of Order seeks to identify the fundamental principles that give rise to harmony and beauty in created things, that give rise to patterns in a particular time and place and community.

I think we still need to discover the pattern languages embodied in our great programs. But "pattern as autopsy" is such an evocative phrase that I'll surely puzzle over it more until I have a better idea of where the boundary between dissection and creation lies.


Posted by Eugene Wallingford | Permalink | Categories: Patterns

August 29, 2004 4:29 PM

A New Rule of Three

Back in the beginning, the patterns community created the Rule of Three. This rule stated that every pattern should have three known uses. It was a part of the community's conscious effort to promote the creation of literature on well-tested solutions that had grown up in the practicing software world. Without the rule, leaders of the community worried that academics and students might re-write their latest theories into pattern form and drown out the new literature before it took root. These folks were not anti-academic; indeed many were and are academics. But they knew that academics had many outlets for their work, and they recognized the need to nurture a new community of writers among practicing software developers.

The Rule of Three was a cultural rule, though, not a natural law. Christopher Alexander, in The Timeless Way of Building, wrote that patterns could be derived from both practice and theory. As the patterns community matured, the rule became less useful as a normative mechanism. In recent years, Richard Gabriel has encouraged pattern writers to look beyond the need for three known uses when creating new pattern languages. What matters most is the role each pattern plays in making the language a meaningful whole. Good material is worth writing.

I returned to Gerald Weinberg's The Secrets of Consulting this weekend and ran across a new Rule of Three. I propose that pattern writers consider adopting Weinberg's Rule of Three, which he gives in his chapter on "seeing what's not there":

If you can't think of three things that might go wrong with your plans, then there's something wrong with your thinking.

At writers workshops, it's not uncommon to read a pattern which, according to its author, has no negative consequences. Apply the pattern and -- poof! -- all is well with the world. The real world doesn't usually work this way. If I use a Decorator or Mutual Recursion, then I still have plenty to think about. A pattern resolves some forces, but not others; or perhaps it resolves the primary forces under consideration but creates new ones within the system.

If you are writing a pattern, try to think of three negative consequences. You may not find three, but if you can't find any, then either you are not thinking far enough or your pattern isn't a pattern at all -- it's a law of the universe.

Authors can likewise use this rule as a reminder to develop their forces more completely. If a pattern addresses few forces, then the reader will rightly wonder if the "problem" is really a problem at all. Or, if all the forces point in one direction, then the problem doesn't seem all that hard to solve. The solution is implicit in the problem.

Weinberg offers this rule as a general check on the breadth and depth of one's thinking, and it's a good one. But I think it also offers pattern writers, new and experienced alike, a much needed reminder that patterns are rarely so overarching that we can't find weaknesses in their armor. And looking for these weaknesses will help authors understand their problems better and write more convincing and more useful pattern languages.

"Okay, Eugene, I'm game. How exactly do I do that?" Advice about the results of thinking are not helpful when it's the actual thinking that's your problem. Weinberg offers some general advice, and I'll share that in an upcoming entry. I'll also offer some advice drawn from my own experience and from experienced writers who've been teaching us at patterns conferences.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

August 27, 2004 2:18 PM

By Any Other Name ... Not!

Everyone knows that using good names in code is essential. What more is there to say? Apparently, plenty. I've encountered discussions of good and bad names several times in the last few weeks.

Not too surprisingly, the topic of names came up during the first week of my introductory OOP course. I gave my students a program to read, one that is larger than the programs they've read in preceding courses and that uses Java classes they've never used. We talked about how good names can make a program understandable even to readers who don't know the details of how an object works. The method names in a class such as java.util.Hashtable are clear enough that, combined with the context in which they are used, students could understand the code that uses them.
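
To make that concrete, here is a small sketch of my own -- not the program I gave the class -- in the same spirit. Even a reader who has never opened the Hashtable documentation can follow the story from the names alone:

    import java.util.Hashtable;

    public class WordCount {
        public static void main(String[] args) {
            // The names put, get, and containsKey tell the story,
            // even to a reader who has never used a Hashtable.
            Hashtable<String, Integer> counts = new Hashtable<String, Integer>();
            String[] words = { "name", "pattern", "name" };
            for (String word : words) {
                if (counts.containsKey(word)) {
                    counts.put(word, counts.get(word) + 1);
                } else {
                    counts.put(word, 1);
                }
            }
            System.out.println(counts.get("name"));   // prints 2
        }
    }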

Often, issues that involve names are really about something bigger. Consider these:

  • Jason Marshall writes about a design anti-pattern that he calls Implied Structure. Sometimes, developers need to create an object with a descriptive name in order to communicate knowledge they possess but that their program doesn't express. The new object, even if it is light on behavior, captures a valuable domain abstraction. The name communicates the abstraction to readers of the code, and it also allows the compiler to enforce meaningful values for variables. (A sketch of the idea appears after this list.)

  • Nat Pryce recently wrote about methods that take arguments which change the meaning of the method. In particular, Nat is concerned with arguments that negate the method's meaning, such as:

    void assertButtonIsEnabled(String buttonName, boolean isEnabled)

    When a client passes false as the second argument, the method actually asserts that the button is disabled. Such code is almost impossible to follow, because you have to pay attention to the guilty argument in order to get even a high-level understanding of the code.

    Avoiding this situation is relatively easy. Create a second method with a name that says what the method means:

    void assertButtonIsEnabled(String buttonName)
    void assertButtonIsDisabled(String buttonName)

    Not only is Nat spot on with his analysis, but he has come up with a wonderfully evocative name for this anti-pattern, at least for those of us of a certain age: the Wayne's World Method. This name may be too specific in time and culture to have a long life, but I just love it.

  • Finally, OOPSLA 2004 has been much on my mind lately, with all my work on the Educators Symposium. I always look forward to OOPSLA's invited talks and keynote addresses, and this year one in particular has caught my eye: Ward Cunningham's Objects, Patterns, Wiki and XP: All Systems of Names. I am a big fan of Ward and his work. He always seems to be in the vanguard of what everyone will be thinking about in a few years. According to the promotional buzz, Ward's talk will "unravel" his experience "building tools for the not quite tangible".

    I'm not sure exactly what Ward will talk about, and I can't wait to find out. But the title hints that he will follow the thread of names through several of his contributions to our industry. Just think about the role names play in the connective tissue of a pattern language, the fabric of ideas that is a wiki, and the domain model at the heart of an object-oriented program. Perhaps names play an even more central role? I look forward to Vancouver.
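
Returning to Jason's Implied Structure for a moment: his post makes the case in prose, so here is a hypothetical sketch of my own. The class name and its format check are invented for illustration, not taken from his post.

    // Instead of passing a bare String and trusting every caller to
    // know its format, name the abstraction. The class is light on
    // behavior, but it captures domain knowledge and lets the
    // compiler reject code that passes an arbitrary string.
    public class CourseNumber {
        private final String value;

        public CourseNumber(String value) {
            if (!value.matches("[A-Z]{2,4} \\d{3}")) {   // e.g., "CS 111"
                throw new IllegalArgumentException("not a course number: " + value);
            }
            this.value = value;
        }

        public String toString() {
            return value;
        }
    }

A method declared as enroll(CourseNumber c) now says something to both the reader and the compiler that enroll(String s) never could.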

Important ideas have a depth that we can explore in thousands of ways, never quite the same. Names are one of them.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

July 30, 2004 12:26 PM

Patterns that Break the Rules

I ran across an old 'thought piece' (*) a couple of days ago that reminded me of a discussion that was common at my early PLoPs: how some design patterns 'break the rules' of object-oriented design.

Among the GoF patterns, Bridge and Singleton are often cited as patterns that violate some of the principles of OOD, or at least the heuristics that usually indicate good designs.

On the other hand, some of the GoF patterns don't violate OOD principles at all. Indeed, they seem to specialize, or at least illustrate, those principles. Strategy, Adapter, and State come to mind. I remember a comment Bobby Woolf made to me at PLoP '96 or '97 when he came across such a pattern in a workshop: "That's not a pattern. It's just object-oriented programming."

Of course, if a particular configuration of classes and methods recurs in a particular context, then acknowledging that recurrence has value as a tool for teaching and learning. What is obvious to one reader is not obvious to every reader. (Brian Marick wrote an interesting note on how this applies to comments in code.) Such a pattern also has value as part of a pattern language that documents something bigger.

Back in those days, Jim Coplien was just beginning to talk about the idea of symmetries, symmetry breaking, and their relationship to what is and isn't a pattern. I wish I could remember some of the pragmatic criteria he used for classifying the GoF patterns in this way, but my memory fails me.

One can see this idea at work with patterns of imperative programming, too. Control structures break the "symmetry" of sequential processing, while some looping patterns illustrate common design rules (e.g., Process All Items) and others violate more general rules (e.g., Loop and a Half).
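
For readers who haven't run across these loop patterns, here is a quick sketch of my own showing the two shapes side by side:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class LoopShapes {
        public static void main(String[] args) throws IOException {
            // Process All Items: do the same thing to every element,
            // with one entry, one exit, and the test at the top.
            int[] scores = { 88, 92, 75 };
            int total = 0;
            for (int score : scores) {
                total += score;
            }
            System.out.println("total = " + total);

            // Loop and a Half: we must read a line before we can test
            // it, so the exit sits in the middle of the body -- a
            // structured break with the test-at-the-top rule.
            BufferedReader in =
                new BufferedReader(new InputStreamReader(System.in));
            while (true) {
                String line = in.readLine();
                if (line == null) {       // end of input
                    break;
                }
                System.out.println("read: " + line);
            }
        }
    }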

The most important point in this thought piece is a pedagogical one, made in the conclusion: relating patterns to more general design rules and heuristics helps students learn both better. This should be obvious to us, but like many patterns, it needs to be said explicitly every once in a while. Relating the two creates a richer context for both and helps students learn to make better design decisions through a deeper understanding of the forces at play.

(*) Joseph Lang et al., "Object-Oriented Programming and Design Patterns", SIGCSE Bulletin 33(4):68-70, December 2001.


Posted by Eugene Wallingford | Permalink | Categories: Patterns

July 21, 2004 5:38 PM

Algorithmic Patterns

The most recent SIGCSE Bulletin (Volume 36, Number 2, June 2004) reached my mailbox yesterday. Several articles caught my eye, and I'll probably write about each of them over the next week or so.

I turned first to David Ginat's article on algorithmic patterns. David and I had a nice chat about elementary patterns during a lunch at SIGCSE 2003, so I knew of his interest in how the ideas of patterns might help us to understand algorithm design better, or at least to help us help our students to understand it better.

What is an algorithmic pattern? Ginat distinguishes between the mathematical features of an algorithm, such as the boundaries on a sequence of search ranges, and the computational features of how it might process data. For Ginat, an algorithmic pattern combines mathematical features with a certain computational "theme". For example, binary partition is an algorithmic pattern whose computational theme is "processing by halves" and whose mathematical core is a log n bound on the number of partitions considered.
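
Binary search is the canonical instance of this pattern. A sketch of my own, not Ginat's code:

    public class BinaryPartition {
        // Binary partition: each pass discards half of the remaining
        // range, so a sorted array of n items takes at most roughly
        // log2(n) probes -- the pattern's mathematical core.
        static int binarySearch(int[] sorted, int target) {
            int lo = 0;
            int hi = sorted.length - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;   // avoids overflow in (lo + hi)
                if (sorted[mid] == target) {
                    return mid;
                } else if (sorted[mid] < target) {
                    lo = mid + 1;               // discard the lower half
                } else {
                    hi = mid - 1;               // discard the upper half
                }
            }
            return -1;                          // not found
        }

        public static void main(String[] args) {
            int[] data = { 2, 3, 5, 7, 11, 13 };
            System.out.println(binarySearch(data, 7));   // prints 3
        }
    }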

The bulk of this paper illustrates the value of thinking in terms of an algorithmic pattern, using Ginat's example of the sliding delta pattern. A sliding delta is an on-the-fly accumulator that enables an algorithm to compute a value of interest by updating the accumulator as it encounters new data values. This counting pattern shows up in many forms, from computing the winner of an election to determining the maximally covered point in a collection of ranges. Ginat defines the pattern as a computational theme-mathematical core pair and illustrates its application to several seemingly disparate problems.
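
Ginat works through his own examples in the paper; what follows is my reading of the pattern applied to the maximally-covered-point problem. Sort the range endpoints, then sweep once, nudging a single counter up at each start and down at each end:

    import java.util.Arrays;

    public class MaxCoverage {
        // Each closed range [start, end] contributes +1 when it opens
        // and -1 when it closes. Sweeping the sorted endpoints while
        // updating one running counter -- the sliding delta -- finds
        // the maximum coverage in a single pass after sorting.
        static int maxCoverage(int[][] ranges) {
            int n = ranges.length;
            int[] starts = new int[n];
            int[] ends = new int[n];
            for (int i = 0; i < n; i++) {
                starts[i] = ranges[i][0];
                ends[i] = ranges[i][1];
            }
            Arrays.sort(starts);
            Arrays.sort(ends);

            int coverage = 0;
            int best = 0;
            int s = 0, e = 0;
            while (s < n) {
                if (starts[s] <= ends[e]) {
                    coverage++;              // a range opens: delta +1
                    best = Math.max(best, coverage);
                    s++;
                } else {
                    coverage--;              // a range closes: delta -1
                    e++;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            int[][] ranges = { { 1, 4 }, { 2, 6 }, { 5, 8 } };
            System.out.println(maxCoverage(ranges));   // prints 2
        }
    }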

The assertion that struck closest to home for me is that students who don't know about the sliding delta pattern are almost certainly doomed to implement naive and incredibly inefficient solutions to common problems in computing. At best, a student may reinvent a sliding delta on their own, but only after laborious effort to discover the idea.

This is one of the driving ideas behind the elementary patterns project. Students learn best when they learn the patterns of a domain, because these are the structures they will want to build into their algorithms, their designs, and their code. I design my courses on design and programming around the elementary patterns students need to know. I'm even designing my fall algorithms course around what Ginat calls algorithmic patterns.

But to me, they are just patterns. Calling them 'algorithmic' patterns, to distinguish them from 'design' patterns and 'coding' patterns, does serve a useful purpose, I suppose. It may help his readers avoid any negative connotations they have associated with the older terms. ("Design patterns are too complex for my students." "Coding patterns are just idioms.")

Recognizing algorithmic patterns as patterns can help us to make them more useful. Ginat's pattern form lacks a couple of elements considered essential by the patterns community. Their absence accounts for my only misgiving about this paper: Just showing students patterns isn't enough. They need to learn when to use the pattern.

For example, how do I know when to use a sliding delta? What features of a problem push me toward a sliding delta? What features push me away from the pattern? When I use a sliding delta, what new problems may result, and what other patterns could help me resolve them?

Context. Forces. Resulting context. Defining these for the sliding delta pattern would make it a lot more useful to its readers. They are essential knowledge for any algorithm designer: how to balance the requirements of the problem and its context with the features of the pattern.

Ginat laments that students often don't apply sliding delta to problems like his Dot Connections and Crocodiles. He suggests that introducing students to the pattern and occasionally categorizing problems solved by the pattern will be enough to help them learn to apply it to new problems. He does hint at more, though -- the instructor should "underline the effective role" of the pattern. Perhaps we can abstract from this some wisdom for recognizing a pattern's effectiveness beforehand?

I hope that my suggestion for improvement doesn't give the wrong impression. I like Ginat's idea a lot. I think this paper is a contribution to CS education, and I hope he develops this area further. I do hope, though, that he'll document the forces that drive algorithm design. They are the key to design -- and to learning how to design. As a Chinese adage holds, the Master paints not the created thing but the forces that created it.


Posted by Eugene Wallingford | Permalink | Categories: Patterns