May 18, 2016 11:27 AM

Confusion is the Sweat of Learning

That's a great line from Rhett Allain in Telling You the Answer Isn't the Answer:

Students are under the impression that when they are stuck and confused, they are doing something wrong. Think of it this way. What if you went to the gym to work out but you didn't get sweaty and you weren't sore or tired? You would probably feel like you really didn't get any exercise. The same is true for learning. Confusion is the sweat of learning.

As I read the article, I was a little concerned that Allain's story comes from a physics course designed for elementary education majors... Shouldn't education majors already know that learning is hard and that the teacher's job isn't simply to give answers? But then I realized how glad I am that these students have a chance to learn this lesson from a teacher who is patient enough to work through both the science and the pedagogy with them.

One of the biggest challenges for a teacher is designing workouts that ride along a thin line: confusing students just enough to stretch them into learning something valuable, without working them so hard that they become disheartened by the confusion and failure. This is hard enough to do when working with students individually. The larger and more diverse a class is, the more the teacher has to start shooting for the median, designing materials that work well enough often enough for most of the students and then doing triage with students on both ends of the curve.

Another is helping students learn to appreciate the confusion, perhaps even relish it. The payoff is worth the work.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 06, 2016 2:25 PM

Important Sentences about Learning Styles

From Learning styles: what does the research say?:

... it seems to me that the important "takeaway" from the research on learning styles is that teachers need to know about learning styles if only to avoid the trap of teaching in the style they believe works best for them. As long as teachers are varying their teaching style, then it is likely that all students will get some experience of being in their comfort zone and some experience of being pushed beyond it. Ultimately, we have to remember that teaching is interesting because our students are so different, but only possible because they are so similar. Of course each of our students is a unique individual, but it is extraordinary how effective well-planned group instruction can be.

Variety is an essential element of great teaching, and one I struggle to design into my courses. Students need both mental comfort and mental challenge. Changing pace every so often allows students to focus in earnest for a while and then consolidate knowledge in slower moments. Finally, changing format occasionally is one way to keep things interesting, and when students are interested, they are more willing to work. And that's where most learning comes from: practice and reflection.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 03, 2016 4:19 PM

Pair Programming is a Skill to be Learned

David Andersen offers a rather thorough list of ways that academics might adapt Google's coding practices to their research. It's a good read; check it out! I did want to comment on one small comment, because it relates to a common belief about pair programming:

But, of course, effective pair programming requires a lot of soft skills and compatible pairs. I'm not going to pretend that this solution works everywhere.

I don't pretend that pair programming works everywhere, either, or that everyone should adopt it, but I often wonder about statements like this one. Andersen seems to think that pair programming is a good thing and has helped him and members of his team produce high-quality code in the past. Why downplay the practice in a way he doesn't downplay other practices he recommends?

Throughout, the article encourages the use of new tools and techniques. These tools will alter the practice of his students. Some are complex enough that they will need to be taught, and practiced over some length of time, before they become an effective part of the team's workflow. To me, pair programming is another tool to be learned and practiced. It's certainly no harder than learning git...

Pair programming is a big cultural change for many programmers, and so it does require some coaching and extended practice. This isn't much different than the sort of "onboarding" that Andersen acknowledges will be necessary if he is to adopt some of Google's practices successfully in his lab upon his return. Pair programming takes practice and time, too, like most new skills.

I have seen the benefit of pair programming in an academic setting myself. Back when I used to teach our introductory course to freshmen, I had students pair every week in the closed lab sessions. We had thirty students in each section, but only fifteen computers in our lab. I paired students in a rotating fashion, so that over the course of fifteen weeks each student programmed with fifteen different classmates. We didn't use a full-on "pure" XP-style of pairing, but what we did was consistent with the way XP encourages pair programming.
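That rotation is just the classic round-robin "circle" method from tournament scheduling: fix one student in place, rotate everyone else one seat each week, and pair across the circle. Here is a sketch in Python, with helper names of my own choosing (thirty students over fifteen weeks, as in those labs):

```python
def weekly_pairs(students, weeks):
    """Round-robin pairing: fix the first student, rotate the rest one
    position each week, then pair each seat with the seat across the circle."""
    fixed, circle = students[0], list(students[1:])
    schedule = []
    for _ in range(weeks):
        ring = [fixed] + circle
        n = len(ring)
        # pair the i-th seat with the (n-1-i)-th seat across the circle
        schedule.append([(ring[i], ring[n - 1 - i]) for i in range(n // 2)])
        circle = circle[-1:] + circle[:-1]   # rotate one seat for next week
    return schedule

# 30 students, 15 weeks: 15 pairs per week, no partner repeated
schedule = weekly_pairs(list(range(30)), 15)
```

With thirty students, the method guarantees no repeated partners for up to twenty-nine weeks, so fifteen weeks gives each student fifteen distinct partners.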

This was a good first step for students. They got to know each other well and learned from one another. The better students often helped their partners in the way senior developers can help junior developers progress. In almost all cases, students helped each other find and fix errors. Even though later courses in the curriculum did not follow up with more pair programming, I saw benefits in later courses, in the way students interacted in the lab and in class.

I taught intro again a couple of falls ago after a long time away. Our lab has twenty-eight machines now, so I was not forced to use pair programming in my labs. I got lazy and let them work solo, with cross-talk. In the end, I regretted it. The students relied on me a lot more to help them find errors in their code, and they tended to work in the same insulated cliques throughout the semester. I don't think the group progressed as much as programmers, either, even though some individuals turned out fine.

A first-year intro lab is a very different setting than a research lab full of doctoral students. However, if freshmen can learn to pair program, I think grad students can, too.

Pair programming is more of a social change than a technical change, and that creates different issues than, say, automated testing and style checking. But it's not so different from the kind of change that capricious adherence to style guidelines or other kinds of code review impose on our individuality.

Are we computer scientists so maladjusted socially that we can't learn -- or can't be bothered to learn -- the new behaviors we need to program successfully in pairs? In my experience, no.

Like Andersen, I'm not advocating that anyone require pair programming in a lab. But: If you think that the benefits of pair programming exceed the cost, then I encourage you to consider having your research students or even your undergrads use it. Don't shy away because someone else thinks it can't work. Why deprive your students of the benefits?

The bottom line is this. Pair programming is a skill to be learned, like many others we teach our students.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 30, 2016 11:02 AM

Thinking About Untapped Extra Credit Opportunities

I don't usually offer extra credit assignments in my courses, but I did this semester. Students submitted their last scheduled homework early in the last week of classes: an interpreter for a small arithmetic language with local variables. There wasn't enough time left to assign another homework, but I had plenty of ideas for the next version of our interpreter. Students had just learned how to implement both functions and mutable data, so they could add functions, true variables, and sequences of expressions to the language. They could extend their REPL to support top-level definitions and state. They could add primitive data objects to the language, or boolean values and operators. If they added booleans, they could add an if expression. If they were willing to learn some new Racket, they could load programs from a text file and evaluate them. I had plenty of ideas for them to try!
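For the curious, here is the flavor of one such extension: adding booleans and an if expression to a tiny evaluator. This is a Python sketch of the idea only -- the course interpreter is written in Racket, and the representation of expressions as nested tuples here is mine, not the one we used in class:

```python
# A tiny evaluator over nested tuples, extended with booleans and `if`.
# Expressions: numbers, True/False, ("+", e1, e2), ("<", e1, e2),
# and ("if", test, then, else).
def evaluate(exp):
    if isinstance(exp, (int, float, bool)):
        return exp                          # literals evaluate to themselves
    op = exp[0]
    if op == "+":
        return evaluate(exp[1]) + evaluate(exp[2])
    if op == "<":
        return evaluate(exp[1]) < evaluate(exp[2])
    if op == "if":                          # evaluate only the chosen branch
        return evaluate(exp[2]) if evaluate(exp[1]) else evaluate(exp[3])
    raise ValueError(f"unknown operator: {op}")

print(evaluate(("if", ("<", 1, 2), ("+", 3, 4), 0)))   # prints 7
```

The key move, pedagogically, is that `if` must not evaluate both branches eagerly -- which is exactly the kind of thing students discover when they try it.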

So I asked, "If I write up an optional assignment, how many of you would take a stab at some extra credit?" Approximately twenty of the twenty-five students present raised their hands.

I just downloaded the submission folder. The actual number of attempts was a complement of the number of hands raised: five. And, with only one exception, the students who tried the extra credit problems are the students who need it least. They are likely to get an A or a high B in the course anyway. The students who most need extra credit (and the extra practice) didn't attempt the extra credit.

SMH, as the kids say these days. But the more I thought about it, the more this made sense.

  • College students are the most optimistic people I have ever met. Many students probably raised their hand with every intention of submitting extra credit solutions. Then the reality of the last week of classes hit, and they ran out of time.

  • The students who submitted new code are likely the ones doing well enough in their other courses to be able to spare extra time for this course. They are also probably used to doing well in all of their courses, and a little uncertainty about their grade in this course may have spurred them into action. So, they may have had both more time and more internal motivation to do extra work.

  • To be fair, I don't know that the other students didn't attempt to solve some of the problems. Maybe some tried but did not submit their work. They may not have been satisfied with the quality of their code and didn't have time to seek help from me before the deadline.

  • And, if we are being honest, there is at least one more possibility. A few of the students who find themselves needing extra credit at the end of the semester got themselves into that position by not being very disciplined in their work habits. Those students may be as optimistic as any other students, but they aren't likely to conjure up better work habits on short notice.

These reflections have me thinking... If I want to help the students who most need the help, I need to find ways to reach them sooner. I did try one thing earlier this semester that worked well for many students: the opportunity to rewrite one of their exams at home with access to all the course materials. A couple of students wrote surprisingly candid and insightful assessments of why they had performed below par under test conditions and supplied better work on their second attempt. I hope that experience helps them as they prepare for the final exam.

I've been teaching a long time. I still have much to learn.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 29, 2016 3:30 PM

A Personal Pantheon of Programming Books

Michael Fogus, in the latest issue of Read-Eval-Print-λove, writes:

The book in question was Thinking Forth by Leo Brodie (Brodie 1987) and upon reading it I immediately put it into my own "personal pantheon" of influential programming books (along with SICP, AMOP, Object-Oriented Software Construction, Smalltalk Best Practice Patterns, and Programmers Guide to the 1802).

Mr. Fogus has good taste. Programmers Guide to the 1802 is new to me. I guess I need to read it.

The other five books, though, are in my own pantheon of influential programming books. Some readers may be unfamiliar with these books or the acronyms, or unaware that so many of them are available free online. Here are a few links and details:

  • Thinking Forth teaches us how to program in Forth, a concatenative language in which programs run against a global stack. As Fogus writes, though, Brodie teaches us so much more. He teaches a way to think about programs.

  • SICP is Structure and Interpretation of Computer Programs, hailed by many as the greatest book on computer programming ever written. I am sympathetic to this claim.

  • AMOP is The Art of the Metaobject Protocol, a gem of a book that far too few programmers know about. It presents a very different and more general kind of OOP than most people learn, the kind possible in a language like Common Lisp. I don't know of an authorized online version of this book, but there is an HTML copy available.

  • Object-Oriented Software Construction is Bertrand Meyer's opus on OOP. It did not affect me as deeply as the other books on this list, but it presents the most complete, internally consistent software engineering philosophy of OOP that I know of. Again, there seems to be an unauthorized version online.

  • I love Smalltalk Best Practice Patterns and have mentioned it a couple of times over the years [ 1 | 2 ]. Ounce for ounce, it contains more practical wisdom for programming in the trenches than any book I've read. Don't let "Smalltalk" in the title fool you; this book will help you become a better programmer in almost any language and any style. I have a PDF of a pre-production draft of SBPP, and Stephane Ducasse has posted a free online copy, with Kent's blessing.

Paradigms of Artificial Intelligence Programming

There is one book on my own list that Fogus did not mention: Paradigms of Artificial Intelligence Programming, by Peter Norvig. It holds perhaps the top position in my personal pantheon. Subtitled "Case Studies in Common Lisp", this book teaches Common Lisp, AI programming, software engineering, and a host of other topics in a classical case studies fashion. When you finish working through this book, you are not only a better programmer; you also have working versions of a dozen classic AI programs and a couple of language interpreters.

Reading Fogus's paragraph of λove for Thinking Forth brought to mind how I felt when I discovered PAIP as a young assistant professor. I once wrote a short blog entry praising it. May these paragraphs stand as a greater testimony of my affection.

I've learned a lot from other books over the years, both books that would fit well on this list (in particular, A Programming Language by Kenneth Iverson) and others that belong on a different list (say, Gödel, Escher, Bach -- an almost incomparable book). But I treasure certain programming books in a very personal way.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Software Development, Teaching and Learning

April 05, 2016 4:06 PM

Umberto Eco and the Ineffable Power of Books

In What Unread Books Can Teach Us Oliver Burkeman relates this story about novelist and scholar Umberto Eco:

While researching his own PhD, Eco recalls, he got deeply stuck, and one day happened to buy a book by an obscure 19th-century abbot, mainly because he liked the binding. Idly paging through it, he found, in a throwaway line, a stunning idea that led him to a breakthrough. Who'd have predicted it? Except that, years later, when a friend asked to see the passage in question, he climbed a ladder to a high bookshelf, located the book... and the line wasn't there. Stimulated by the abbot's words, it seems, he'd come up with it himself. You never know where good ideas will come from, even when they come from you.

A person can learn something from a book he or she has read, even if the book doesn't contain what the person learned. This is a much steadier path to knowledge than resting in the comfort that all information is available at the touch of a search engine.

A person's anti-library helps to make manifest what one does not yet know. As Eco reminds us, humility is an essential ingredient in this prescription.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

March 31, 2016 2:00 PM

TFW Your Students Get Abstraction

A colleague sent me the following exchange from his class, with the tag line "Best comments of the day." His students were working in groups to design a Java program for Conway's Game of Life.

Student 1: I can't comprehend what you are saying.

Student 2: The board doesn't have to be rectangular, does it?

Instructor: In Conway's design, it was. But abstractly, no.

Student 3: So you could have a board of different shapes, or you could even have a three-dimensional "board". Each cell knows its neighbors even if we can't easily display it to the user.

Instructor: Sure, "neighbor" is an abstract concept that you can implement differently depending on your need.

Student 2: I knew there was a reason I took linear algebra.

Student 1: Ok. So let's only allow rectangular boards.

Maybe Student 1 still can't comprehend what everyone is saying... or perhaps he or she understands perfectly well and is a pragmatist. YAGNI for the win!
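The abstraction the students discovered is easy to make concrete: parameterize the rules over a neighbors function, and the board's shape -- rectangular, toroidal, even three-dimensional -- falls out of that one choice. A sketch in Python, with names of my own choosing (the class was designing in Java):

```python
# Life with "neighbor" as an abstraction: the board is a set of live
# cells, and the topology is whatever the neighbors function says it is.
def step(live, neighbors):
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    # standard rules: born with 3 live neighbors, survives with 2 or 3
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in live)}

def grid_neighbors(cell):
    """The classic rectangular topology; swap this out for any other."""
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

# A blinker oscillates with period 2 under the usual rules.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker, grid_neighbors)))  # [(1, 0), (1, 1), (1, 2)]
```

Nothing in `step` knows the board is rectangular, or finite, or two-dimensional. That is the point the instructor was making.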

It always makes me happy when a student encounters a situation in which linear algebra is useful and recognizes its applicability unprompted.

I salute all three of these students, and the instructor who is teaching the class. A good day.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 30, 2016 3:21 PM

Quick Hits at the University

This morning I read three pieces with some connection to universities and learning. Each had one passage that made me smart off silently as I pedaled.

From The Humanities: What's The Big Idea?:

Boyarin describes his own research as not merely interdisciplinary but "deeply post-disciplinary." (He jokes that when he first came to Berkeley, his dream was to be 5 percent in 20 departments.)

Good luck getting tenure that way, dude.

"Deeply post-disciplinary" is a great bit of new academic jargon. Universities are very much organized by discipline. Figuring out how to support scholars who work outside the lines is a perpetual challenge, one that we really should address at scale if we want to enable universities to evolve.

From this article on Bernie Sanders's free college plan:

Big-picture principles are important, but implementation is important, too.

Hey, maybe he just needs a programmer.

Implementing big abstractions is hard enough when the substance is technical. When you throw in social systems and politics, implementing any idea that deviates very far from standard practice becomes almost impossible. Big Ball of Mud, indeed.

From Yours, Isaac Asimov: A Life in Letters:

Being taught is the intellectual analog of being loved.

I'll remind my students of this tomorrow when I give them Exam 3, on syntactic abstraction. "I just called to say 'I love you'."

Asimov is right. When I think back on all my years in school, I feel great affection for so many of my teachers, and I recall feeling their affection for me. Knowledge is not only power, says Asimov; it is happiness. When people help me learn, they offer me new ways to be happy.

(The Foundation Trilogy makes me happy, too.)

Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

March 18, 2016 9:58 AM

Thoughts for Programmers from "Stay on the Bus"

Somehow, I recently came across a link to Stay on the Bus, an excerpt from a commencement speech Arno Rafael Minkkinen gave at the New England School of Photography in June 2004. It is also titled "The Helsinki Bus Station Theory: Finding Your Own Vision in Photography". I almost always enjoy reading the thoughts of an artist on his or her own art, and this was no exception. I also usually hear echoes of what I feel about my own arts and avocations. Here are three.

What happens inside your mind can happen inside a camera.

This is one of the great things about any art. What happens inside your mind can happen in a piano, on a canvas, or in a poem. When people find the art that channels their mind best, beautiful things -- and lives -- can happen.

One of the things I like about programming is that it is really a meta-art. Whatever happens in your mind can happen inside a camera, inside a piano, on a canvas, or in a poem. Yet whatever happens inside a camera, inside a piano, or on a canvas can happen inside a computer, in the form of a program. Computing is a new medium for experimenting, simulating, and creating.

Teachers who say, "Oh, it's just student work," should maybe think twice about teaching.

Amen. There is no such thing as student work. It's the work our students are ready to make at a particular moment in time. My students are thinking interesting thoughts and doing their best to make something that reflects those thoughts. Each program, each course in their study is a step along a path.

All work is student work. It's just that some of us students are at a different point along our paths.

Georges Braque has said that out of limited means, new forms emerge. I say, we find out what we will do by knowing what we will not do.

This made me think of an entry I wrote many years ago, Patterns as a Source of Freedom. Artists understand better than programmers sometimes that subordinating to a form does not limit creativity; it unleashes it. I love Minkkinen's way of saying this: we find out what we will do by knowing what we will not do. In programming as in art, it is important to ask oneself, "What will I not do?" This is how we discover what we will do, what we can do, and even what we must do.

Those are a few of the ideas that stood out to me as I read Minkkinen's address. The Helsinki Bus Station Theory is a useful story, too.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 11, 2016 3:59 PM

The Irony of "I Didn't Have Time to Use Your Technique..."

All last spring, I planned to blog about my Programming Languages class but never got around to writing more than a couple of passing thoughts. I figured this spring would be different, yet here we are at the end of Week 9 and I've not even mentioned the course. This is how guys like me fail in personal relationships. We think a lot of good thoughts but don't follow through with our time and attention.

I am tempted to say that this has been a seesaw semester, but on reflection things have gone pretty well. I have a good group of students, most of whom are engaged in the topic and willing to interact in class. (Lack of interest and engagement was a bit of a problem last spring.) We've had fun in and out of class.

The reason it sometimes feels like I'm on a seesaw is that the down moments stand out more in my memory than they deserve to. You'd think that, as long as I've been teaching, I would have figured out how to manage this dynamic more effectively. Maybe it's just a part of being a teacher. We want things to go perfectly for our students, and when they don't we have to wonder why not.

One source of concern this semester has been the average exam score for the first two tests. They have been lower than historic averages in the course. Have I changed my expectations? Have I done something differently in the classroom? After taking a hard look back on my previous semesters' notes and assignments, I think not. My job now is to help this class reach the level I think they can reach. What can I do differently going forward? What can they do differently, and how do I help them do it?

I know they are capable of growth. Early last month, PhD Comics ran a strip titled The Five Most Typed Words in Academia. The winner was "Sorry for the late reply". At the time, the five most common words my students had said to me in the young semester were:

I didn't read the instructions.

For example, the student would say, "This problem was hard because I didn't know how to take the remainder properly." Me: "I gave that information in the instructions for the assignment." Student: "Oh, I didn't read the instructions."

Fortunately, we seem to have moved beyond that stage of our relationship, as most students have come to see that the assignment may actually include some clues to help them out. Or maybe they've just stopped telling me that they don't read the instructions. If so, I appreciate them sparing my feelings.

A related problem is a perennial issue in this course: We learn a new technique, and some students choose not to use it. Writing code to process language expressions is a big part of the course, so we study some structural recursion patterns for processing a grammar specified in BNF. Yet a few students insist on whacking away at the problem with nothing more than a cond expression and a brave heart. When I ask them why, they sometimes say:

I didn't have time to use the technique we learned in class, so...

... so they spent twice as long trying to find their way to a working solution. Even when they find one, the code is generally unreadable even to them. Functional programming is hard, they say.

Fortunately, again, many seem to have moved beyond this stage and are now listening to their data structures. The result is beautiful code: short, clear, and expressive. Grading such programs is a pleasure.
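"Listening to your data structures" means writing one case of code per alternative in the BNF: the function's shape follows the data's shape. For a toy grammar of lambda-calculus expressions, the pattern looks something like this. (A Python sketch of the idea; in class we write these in Racket, and the tuple representation here is mine.)

```python
# Structural recursion over a toy grammar:
#   <exp> ::= <varref> | (lambda (<var>) <exp>) | (<exp> <exp>)
# One branch in the code per alternative in the BNF.
def free_vars(exp, bound=frozenset()):
    if isinstance(exp, str):                      # <varref>
        return set() if exp in bound else {exp}
    if exp[0] == "lambda":                        # (lambda (<var>) <exp>)
        (_, (param,), body) = exp
        return free_vars(body, bound | {param})
    rator, rand = exp                             # (<exp> <exp>)
    return free_vars(rator, bound) | free_vars(rand, bound)

print(free_vars(("lambda", ("x",), ("f", "x"))))   # prints {'f'}
```

Contrast this with whacking away at a single flat cond: here each recursive call is dictated by the grammar, so there is nothing left to puzzle out.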

It recently occurred to me, though, that I have been guilty of the "I didn't have time to use your technique..." error myself. While trying to improve the recursive programming unit of the course over the last few years, I seem to have invented my own version of the Design Recipe from How to Design Programs. In the spirit of Greenspun's Tenth Rule, I have probably reinvented an ad hoc, informally-specified, bug-ridden, pedagogically less sound version of the Design Recipe.

As we used to say in Usenet newsgroups, "Pot. Kettle. Black." Before the next offering of the course, I intend to do my homework properly and find a way to integrate HtDP's well-tested technique into my approach.
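For readers who don't know it, the Design Recipe asks the programmer to state a signature and purpose, work out examples, derive a code template from the shape of the data, fill in the template, and then turn the examples into tests. A rough sketch of those steps applied to one small function (my paraphrase in Python; HtDP's own presentation uses Racket):

```python
# The Design Recipe, roughly, applied to one small function.
# 1. Signature and purpose:
#    sum_list : list-of-numbers -> number
#    Produce the sum of the numbers in the list.
# 2. Examples, written before any code:
#    sum_list([]) == 0,  sum_list([1, 2, 3]) == 6
# 3. Template from the data definition -- a list is either empty
#    or a first element plus a rest -- and 4. the filled-in template:
def sum_list(nums):
    if not nums:                            # empty case
        return 0
    return nums[0] + sum_list(nums[1:])     # first + sum of rest

# 5. The examples become tests:
assert sum_list([]) == 0
assert sum_list([1, 2, 3]) == 6
```

The discipline is in the order: the data definition drives the template, and the examples exist before the body does.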

These things stand out in my mind, yet I think that the course is going pretty well. And just when I begin to wonder whether I've been letting the class down, a few students stop in my office and say that this is one of their favorite CS courses ever. They are looking forward to the compiler course this fall. I'm not sure students realize the effect such words have on their instructors -- at least on this one.

Off to spring break we go. When we get back: lexical addressing and (eventually) a look at stateful programming. Fun and surprise await us all.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 02, 2016 4:45 PM

Why Bother With Specialty Languages?

In Sledgehammers vs Nut Crackers, Thomas Guest talks about pulling awk off the shelf to solve a fun little string-processing problem. He then shows a solution in Python, one of his preferred general-purpose programming languages. Why bother with languages like awk when you have Python at the ready? Guest writes:

At the outset of this post I admitted I don't generally bother with awk. Sometimes, though, I encounter the language and need to read and possibly adapt an existing script. So that's one reason to bother. Another reason is that it's elegant and compact. Studying its operation and motivation may help us compose and factor our own programs -- programs far more substantial than the scripts presented here, and in which there will surely be places for mini-languages of our own.

As I watch some of my students struggle this semester to master Racket, recursive programming, and functional style, I offer them hope that learning a new language and a new style will make them better Python and Java programmers, even if they never write another Racket or Lisp program again. The more different ways we know how to think about problems and solutions, the more effective we can be as solvers of problems. Of course, Racket isn't a special purpose language, and a few students find they so like the new style that they stick with the language as their preferred tool.
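Guest's problem isn't reproduced here, but the flavor of the comparison comes through in a classic small task: counting word frequencies. The awk solution is an implicit line loop plus an associative array; a general-purpose language spells the same structure out. (This example is mine, not Guest's.)

```python
# The awk version is an implicit loop over lines plus an associative array:
#   awk '{ for (i = 1; i <= NF; i++) n[$i]++ }
#        END { for (w in n) print w, n[w] }'
# The Python version makes the same structure explicit:
from collections import Counter

def word_counts(text):
    return Counter(text.split())

print(word_counts("the quick fox and the lazy dog and the cat")["the"])  # prints 3
```

Neither is "better"; the point is that knowing the compact awk idiom changes how one factors the Python.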

Experienced programmers understand what Guest is saying, but in the trenches of learning, it can be hard to appreciate the value of knowing different languages and being able to think in different ways. My sledgehammer works fine, my students say; why am I learning to use a nutcracker? I have felt that sensation myself.

I try to keep this feeling in mind as my students work hard to master a new way of thinking. This helps me empathize with their struggle, all the while knowing that Racket will shape how some of them think about every program they write in the future.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 14, 2016 11:28 AM

Be Patient, But Expect Better. Then Make Better.

In Reversing the Tide of Declining Expectations Matthew Butterick exhorts designers to expect more from themselves, as well as from the tools they use. When people expect more, other people sometimes tell them to be patient. There is a problem with being patient:

[P]atience is just another word for "let's make it someone else's problem." ... Expectations count too. If you have patience, and no expectations, you get nothing.

But what if you find the available tools lacking and want something better?

Scientists often face this situation. My physicist friends seem always to be rigging up some new apparatus in order to run the experiments they want to run. For scientists and so many other people these days, though, if they want a new kind of tool, they have to write a computer program.

Butterick tells a story that shows designers can do the same:

Let's talk about type-design tools. If you've been at the conference [TYPO Berlin, 2012], maybe you saw Petr van Blokland and Frederick Berlaen talking about RoboFont yesterday. But that is the endpoint of a process that started about 15 years ago when Erik and Petr van Blokland, and Just van Rossum (later joined by many others) were dissatisfied with the commercial type-design tools. So they started building their own. And now, that's a whole ecosystem of software that includes code libraries, a new font-data format called UFO, and applications. And these are not hobbyist applications. These are serious pieces of software being used by professional type designers.

What makes all of this work so remarkable is that there are no professional software engineers here. There's no corporation behind it all. It's a group of type designers who saw what they needed, so they built it. They didn't rely on patience. They didn't wait for someone else to fix their problems. They relied on their expectations. The available tools weren't good enough. So they made better.

That is fifteen years of patience. But it is also patience and expectation in action.

To my mind, this is the real goal of teaching more people how to program: programmers don't have to settle. Authors and web designers create beautiful, functional works. They shouldn't have to settle for boring or cliché type on the web, in their ebooks, or anywhere else. They can make better. Butterick illustrates this approach to design himself with Pollen, his software for writing and publishing books. Pollen is a testimonial to the power of programming for authors (as well as a public tribute to the expressiveness of a programming language).

Empowering professionals to make better tools is a first step, but it isn't enough. Until programming as a skill becomes part of the culture of a discipline, better tools will not always be used to their full potential. Butterick gives an example:

... I was speaking to a recent design-school graduate. He said, "Hey, I design fonts." And I said, "Cool. What are you doing with RoboFab and UFO and Python?" And he said, "Well, I'm not really into programming." That strikes me as a really bad attitude for a recent graduate. Because if type designers won't use the tools that are out there and available, type design can't make any progress. It's as if we've built this great spaceship, but none of the astronauts want to go to Mars. "Well, Mars is cool, but I don't want to drive a spaceship. I like the helmet, though." Don't be that guy. Go the hell to Mars.

Don't be that person. Go to Mars. While you are at it, help the people you know to see how much fun programming can be and, more importantly, how it can help them make things better. They can expect more.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 24, 2016 10:33 AM

Learn Humility By Teaching

In The Books in My Life, Henry Miller writes about discussing books with an inquisitive friend:

I remember this short period vividly because it was an exercise in humility and self-control on my part. The desire to be absolutely truthful with my friend caused me to realize how very little I knew, how very little I could reveal, though he always maintained that I was a guide and mentor to him. In brief, the result of those communions was that I began to doubt all that I had blithely taken for granted. The more I endeavored to explain my point of view, the more I floundered. He may have thought I acquitted myself well, but not I. Often, on parting from him, I would continue the inner debate interminably.

I am guessing that most anyone who teaches knows the feeling Miller describes. I feel it all the time.

I'm feeling it again this semester while teaching my Programming Languages and Paradigms course. We're learning Racket as a way to learn to talk about programming languages, and also as a vehicle for learning functional programming. One of my goals this semester is to be more honest. Whenever I find a claim in my lecture notes that sounds like dogma that I'm asking students to accept on faith, I'm trying to explain in a way that connects to their experience. Whenever students ask a question about why we do something in a particular way, I'm trying to help them really to see how the new way is an improvement over what they are used to. If I can't, I admit that it's convention and resolve not to be dogmatic about it with them.

This is a challenge for me. I am prone to dogma, and, having programmed functionally in Scheme for a long time, much of what my students experience learning Racket is deeply compiled in my brain. Why do we do that? I've forgotten, if I ever knew. I may have a vague memory that, when I don't do it that way, chaos ensues. Trust me! Unfortunately, that is not a convincing way to teach. Trying to give better answers and more constructive explanations gives rise to the sort of floundering that Miller talks about. After class, the inner debate continues as I try to figure out what I know and why, so that I can do better.

Some natural teachers may find this easy, but for me, learning to answer questions in a way that really helps students has been a decades-long lesson in humility.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 08, 2015 3:55 PM

A Programming Digression: Generating Excellent Numbers


Whenever I teach my compiler course, it seems as if I run across a fun problem or two to implement in our source language. I'm not sure if that's because I'm looking or because I'm just lucky to read interesting blogs and Twitter feeds.

Farey sequences as Ford circles

For example, during a previous offering, I read on John Cook's blog about Farey's algorithm for approximating real numbers with rational numbers. This was a perfect fit for the sort of small language that my students were writing a compiler for, so I took a stab at implementing it. Because our source language, Klein, was akin to an integer assembly language, I had to unravel the algorithm's loops and assignment statements into function calls and if statements. The result was a program that computed an interesting result and that tested my students' compilers in a meaningful way. The fact that I had great fun writing it was a bonus.

This Semester's Problem

Early this semester, I came across the concept of excellent numbers. A number m is "excellent" if, when you split the sequence of its digits into two halves, a and b, b² - a² equals m. 48 is the only two-digit excellent number (8² - 4² = 48), and 3468 is the only four-digit excellent number (68² - 34² = 3468). Working with excellent numbers requires only integers and arithmetic operations, which makes them a perfect domain for our programming language.
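Stated directly in Python (the language I use for exploration below), the definition is a short brute-force check. This is just a sketch of the definition itself, not the efficient search discussed next; the function name is mine:

```python
def is_excellent(m):
    """Check the definition directly: split the digits of m into
    halves a and b, then test whether b^2 - a^2 == m."""
    digits = str(m)
    if len(digits) % 2 != 0:
        return False          # excellent numbers have an even number of digits
    half = len(digits) // 2
    a, b = int(digits[:half]), int(digits[half:])
    return b * b - a * a == m

print([m for m in range(10, 10000) if is_excellent(m)])   # [48, 3468]
```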

My first encounter with excellent numbers was Brian Foy's Computing Excellent Numbers, which discusses ways to generate numbers of this form efficiently in Perl. Foy uses some analysis by Mark Jason Dominus, written up in An Ounce of Theory Is Worth a Pound of Search, that drastically reduces the search space for candidate a's and b's. A commenter on the Programming Praxis article uses the same trick to write a short Python program to solve that challenge. Here is an adaptation of that program which prints all of the 10-digit excellent numbers:

    for a in range(10000, 100000):
        b = ((4*a**2+400000*a+1)**0.5+1) / 2.0
        if b == int(b):
            print( int(str(a)+str(int(b))) )

I can't rely on strings or real numbers to implement this in Klein, but I could see some alternatives... Challenge accepted!

My Standard Technique

We do not yet have a working Klein compiler in class, so I prefer not to write complex programs directly in the language. It's too hard to get subtle semantic issues correct without being able to execute the code. What I usually do is this:

  • Write a solution in Python.
  • Debug it until it is correct.
  • Slowly refactor the program until it uses only a Klein-like subset of Python.

This produces what I hope is a semantically correct program, using only primitives available in Klein.

Finally, I translate the Python program into Klein and run it through my students' Klein front-ends. This parses the code to ensure that it is syntactically correct and type-checks the code to ensure that it satisfies Klein's type system. (Manifest typing is the one feature Klein has that Python does not.)

As mentioned above, Klein is something like integer assembly language, so converting to a Klein-like subset of Python means giving up a lot of features. For example, I have to linearize each loop into a sequence of one or more function calls, recursing at some point back to the function that kicks off the loop. You can see this at play in my Farey's algorithm code from before.
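To make the shape of that transformation concrete, here is a small illustration in Python (my own toy example, not code from the Klein library): an ordinary loop, and its linearized counterpart in a Klein-like subset, where the loop body becomes a function that recurses back to itself.

```python
def sum_to_n_loop(n):
    # ordinary Python: a while loop with assignment statements
    total, i = 0, 1
    while i <= n:
        total = total + i
        i = i + 1
    return total

def sum_to_n(n):
    # Klein-like subset: no loops, no assignments, just calls and ifs
    return sum_helper(1, n, 0)

def sum_helper(i, n, total):
    # the unraveled loop: recurse until the loop condition fails
    if i > n:
        return total
    return sum_helper(i + 1, n, total + i)

print(sum_to_n_loop(100), sum_to_n(100))   # 5050 5050
```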

I also have to eliminate all data types other than booleans and integers. For the program to generate excellent numbers, the most glaring hole is a lack of real numbers. The algorithm shown above depends on taking a square root, getting a real-valued result, and then coercing a real to an integer. What can I do instead?

the iterative step in Newton's method

Not to worry. sqrt is not a primitive operator in Klein, but we have a library function. My students and I implement useful utility functions whenever we encounter the need and add them to a file of definitions that we share. We then copy these utilities into our programs as needed.

sqrt was one of the first complex utilities we implemented, years ago. It uses Newton's method to find the roots of an integer. For perfect squares, it returns the argument's true square root. For all other integers, it returns the largest integer less than or equal to the true root.
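The library function's code isn't shown here, but an integer-only Newton's method with the behavior described (the true root for perfect squares, the floor of the root otherwise) can be sketched in Python along these lines; the details are my reconstruction, not the actual Klein utility:

```python
def int_sqrt(n):
    """Largest integer r with r*r <= n, via Newton's method on x^2 - n."""
    if n < 2:
        return n
    x = n
    while x * x > n:
        x = (x + n // x) // 2   # Newton step, in integer arithmetic
    return x

print(int_sqrt(3468), int_sqrt(4624))   # 58 68
```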

With this answer in hand, we can change the Python code that checks whether a purported square root b is an integer using type coercion:

    b == int(b)
into Klein code that checks whether the square of a square root equals the original number:
    isSquareRoot(r : integer, n : integer) : boolean
      n = r*r

(Klein is a pure functional language, so the return statement is implicit in the body of every function. Also, without assignment statements, Klein can use = as a boolean operator.)

Generating Excellent Numbers in Klein

I now have all the Klein tools I need to generate excellent numbers of any given length. Next, I needed to generalize the formula at the heart of the Python program to work for lengths other than 10.

For any given desired length, let n = length/2. We can write any excellent number m in two ways:

  • a·10ⁿ + b (which defines it as the concatenation of its front and back halves)
  • b² - a² (which defines it as excellent)

If we set the two m's equal to one another and solve for b, we get:

    b = (1 + sqrt[4a² + 4(10ⁿ)a + 1]) / 2

Now, as in the algorithm above, we loop through all values for a with n digits and find the corresponding value for b. If b is an integer, we check to see if m = ab is excellent.
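Following step one of my usual technique, the generalized search can be written in Python first. In this sketch (the function name is mine), math.isqrt stands in for our Klein sqrt utility, so everything stays in integer arithmetic:

```python
from math import isqrt

def excellent_numbers(length):
    """All excellent numbers with the given (even) number of digits."""
    n = length // 2
    found = []
    for a in range(10 ** (n - 1), 10 ** n):      # all n-digit values of a
        disc = 4 * a * a + 4 * (10 ** n) * a + 1
        s = isqrt(disc)
        if s * s == disc and (s + 1) % 2 == 0:   # b is an integer only if s is odd
            b = (s + 1) // 2
            m = a * 10 ** n + b                  # the concatenation of a and b
            if b < 10 ** n and b * b - a * a == m:
                found.append(m)
    return found

print(excellent_numbers(4))   # [3468]
```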

The Python loop shown above works plenty fast, but Klein doesn't have loops. So I refactored the program into one that uses recursion. This program is slower, but it works fine for numbers up to length 6:

    > python3.4 6

Unfortunately, this version blows out the Python call stack for length 8. I set the recursion limit to 50,000, which helps for a while...

    > python3.4 8
    Segmentation fault: 11


Next Step: See Spot Run

The port to an equivalent Klein program was straightforward. My first version had a few small bugs, which my students' parsers and type checkers helped me iron out. Now I await their full compilers, due at the end of the week, to see it run. I wonder how far we will be able to go in the Klein run-time system, which sits on top of a simple virtual machine.

If nothing else, this program will repay any effort my students make to implement the proper handling of tail calls! That will be worth a little extra credit...

This programming digression has taken me several hours spread out over the last few weeks. It's been great fun! The purpose of Klein is to help my students learn to write a compiler. But the programmer in me has fun working at this level, trying to find ways to implement challenging algorithms and then refactoring them to run deeper or faster. I'll let you know the results soon.

I'm either a programmer or crazy. Probably both.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 20, 2015 6:02 PM

The Scientific Value of Reading Old Texts

In Numbers, Toys and Music, the editors of Plus Magazine interview Manjul Bhargava, who won a 2014 Fields Medal for his work on a problem involving a certain class of square numbers.

Bhargava talked about getting his start on problems of this sort not by studying Gauss's work from the nineteenth century, but by reading the work of the seventh-century mathematician Brahmagupta in the original Sanskrit. He said it was exciting to read original texts and old translations of original texts from at least two perspectives. Historically, you see an idea as it is encountered and discovered. It's an adventure story. Mathematically, you see the idea as it was when it was discovered, before it has been reinterpreted over many years by more modern mathematicians, using newer, fancier, and often more complicated jargon than was available to the original solver of the problem.

He thinks this is an important step in making a problem your own:

So by going back to the original you can bypass the way of thinking that history has somehow decided to take, and by forgetting about that you can then take your own path. Sometimes you get too influenced by the way people have thought about something for 200 years, that if you learn it that way that's the only way you know how to think. If you go back to the beginning, forget all that new stuff that happened, go back to the beginning. Think about it in a totally new way and develop your own path.

Bhargava isn't saying that we can ignore the history of math since ancient times. In his Fields-winning work, he drew heavily on ideas about hyperelliptic curves that were developed over the last century, as well as computational techniques unavailable to his forebears. He was prepared with experience and deep knowledge. But by going back to Brahmagupta's work, he learned to think about the problem in simpler terms, unconstrained by the accumulated expectations of modern mathematics. Starting from a simpler set of ideas, he was able to make the problem his own and find his own way toward a solution.

This is good advice in computing as well. When CS researchers tell us to read the work of McCarthy, Newell and Simon, Sutherland, and Engelbart, they are channeling the same wisdom that helped Manjul Bhargava discover new truths about the structure of square numbers.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 30, 2015 4:35 PM

Taking Courses Broad and Wide

Nearly nine years ago, digital strategist Russell Davies visited the University of Oregon to work with students and faculty in the advertising program and wrote a blog entry about his stint there. Among his reflections on what the students should be doing and learning, he wrote:

We're heading for a multi-disciplinary world and that butts right up against a university business model. If I were preparing myself for my job right now I'd do classes in film editing, poetry, statistics, anthropology, business administration, copyright law, psychology, drama, the history of art, design, coffee appreciation, and a thousand other things. Colleges don't want you doing that, that destroys all their efficiencies, but it's what they're going to have to work out.

I give similar advice to prospective students of computer science: If they intend to take their CS degrees out into the world and make things for people, they will want to know a little bit about many different things. To maximize the possibilities of their careers, they need a strong foundation in CS and an understanding of all the things that shape how software and software-enhanced gadgets are imagined, made, marketed, sold, and used.

Just this morning, a parent of a visiting high school student said, after hearing about all the computer science that students learn in our programs, "So, our son should probably drop his plans to minor in Spanish?" They got a lot more than a "no" out of me. I talked about the opportunities to engage with the growing population of Spanish-speaking Americans, even here in Iowa; the opportunities available to work for companies with international divisions; and how learning a foreign language can help students study and learn programming languages differently. I was even able to throw in a bit about grammars and the role they play in my compiler course this semester.

I think the student will continue with his dream to study Spanish.

I don't think that the omnivorous course of study that Davies outlines is at odds with the "efficiencies" of a university at all. It fits pretty well with a liberal arts education, which even our B.S. students have time for. But it does call for some thinking ahead, planning to take courses from across campus that aren't already on some department's list of requirements. A good advisor can help with that.

I'm guessing that computer science students and "creatives" are not the only ones who will benefit from seeking a multi-disciplinary education these days. Davies is right. All university graduates will live in a multi-disciplinary world. It's okay for them (and their parents) to be thinking about careers when they are in school. But they should prepare for a world in which general knowledge and competencies buoy up their disciplinary knowledge and help them adapt over time.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

September 11, 2015 3:55 PM

Search, Abstractions, and Big Epistemological Questions

Andy Soltis is an American grandmaster who writes a monthly column for Chess Life called "Chess to Enjoy". He has also written several good books, both recreational and educational. In his August 2015 column, Soltis talks about a couple of odd ways in which computers interact with humans in the chess world, ways that raise bigger questions about teaching and the nature of knowledge.

As most people know, computer programs -- even commodity programs one can buy at the store -- now play chess better than the best human players. Less than twenty years ago, Deep Blue first defeated world champion Garry Kasparov in a single game. A year later, Deep Blue defeated Kasparov in a closely contested six-game match. By 2005, computers were crushing Top Ten players with regularity. These days, world champion Magnus Carlsen is no match for his chess computer.

a position in which humans see the win, but computers don't

Yet there are still moments where humans shine through. Soltis opens with a story in which two GMs were playing a game the computers thought Black was winning, when suddenly Black resigned. Surprised journalists asked the winner, GM Vassily Ivanchuk, what had happened. It was easy, he said: it only looked like Black was winning. Well beyond the computers' search limits, it was White that had a textbook win.

How could the human players see this? Were they searching deeper than the computers? No. They understood the position at a higher level, using abstractions such as a king "being in the square" of a passed pawn and two passed pawns "splitting" a king like a pair of "pants". (We chessplayers are an odd lot.)

When you can define 'flexibility' in 12 bits,
it will go into the program.

Attempts to program computers to play chess using such abstract ideas did not work all that well. Concepts like king safety and piece activity proved difficult to implement in code, but eventually found their way into the programs. More abstract concepts like "flexibility", "initiative", and "harmony" have proven all but impossible to implement. Chess programs got better -- quickly -- when two things happened: (1) programmers began to focus on search, implementing metrics that could be applied rapidly to millions of positions, and (2) computer chips got much, much faster.

Pawn Structure Chess, by Andy Soltis

The result is that chess programs can beat us by seeing farther down the tree of possibilities than we do. They make moves that surprise us, puzzle us, and even offend our sense of beauty: "Fischer or Tal would have played this move; it is much more elegant." But they win, easily -- except when they don't. Then we explain why, using ideas that express an understanding of the game that even the best chessplaying computers don't seem to have.

This points out one of the odd ways computers relate to us in the world of chess. Chess computers crush us all, including grandmasters, using moves we wouldn't make and many of us do not understand. But good chessplayers do understand why moves are good or bad, once they figure it out. As Soltis says:

And we can put the explanation in words. This is why chess teaching is changing in the computer age. A good coach has to be a good translator. His students can get their machine to tell them the best move in any position, but they need words to make sense of it.

Teaching computer science at the university is affected by a similar phenomenon. My students can find code samples on the web to solve any problem they have, but they don't always understand them. This problem existed in the age of the book, too, but the web makes so much material available, often undifferentiated and unexplained, and so, so quickly.

The inverse of computers making good moves we don't understand brings with it another oddity, one that plays to a different side of our egos. When a chess computer loses -- gasp! -- or fails to understand why a human-selected move is better than the moves it recommends, we explain it using words that make sense of the human move. These are, of course, the same words and concepts that fail us most of the time when we are looking for a move to beat the infernal machine. Confirmation bias lives on.

Soltis doesn't stop here, though. He realizes that this strange split raises a deeper question:

Maybe it's one that only philosophers care about, but I'll ask it anyway:

Are concepts like "flexibility" real? Or are they just artificial constructs, created by and suitable only for feeble, carbon-based minds?

(Philosophers are not the only ones who care. I do. But then, the epistemology course I took in grad school remains one of my two favorite courses ever. The second was cognitive psychology.)


We can implement some of our ideas about chess in programs, and those ideas have helped us create machines we can no longer defeat over the board. But maybe some of our concepts are simply fictions, "just so" stories we tell ourselves when we feel the need to understand something we really don't. I don't think so, but the pragmatist in me keeps pushing for better evidence.

Back when I did research in artificial intelligence, I always chafed at the idea of neural networks. They seemed to be a fine model of how our brains worked at the lowest level, but the results they gave did not satisfy me. I couldn't ask them "why?" and receive an answer at the conceptual level at which we humans seem to live. I could not have a conversation with them in words that helped me understand their solutions, or their failures.

Now we live in a world of "deep learning", in which Google Translate can do a dandy job of translating a foreign phrase for me but never tell me why it is right, or explain the subtleties of choosing one word instead of another. Add more data, and it translates even better. But I still want the sort of explanation that Ivanchuk gave about his win or the sort of story Soltis can tell about why a computer program only drew a game because it saddled itself with inflexible pawn structure.

Perhaps we have reached the limits of my rationality. More likely, though, is that we will keep pushing forward, bringing more human concepts and abstractions within the bounds of what programs can represent, do, and say. Researchers like Douglas Hofstadter continue the search, and I'm glad. There are still plenty of important questions to ask about the nature of knowledge, and computer science is right in the middle of asking and answering them.


IMAGE 1. The critical position in Ivanchuk-Jobava, Wijk aan Zee 2015, the game to which Soltis refers in his story. Source: Chess Life, August 2015, Page 17.

IMAGE 2. The cover of Andy Soltis's classic Pawn Structure Chess. Source: the book's page at

IMAGE 3. A bust of Aristotle, who confronted Plato's ideas about the nature of ideals. Source: Classical Wisdom Weekly.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

August 25, 2015 1:57 PM

The Art of Not Reading

The beginning of a new semester brings with it a crush of new things to read, write, and do, which means it's a good time to remember this advice from Arthur Schopenhauer:

Hence, in regard to our subject, the art of not reading is highly important. This consists in not taking a book into one's hand merely because it interests the great public at the time -- such as political or religious pamphlets, novels, poetry, and the like, which make a noise and reach perhaps several editions in their first and last years of existence. Remember rather that the man who writes for fools always finds a large public: and only read for a limited and definite time exclusively the works of great minds, those who surpass other men of all times and countries, and whom the voice of fame points to as such. These alone really educate and instruct.

"The man who writes for fools always finds a large public." You do not have to be part of it. Time is limited. Read something that matters.

The good news for me is that there is a lot of writing about compilers by great minds. This is, of course, also the bad news. Part of my job is to help my students navigate the preponderance of worthwhile readings.

Reading in my role as department head is an altogether different matter...


The passage above is from On Books and Reading, which is available via Project Gutenberg, a wonderful source of many great works.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

August 23, 2015 10:12 AM

Science Students Should Learn How to Program, and Do Research

Physicist, science blogger, and pop science author Chad Orzel offered some advice for prospective science students in a post on his Forbes blog last week. Among other things, he suggests that science students learn to program. Orzel is among many physics profs who integrate computer simulations into their introductory courses, using the Matter and Interactions curriculum (which you may recall reading about here in a post from 2007).

I like the way Orzel explains the approach to his students:

When we start doing programming, I tell students that this matters because there are only about a dozen problems in physics that you can readily solve exactly with pencil and paper, and many of them are not that interesting. And that goes double, maybe triple for engineering, where you can't get away with the simplifying spherical-cow approximations we're so fond of in physics. Any really interesting problem in any technical field is going to require some numerical simulation, and the sooner you learn to do that, the better.

This advice complements Astrachan's Law and its variants, which assert that we should not ask students to write a program if they can do the task by hand. Conversely, if they can't solve their problems by hand, then they should get comfortable writing programs that can. (Actually, that's the contrapositive of Astrachan, but "contrapositively" doesn't sound as good.) Programming is a medium for scientists, just as math is, and it becomes more important as they try to solve more challenging problems.

Orzel and Astrachan both know that the best way to learn to program is to have a problem you need a computer to solve. Curricula such as Matter and Interactions draw on this motivation and integrate computing directly into science courses. This is good news for us in computer science. Some of the students who learn how to program in their science courses find that they like it and want to learn more. We have just the courses they need to go deeper.

I concur with all five of Orzel's suggestions for prospective science students. They apply as well to computer science students as to those interested in the physical sciences. When I meet with prospective CS students and their families, I emphasize especially that students should get involved in research. Here is Orzel's take:

While you might think you love science based on your experience in classes, classwork is a pale imitation of actual science. One of my colleagues at Williams used a phrase that I love, and quote all the time, saying that "the hardest thing to teach new research students is that this is not a three-hour lab."

CS students can get involved in empirical research, but they also have the ability to write their own programs to explore their own ideas and interests. The world of open source software enables them to engage the discipline in ways that preceding generations could only have dreamed of. By doing empirical CS research with a professor or working on substantial programs that have users other than the creators, students can find out what computer science is really about -- and find out what they want to devote their lives to.

As Orzel points out, this is one of the ways in which small colleges are great for science students: undergrads can more readily become involved in research with their professors. This advantage extends to smaller public universities, too. In the past year, we have had undergrads do some challenging work on bioinformatics algorithms, motion virtual manipulatives, and system security. These students are having a qualitatively different learning experience than students who are only taking courses, and it is an experience that is open to all undergrad students in CS and the other sciences here.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 19, 2015 4:07 PM

Working Too Much Means Never Having to Say "No"

Among the reasons David Heinemeier Hansson gives in his advice to Fire the Workaholics is that working too much is a sign of bad judgment:

If all you do is work, your value judgements are unlikely to be sound. Making good calls on "is it worth it?" is absolutely critical to great work. Missing out on life in general to put more hours in at the office screams "misguided values".

I agree, in two ways. First, as DHH says, working too much is itself a general indicator that your judgment is out of whack. Second is the more specific case:

For workaholics, doing more work always looks like a reasonable option. As a result, when you are trying to decide, "Should I make this or not?", you never have to choose not to make the something in question -- even when not making it is the right thing to do. That sort of indifferent decision making can be death in any creative endeavor.

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

August 04, 2015 1:00 PM

Concrete, Then Abstract

One of the things that ten years teaching the same topic has taught Daniel Lemire is that students generally learn more effectively when they learn practical skills first and only then confront the underlying theory:

Though I am probably biased, I find that it is a lot harder to take students from a theoretical understanding to a practical one... than to take someone with practical skills and teach him the theory. My instinct is that most people can more easily acquire an in-depth practical knowledge through practice (since the content is relevant) and they then can build on this knowledge to acquire the theory.

He summarizes the lesson he learned as:

A good example, well understood, is worth a hundred theorems.

My years of teaching have taught me similar lessons. I described a related idea in Examples First, Names Last: showing students examples of an idea before giving it a name.

Lemire's experience teaching XML and my experience teaching a number of topics, including the object-oriented programming example in that blog post, are specific examples of a pattern I usually call Concrete, Then Abstract. I have found this to be an effective strategy in my teaching and writing. I may have picked up the name from Ralph Johnson at ChiliPLoP 2003, where we were part of a hot topic group sketching programming patterns for beginning programmers. Ralph is a big proponent of showing concrete examples before introducing abstract ideas. You can see that in just about every pattern, paper, and book he has written.

My favorite example of "Concrete, Then Abstract" this week is in an old blog entry by Scott Vokes, Making Diagrams with Graphviz. I recently came back to an idea I've had on hold for a while: using Graphviz to generate a diagram showing all of my department's courses and prerequisites. Whenever I return to Graphviz after time away, I bypass its documentation for a while and pull up instead a cached link to Scott's short introduction. I immediately scroll down to this sample program written in Graphviz's language, DOT:

an example program in Graphviz's DOT language

... and the corresponding diagram produced by Graphviz:

an example diagram produced by Graphviz

This example makes me happy and gets me productive quickly. It demonstrates an assortment of the possibilities available in DOT, including several specific attributes, and shows how they are rendered by Graphviz. With this example as a starting point, I can experiment with variations of my own. If I ever want or need more, I dig deeper and review the grammar of DOT in more detail. By that time, I have a pretty good practical understanding of how the language works, which makes remembering how the grammar works easier.
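The prerequisite-chart project mentioned above follows the same pattern: start from a tiny concrete sketch and grow it. Here is one such sketch in Python (the function and the course names are my own invention, not from Vokes's post), generating DOT text that Graphviz's dot command can render:

```python
def prereq_graph(prereqs):
    """Emit a DOT digraph from a list of (course, prerequisite) pairs."""
    lines = ["digraph prereqs {", "    rankdir=LR;"]
    for course, prereq in prereqs:
        # one edge per prerequisite relationship, pointing at the later course
        lines.append('    "{}" -> "{}";'.format(prereq, course))
    lines.append("}")
    return "\n".join(lines)

dot_text = prereq_graph([("Data Structures", "Intro to Computing"),
                         ("Compilers", "Data Structures")])
print(dot_text)
```

Saving the output to a file and running it through dot (for example, `dot -Tpng -o prereqs.png`) produces the diagram; node and edge attributes can then be layered on, exactly as in Vokes's sample program.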

Sometimes, the abstract idea to learn, or re-learn, is a context-free grammar. Sometimes, it's a rule for solving a class of problems or a design pattern. And sometimes, it's a theorem or a theory. In all these cases, examples provide hooks that help us learn an abstract idea that is initially hard for us to hold in our heads.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

July 29, 2015 2:10 PM

How Do You Know If It Is Good? You Don't.

In the Paris Review's Garrison Keillor, The Art of Humor No. 2, Keillor thinks back to his decision to become a writer, which left him feeling uncertain about himself:

Someone once asked John Berryman, How do you know if something you've written is good? And John Berryman said, You don't. You never know, and if you need to know then you don't want to be a writer.

This doesn't mean that you don't care about getting better. It means that you aren't doing it to please someone else, or at least that your doing it is not predicated on what someone else thinks. You are doing it because that's what you think about. It means that you keep writing, whether it's good or not. That's how you get better.

It's always fun to watch our students wrestle with this sort of uncertainty and come out on the other side of the darkness. Last fall, I taught first-semester freshmen who were just beginning to find out if they wanted to be programmers or computer scientists, asking questions and learning a lot about themselves. This fall, I'm teaching our senior project course, with students who are nearing the end of their time at the university. Many of them think a lot about programming and programming languages, and they will drive the course with their questions and intensity. As a teacher, I enjoy both ends of the spectrum.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 27, 2015 2:23 PM

The Flip Side to "Programming for All"

a thin volume of William Blake

We all hear the common refrain these days that more people should learn to program, not just CS majors. I agree. If you know how to program, you can make things. Even if you don't write many programs yourself, you are better prepared to talk to the programmers who make things for you. And even if you don't need to talk to programmers, you have expanded your mind a bit to a way of thinking that is changing the world we live in.

But there are two sides to this equation, as Chris Crawford laments in his essay, Fundamentals of Interactivity:

Why is it that our entertainment software has such primitive algorithms in it? The answer lies in the people creating them. The majority are programmers. Programmers aren't really idea people; they're technical people. Yes, they use their brains a great deal in their jobs. But they don't live in the world of ideas. Scan a programmer's bookshelf and you'll find mostly technical manuals plus a handful of science fiction novels. That's about the extent of their reading habits. Ask a programmer about Rabelais, Vivaldi, Boethius, Mendel, Voltaire, Churchill, or Van Gogh, and you'll draw a blank. Gene pools? Grimm's Law? Gresham's Law? Negentropy? Fluxions? The mind-body problem? Most programmers cannot be troubled with such trivia. So how can we expect them to have interesting ideas to put into their algorithms? The result is unsurprising: the algorithms in most entertainment products are boring, predictable, uninformed, and pedestrian. They're about as interesting in conversation as the programmers themselves.

We do have some idea people working on interactive entertainment; more of them show up in multimedia than in games. Unfortunately, most of the idea people can't program. They refuse to learn the technology well enough to express themselves in the language of the medium. I don't understand this cruel joke that Fate has played upon the industry: programmers have no ideas and idea people can't program. Arg!

My office bookshelf occasionally elicits a comment or two from first-time visitors, because even here at work I have a complete works of Shakespeare, a thin volume of William Blake (I love me some Blake!), several philosophy books, and "The Britannica Book of Usage". I really should have some Voltaire here, too. I do cover one of Crawford's bases: a recent blog entry made a software analogy to Gresham's Law.

In general, I think you're more likely to find a computer scientist who knows some literature than a literary professional who knows much CS. That's partly an artifact of our school system and partly a result of the historically wider reach of literature and the humanities. It's fun to run into a colleague from across campus who has read deeply in some area of science or math, but rare.

However, we are all prone to fall into the chasm of our own specialties and miss out on the well-roundedness that makes us better at whatever specialty we practice. That's one reason that, when high school students and their parents ask me what students should take to prepare for a CS major, I tell them: four years of all the major subjects, including English, math, science, social science, and the arts; plus whatever else interests them, because that's often where they will learn the most. All of these topics help students to become better computer scientists, and better people.

And, not surprisingly, better game developers. I agree with Crawford that more programmers should learn enough other stuff to be idea people, too. Even if they don't make games.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 29, 2015 1:58 PM

Bridging the Gap Between Learning and Doing

a sketch of bridging the gap

I recently learned about the work of Amelia McNamara via this paper published as Research Memo M-2014-002 by the Viewpoints Research Institute. McNamara is attacking an important problem: the gap between programming tools for beginners and programming tools for practitioners. In Future of Statistical Programming, she writes:

The basic idea is that there's a gap between the tools we use for teaching/learning statistics, and the tools we use for doing statistics. Worse than that, there's no trajectory to make the connection between the tools for learning statistics and the tools for doing statistics. I think that learners of statistics should also be doers of statistics. So, a tool for statistical programming should be able to step learners from learning statistics and statistical programming to truly doing data analysis.

"Learners of statistics should also be doers of statistics." -- yes, indeed. We see the same gap in computer science. People who are learning to program are programmers. They are just working at a different level of abstraction and complexity. It's always a bit awkward, and often misleading, when we give novice programmers a different set of tools than we give professionals. Then we face a new learning barrier when we ask them to move up to professional tools.

That doesn't mean that we should turn students loose unprotected in the wilds of C++, but it does require that we have a pedagogically sound trajectory for making the connection between novice languages and tools and those used by more advanced programmers.

It also doesn't mean that we can simply choose a professional language that is in some ways suitable for beginners, such as Python, and not think any more about the gap. My recent experience reminds me that there is still a lot of complexity to help our students deal with.

McNamara's Ph.D. dissertation explored some of the ways to bridge this gap in the realm of statistics. It starts from the position that the gap should not exist and suggests ways to bridge it, via both better curricula and better tools.

Whenever I experience this gap in my teaching or see researchers trying to make it go away, I think back to Alan Kay's early vision for Smalltalk. One of the central tenets of the Smalltalk agenda was to create a language flexible and rich enough that it could accompany the beginner as he or she grew in knowledge and skill, opening up to a new level each time the learner was ready for something more powerful. Just as a kindergartener learns the same English language used by Shakespeare and Joyce, a beginning programmer might learn the same language as Knuth and Steele, one that opens up to a new level each time the learner is ready.

We in CS haven't done an especially good job at this over the years. Matthias Felleisen and the How to Design Programs crew have made perhaps the most successful effort thus far. (See *SL, Not Racket for a short note on the idea.) But this project has not made a lot of headway yet in CS education. Perhaps projects such as McNamara's can help make inroads for domain-specific programmers. Alan Kay may harbor a similar hope; he served as a member of McNamara's Ph.D. committee.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 16, 2015 3:17 PM

Dr. Seuss on Research

Reading about the unusual ideas in TempleOS reminded me of a piece of advice I received from the great philosopher of epistemology, Dr. Seuss:

If you want to get eggs
you can't buy at a store,
You have to do things
never thought of before.

As Peter T. Hooper learned in Scrambled Eggs Super, discovering or creating something new requires that we think unusual, or even outrageous, thoughts.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 09, 2015 2:48 PM

I'm Behind on Blogging About My Courses...

... so much so, that I may never catch up. The last year and a half have been crazy, and I simply have not set aside enough time to blog. A big part of the time crunch was teaching three heavy preps in 2014: algorithms, agile software development, and our intro course. It is fitting, then, that blogging about my courses has suffered most of all -- even though, in the moment, I often have plenty to say. Offhand, I can think of several posts for which I once had big plans and for which I still have drafts or outlines sitting in my ideas/ folder:

  • readers' thoughts on teaching algorithms in 2014, along with changes I made to my course. Short version: The old canon still covers most of the important bases.
  • reflections on teaching agile dev again after four years. Short version: The best learning still happens in the trenches working with the students, who occasionally perplex me and often amaze me.
  • reflections on teaching Python in the intro course for the first time. Short version: On balance, there are many positives, but wow, there is a lot of language there, and way too many resources.
  • a lament on teaching programming languages principles when the students don't seem to connect with the material. Surprise ending: Some students enjoyed the course more than I realized.

Thoughts on teaching Python stand out as especially trenchant even many months later. The intro course is so important, because it creates habits and mindsets in students that often long outlive the course. Teaching a large, powerful, popular programming language to beginners in the era of Google, Bing, and DuckDuckGo is a Sisyphean task. No matter how we try to guide the students' introduction to language features, the Almighty Search Engine sits ever at the ready, delivering size and complexity when they really need simple answers. Maybe we need language levels à la the HtDP folks.

Alas, my backlog is so deep that I doubt I will ever have time to cover much of it. Life goes on, and new ideas pop up every day. Perhaps I can make time for the posts outlined above.

Right now, my excitement comes from the prospect of teaching my compilers course again for the first time in two years. The standard material still provides a solid foundation for students who are heading off into the world of software development. But in the time since I last taught the course, some neat things have happened in the compiler world that will make the course better, if only by putting the old stuff into a more modern context. Consider announcements just this week about Swift, in particular that the source code is being open-sourced and the run-time ported to Linux. The moment these two things happen, the language instantly becomes of greater interest to more of my students. Its openness also makes it more suitable as content for a university course.

So, there will be plenty to blog about, even if I leave my backlog untouched. That's a good thing.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 22, 2015 1:58 PM

When It Comes to Learning, Motivation and Study Habits Trump Technology

A lot of people have been talking about Kentaro Toyama's Why Technology Will Never Fix Education, which appeared in the Chronicle of Higher Education earlier this week. Here is the money paragraph:

The real obstacle in education remains student motivation. Especially in an age of informational abundance, getting access to knowledge isn't the bottleneck, mustering the will to master it is. And there, for good or ill, the main carrot of a college education is the certified degree and transcript, and the main stick is social pressure. Most students are seeking credentials that graduate schools and employers will take seriously and an environment in which they're prodded to do the work. But neither of these things is cheaply available online.

My wife just completed the second of two long-term substitute teaching assignments this year in a local elementary school, so we have been discussing the daily challenges that teachers face. The combination of student motivation and support at home accounts for most of the variation in how well students perform and how well any given class operates. I see a similar pattern at the university. By the time students reach me, the long-term effects of strong or weak support at home have crystallized into study habits and skills. The combination of student motivation and study skills accounts for most of the variation I see in whether students succeed or struggle in their university courses.

This all reminds me of a short passage from Tyler Cowen's book, Average Is Over:

The more information that's out there, the greater the returns to just being willing to sit down and apply yourself. Information isn't what's scarce; it's the willingness to do something with it.

The easy availability of information made possible by technology places a higher premium on the ability of students to sit down and work hard, and the willingness to do so. We can fool ourselves into thinking we know more than we do when we look things up quickly, but many students can just as easily access the same information.

We have found ways to use technology to make information easily available, but we haven't found a way to make motivation an abundant resource. Motivation has to come from within. So do the skills needed to use the information. We can at least help students develop study habits and skills through school and family life, though these are best learned early in life. It is hard for students to change 15-20 years of bad habits after they get to college.

The irony for young people is that, while they live in an era of increasingly available information, the onus rests more than ever on what they do. That is both good news and bad.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 13, 2015 12:12 PM

The Big Picture

I just turned in my grades. For the most part, this is boring paperwork. Neither students nor faculty really like grades; they are something we all have to do as a part of the system. A lot of people would like to change the system and eliminate grades, but every alternative has its own weaknesses. So we all muddle along.

But there is a bigger picture, one which Matt Reed expresses eloquently:

Tolstoy once claimed that there are really only two stories, and we keep telling each of them over and over again: a stranger comes to town, and a hero goes on a quest. In higher education, we live those two stories continuously. Every semester, a new crop of strangers come to town. And every semester, we set a new group of heroes off on their respective quests.

It's May, so we see a new group of young people set off on their own quests. In a few months, we will welcome a new group of strangers. In between are the students who are in the middle of their own "stranger comes to town" story, who will return to us in the fall a little different yet much the same.

That's the big picture.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 09, 2015 9:28 AM

A Few Thoughts on Graduation Day

Today is graduation day for the Class of 2015 at my university. CS students head out into the world, most with a job in hand or nearly so, ready to apply their hard-earned knowledge and skills to all variety of problems. It's an exciting time for them.

This week also brought two other events that have me thinking about the world in which my students will live and the ways in which we have prepared them. First, on Thursday, the Technology Association of Iowa organized a #TechTownHall on campus, where the discussion centered on creating and retaining a pool of educated people to participate in, and help grow, the local tech sector. I'm a little concerned that the TAI blog says that "A major topic was curriculum and preparing students to provide immediate value to technology employers upon graduation." That's not what universities do best. But then, that is often what employers want and need.

Second, over the last two mornings, I read James Fallows's classic The Case Against Credentialism, from the archives of The Atlantic. Fallows gives a detailed account of the "professionalization" of many lines of work in the US and the role that credentials, most prominently university degrees, have played in the movement. He concludes that our current approach is biased heavily toward evaluating the "inputs" to the system, such as early success in school and other demonstrations of talent while young, rather than assessing the outputs, namely, how well people actually perform after earning their credentials.

Two passages toward the end stood out for me. In one, Fallows wonders if our professionalized society creates the wrong kind of incentives for young people:

An entrepreneurial society is like a game of draw poker; you take a lot of chances, because you're rarely dealt a pat hand and you never know exactly what you have to beat. A professionalized society is more like blackjack, and getting a degree is like being dealt nineteen. You could try for more, but why?

Keep in mind that this article appeared in 1985. Entrepreneurship has taken a much bigger share of the public conversation since then, especially in the tech world. Still, most students graduating from college these days are likely thinking of ways to convert their nineteens into steady careers, not ways to risk it all on the next Amazon or Über.

Then this quote from "Steven Ballmer, a twenty-nine-year-old vice-president of Microsoft", on how the company looked for new employees:

We go to colleges not so much because we give a damn about the credential but because it's hard to find other places where you have large concentrations of smart people and somebody will arrange the interviews for you. But we also have a lot of walk-on talent. We're looking for programming talent, and the degree is in no way, shape, or form very important. We ask them to send us a program they've written that they're proud of. One of our superstars here is a guy who literally walked in off the street. We talked him out of going to college and he's been here ever since.

Who would have guessed in 1985 the visibility and impact that Ballmer would have over the next twenty years? Microsoft has since evolved from the entrepreneurial upstart to the staid behemoth, and now is trying to reposition itself as an important player in the new world of start-ups and mobile technology.

Attentive readers of this blog may recall that I fantasize occasionally about throwing off the shackles of the modern university, which grow more restrictive every year as the university takes on more of the attributes of corporate and government bureaucracy. In one of my fantasies, I organize a new kind of preparatory school for prospective software developers, one with a more modern view of learning to program but also an attention to developing the whole person. That might not satisfy corporate America's need for credentials, but it may well prepare students better for a world that needs poker players as much as it needs blackjack players. But where would the students come from?

So, on a cloudy graduation day, I think about Fallows's suggestion that more focused vocational training is what many grads need, about the real value of a liberal university education to both students and society, and about how we can best prepare CS students to participate in the world. It is a world that needs not only their technical skills but also their understanding of what tech can and cannot do. As a society, we need them to take a prominent role in civic and political discourse.

One final note on the Fallows piece. It is a bit long, dragging a bit in the middle like a college research paper, but opens and closes strongly. With a little skimming through parts of less interest, it is worth a read. Thanks to Brian Marick for the recommendation.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

April 29, 2015 1:52 PM

Beautiful Sentences: Scientific Data as Program

On the way to making a larger point about the role of software in scientific research, Konrad Hinsen writes these beautiful sentences:

Software is just data that can be interpreted as instructions for a computer. One could conceivably write some interpreter that turns previously generated data into software by executing it.

They express one side of one of the great ideas of computer science, the duality of program and data:

  • Every program is data to some other program, and
  • every set of data is a program to some machine.

This is one of the reasons why it is so important for CS students to study the principles of programming languages, create languages, and build interpreters. These activities help bring this great idea to life and prepare those who understand it to solve problems in ways that are otherwise hard to imagine.
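The duality is easy to see in miniature. A short, self-contained sketch in Python (the names `source` and `square` are my own illustration): a plain string is inert data until we hand it to the language's compile-and-execute machinery, at which point it becomes a program.

```python
# Data and program are two views of the same text.
# 'source' is just a string -- data -- until exec treats it as instructions.
source = "def square(n):\n    return n * n\n"

namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)  # data becomes program

result = namespace["square"](7)
print(result)  # 49
```

Going the other direction, every Python program is itself data to the interpreter that runs it.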

Besides, the duality is a thing of beauty. We don't have to use it as a tool in order to appreciate this sublime truth.

As Hinsen writes, few people outside of computer science (and, sadly, too many within CS) appreciate "the particular status of software as both an information carrier and a tool". The same might be said for our appreciation of data, and the role that language plays in bridging the gap between the two.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 26, 2015 9:55 AM

Yesterday's Questions Can Have Different Answers Today

I wrote on Twitter Thursday [ 1 | 2 ] that I end up modifying my lecture notes every semester, no matter how well done they were the last time I taught the course. From one semester to the next, I find that I am more likely to change the introductions, transitions, and conclusions of a session than the body. The intros, transitions, and conclusions help to situate the material in a given place and time: the context of this semester and this set of students. The content, once refined, tends to stabilize, though occasionally I feel a need to present even it in a different way, to fit the current semester.

Novelist Italo Calvino knew this feeling as well, when he was preparing to be interviewed:

Rarely does an interviewer ask questions you did not expect. I have given a lot of interviews and I have concluded that the questions always look alike. I could always give the same answers. But I believe I have to change my answers because with each interview something has changed either inside myself or in the world. An answer that was right the first time may not be right again the second.

This echoes my experience preparing for lecture. The answer that was right the last time does not seem right again this time. Sometimes, I have changed. With any luck, I have learned new things since the last time I taught the course, and that makes for a better story. Sometimes, the world has changed: a new programming language such as Clojure or Scala has burst onto the scene, or a new trend in industry such as mobile app development has made a different set of issues relevant to the course. I need to tell a different story that acknowledges -- and takes advantage of -- these changes.

Something else always changes for a teacher, too: the students. It's certainly true the students in the class are different every time I teach a course. But sometimes, the group is so different from past groups that the old examples, stories, and answers just don't seem to work. Such has been the case for me this semester. I've had to work quite a bit to understand how my students think and incorporate that into my class sessions and homework assignments. This is part of the fun and challenge of being a teacher.

We have to be careful not to take this analogy too far. Teaching computer science is different from an author giving an interview about his or her life. For one thing, there is a more formal sense of objective truth in the content of, say, a programming language course. An object is still a closure; a closure is still an object that other code can interact with over time. These answers tend to stay the same over time. But even as a course communicates the same fundamental truths from semester to semester, the stories we need to tell about these truths will change.

Ever the fantastic writer, Calvino saw in his interview experience the shape of a new story, a meta-story of sorts:

This could be the basis of a book. I am given a list of questions, always the same; every chapter would contain the answers I would give at different times. ... The changes would then become the itinerary, the story that the protagonist lives. Perhaps in this way I could discover some truths about myself.

This is one of the things I like about teaching. I often discover truths about myself, and occasionally transform myself.


The passages quoted above come from The Art of Fiction No. 130, Italo Calvino in The Paris Review. It's not the usual Paris Review interview, as Calvino died before the interviewer was done. Instead, it is a pastiche of four different sources. It's a great read nonetheless.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 20, 2015 4:02 PM

"Disjunctive Inference" and Learning to Program

Over the weekend, I read Hypothetical Reasoning and Failures of Disjunctive Inference, a well-sourced article on the problems people have making disjunctive inferences. It made me think about some of the challenges students have learning to program.

Disjunctive inference is reasoning that requires us to consider hypotheticals. A simple example from the article is "the married problem":

Jack is looking at Ann, but Ann is looking at George. Jack is married, but George is not. Is a married person looking at an unmarried person?
  1. Yes.
  2. No.
  3. Cannot be determined.

The answer is yes, of course, which is obvious if we consider the two possible cases for Ann. Most people, though, stop thinking as soon as they realize that the answer hinges on Ann's status. They don't know her status, so they can't know the answer to the question. Even so, most everyone understands the answer as soon as the reasoning is explained to them.
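The case analysis is easy to mechanize. A small Python sketch (the function name and encoding are my own) enumerates Ann's two possible states and checks each hypothetical world:

```python
# The married problem: Jack (married) looks at Ann; Ann looks at George (unmarried).
jack_married, george_married = True, False

def married_looks_at_unmarried(ann_married):
    # Each pair is (looker's status, target's status) for one "looks at" link.
    pairs = [(jack_married, ann_married),      # Jack -> Ann
             (ann_married, george_married)]    # Ann  -> George
    return any(looker and not target for looker, target in pairs)

# The answer holds in BOTH hypothetical worlds, so it is "yes".
print(all(married_looks_at_unmarried(case) for case in (True, False)))  # True
```

If Ann is married, she is looking at unmarried George; if she is not, married Jack is looking at her. Either way, the answer is yes.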

The reasons behind our difficulties handling disjunctive inferences are complex, including both general difficulties we have with hypotheticals and a cognitive bias sometimes called cognitive miserliness: we seek to apply the minimum amount of effort to solving problems and making decisions. This is a reasonable evolutionary bias in many circumstances, but here it is maladaptive.

The article is fascinating and well worth a full read. It points to a number of studies in cognitive psychology that seek to understand how humans behave in the face of disjunctive inferences, and why. It closes with some thoughts on improving disjunctive reasoning ability, though there are no quick fixes.

As I read the article, it occurred to me that learning to program places our students in a near-constant state of hypothetical reasoning and disjunctive inference. Tracing code that contains an if statement asks them to think through alternative paths and alternative outcomes. To understand what is true after the if statement executes is disjunctive inference.
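Here is the kind of tiny fragment I have in mind, a hypothetical beginner-level example of my own. To say what is true after the if statement, a student must merge two cases into one fact:

```python
def clamp(n):
    # Two paths through this code: the branch taken and the branch not taken.
    if n > 100:
        n = 100
    # What is true HERE? Either n was already <= 100, or it was just set
    # to 100. The disjunction collapses to a single fact: n <= 100.
    return n

print(clamp(250), clamp(42))  # 100 42
```

That final step, collapsing "either... or..." into one guaranteed fact, is exactly the reasoning most people skip in "the married problem".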

Something similar may be true for a for loop, which executes once each for multiple values of a counter, and a while loop, which runs an indeterminate number of times. These aren't disjunctive inferences, but they do require students to think hypothetically. I wonder if the trouble many of my intro CS students had last semester learning function calls involved failures of hypothetical reasoning as much as difficulties with generalization.

And think about learning to debug a program.... How much of that process involves hypotheticals and even full-on disjunctive inference? If most people have trouble with this sort of reasoning even on simple tasks, imagine how much harder it must be for young people who are learning a programming language for the first time and trying to reason about programs much more complex than "the married problem".

Thinking explicitly about this flaw in human thinking may help us teachers do a better job helping students to learn. In the short term, we can help them by giving more direct prompts for how to reason. Perhaps we can also help them learn to prompt themselves when faced with certain kinds of problems. In the longer term, we can perhaps help them to develop a process for solving problems that mitigates the bias. This is all about forming useful habits of thought.

If nothing else, reading this article will help me be slower to judge my students' work ethic. What looks like laziness is more likely a manifestation of a natural bias to exert the minimum amount of effort when solving problems. We are all cognitive misers to a certain extent, and that serves us well. But not always when we are writing and debugging programs.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 09, 2015 3:26 PM

Two Factors for Succeeding at Research, or Investing

Think differently, of course. But be humble. These attitudes go hand-in-hand.

To make money in the markets, you have to think independently and be humble. You have to be an independent thinker because you can't make money agreeing with the consensus view, which is already embedded in the price. Yet whenever you're betting against the consensus there's a significant probability you're going to be wrong, so you have to be humble.

This applies equally well to doing research. You can't make substantial progress with the conventional wisdom, because it defines and limits the scope of the solution. So think differently. But when you leave the safety of conventional wisdom, you find yourself working in an immense space of ideas. There is a significant chance that you will be wrong a lot. So be humble.

(The quote is from Learn or Die: Using Science to Build a Leading-Edge Learning Organization by Ray Dalio.)

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 08, 2015 2:14 PM

Teaching and the Transformation of Self

After reading What Wittgenstein Learned from Teaching Elementary School in The Paris Review, I think that, while Wittgenstein seems to have had some ideas for making elementary education better, I probably wouldn't want to have him as my teacher. This passage, though, really stuck with me:

We all struggle to form a self. Great teaching, Wittgenstein reminds us, involves taking this struggle and engaging in it with others; his whole life was one great such struggle. In working with poor children, he wanted to transform himself, and them.

The experience of teaching grade school for six years seems to have changed Wittgenstein and how he worked. In his later work, he famously moved away from the idea that language could only function by picturing objects in the world. There is no direct evidence that working with children was the impetus for this shift, but "his later work is full of references to teaching and children". In particular, Philosophical Investigations begins its investigation of "the essence of language" by discussing how children learn language.

And Wittgenstein is sometimes explicit about the connection; he once said that in considering the meaning of a word, it's helpful to ask, "How would one set about teaching a child to use this word?"

We all know that teaching can change the student, but foremost it changes the teacher. Wittgenstein seems to have understood that this is a part of the universal task of forming one's self.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 24, 2015 3:40 PM

Some Thoughts on How to Teach Programming Better

In How Stephen King Teaches Writing, Jessica Lahey asks Stephen King why we should teach grammar:

Lahey: You write, "One either absorbs the grammatical principles of one's native language in conversation and in reading or one does not." If this is true, why teach grammar in school at all? Why bother to name the parts?

King: When we name the parts, we take away the mystery and turn writing into a problem that can be solved. I used to tell them that if you could put together a model car or assemble a piece of furniture from directions, you could write a sentence. Reading is the key, though. A kid who grows up hearing "It don't matter to me" can only learn doesn't if he/she reads it over and over again.

There are at least three nice ideas in King's answer.

  • It is helpful to beginners when we can turn writing into a problem that can be solved. Making concrete things out of ideas in our head is hard. When we giving students tools and techniques that help them to create basic sentences, paragraphs, and stories, we make the process of creating a bit more concrete and a bit less scary.

  • A first step in this direction is to give names to the things and ideas students need to think about when writing. We don't want students to memorize the names for their own sake; that's a step in the wrong direction. We simply need to have words for talking about the things we need to talk about -- and think about.

  • Reading is, as the old slogan tells us, fundamental. It helps to build knowledge of vocabulary, grammar, usage, and style in a way that the brain absorbs naturally. It creates habits of thought that are hard to undo later.

All of these are true of teaching programmers, too, in their own way.

  • We need ways to demystify the process and give students concrete steps they can take when they encounter a new problem. The design recipe used in the How to Design Programs approach is a great example. Naming recipes and their steps makes them a part of the vocabulary teachers and students can use to make programming a repeatable, reliable process.

  • I've often had great success by giving names to design and implementation patterns, and having those patterns become part of the vocabulary we use to discuss problems and solutions. I have a category for posts about patterns, and a fair number of those relate to teaching beginners. I wish there were more.

  • Finally, while it may not be practical to have students read a hundred programs before writing their first, we cannot underestimate the importance of students reading code in parallel with learning to write code. Reading lots of good examples is a great way for students to absorb ideas about how to write their own code. It also gives them the raw material they need to ask questions. I've long thought that Clancy's and Linn's work on case studies of programming deserves more attention.

Finding ways to integrate design recipes, patterns, and case studies is an area I'd like to explore more in my own teaching.
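The design recipe named above can be sketched concretely. HtDP actually uses Racket-family teaching languages, so this Python transcription is only an illustration of the steps, and the function here (`count_words`) is my own invented example, not one from the book:

```python
# A sketch of the HtDP design recipe, transposed to Python for illustration.
# The named steps: (1) signature, (2) purpose statement, (3) examples,
# (4) a template that follows the data definition, (5) fill in the body.

def count_words(line: str) -> int:                    # step 1: signature
    """Return the number of whitespace-separated words in line."""  # step 2: purpose
    # step 4/5: the body follows the data -- a line is a string of words
    return len(line.split())

# step 3: examples, written before the body and now serving as tests
assert count_words("") == 0
assert count_words("confusion is the sweat of learning") == 6
```

The point is not the tiny function but the vocabulary: "signature", "purpose", "examples", and "template" give teacher and student shared names for each step of the process.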

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 10, 2015 4:45 PM

Learning to Program is a Loser's Game

After a long break from playing chess, I recently played a few games at the local club. Playing a couple of games each of the last two weeks has reminded me that I am very rusty. I've only made two horrible blunders in four games, but I have made many small mistakes, the kind of errors that accumulate over time and make a position hard to defend, even untenable. Having played better in years past, I find these inaccuracies irksome.

Still, I managed to win all four games. As I've watched games at the club, I've noticed that most games are won by the player who makes the second-to-last blunder. Most of the players are novices, and they trade mistakes: one player leaves his queen en prise; later, his opponent launches an underprepared attack that loses a rook; then the first player trades pieces and leaves himself with a terrible pawn structure -- and so on, the players trading weak or bad moves until the position is lost for one of them.

My secret thus far has been one part luck, one part simple strategy: winning by not losing.

This experience reminded me of a paper called The Loser's Game, which in 1975 suggested that it was no longer possible for a fund manager to beat market averages over time because most of the information needed to do well was available to everyone. To outperform the market average, a fund manager has to profit from mistakes made by other managers, sufficiently often and by a sufficient margin to sustain a long-term advantage. Charles Ellis, the author, contrasts this with the bull markets of the 1960s. Then, managers made profits based on the specific winning investments they made; in the future, though, the best a manager could hope for was not to make the mistakes that other investors would profit from. Fund management had transformed from being a Winner's Game to a Loser's Game.

the cover of Extraordinary Tennis for the Ordinary Tennis Player

Ellis drew his inspiration from another world, too. Simon Ramo had pointed out the differences between a Winner's Game and a Loser's Game in Extraordinary Tennis for the Ordinary Tennis Player. Professional tennis players, Ramo said, win based on the positive actions they take: unreturnable shots down the baseline, passing shots out of the reach of a player at the net, service aces, and so on. We duffers try to emulate our heroes and fail... We hit our deep shots just beyond the baseline, our passing shots just wide of the sideline, and our killer serves into the net. It turns out that mediocre players win based on the errors they don't make. They keep the ball in play, and eventually their opponents make a mistake and lose the point.

Ramo saw that tennis pros are playing a Winner's Game, and average players are playing a Loser's Game. These are fundamentally different games, which reward different mindsets and different strategies. Ellis saw the same thing in the investing world, but as part of a structural shift: what had once been a Winner's Game was now a Loser's Game, to the consternation of fund managers whose mindset is finding the stocks that will earn them big returns. The safer play now, Ellis says, is to minimize mistakes. (This is good news for us amateur investors!)

This is the same phenomenon I've been seeing at the chess club recently. The novices there are still playing a Loser's Game, where the greatest reward comes to those who make the fewest and smallest mistakes. That's not very exciting, especially for someone who fancies herself to be Adolf Anderssen or Mikhail Tal in search of an immortal game. The best way to win is to stay alive, making moves that are as sound as possible, and wait for the swashbuckler across the board from you to lose the game.

What does this have to do with learning to program? I think that, in many respects, learning to program is a Loser's Game. Even a seemingly beginner-friendly programming language such as Python has an exacting syntax compared to what beginners are used to. The semantics seem foreign, even opaque. It is easy to make a small mistake that chokes the compiler, which then spews an error message that overwhelms the new programmer. The student struggles to fix the error, only to find another error waiting somewhere else in the code. Or he introduces a new error while eliminating the old one, which makes even debugging seem scary. Over time, this can dishearten even the heartiest beginner.

What is the best way to succeed? As in all Loser's Games, the key is to make fewer mistakes: follow examples closely, pay careful attention to syntactic details, and otherwise not stray too far from what you are reading about and using in class. Another path to success is to make the mistakes smaller and less intimidating: take small steps, test the code frequently, and grow solutions rather than write them all at once. It is no accident that the latter sounds like XP and other agile methods; they help to guard us from the Loser's Game and enable us to make better moves.
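What might "take small steps, test the code frequently, and grow solutions" look like in practice? Here is a minimal Python sketch of the idea; the function and its tests are invented for illustration, not drawn from any particular course:

```python
# Growing a solution in small, tested steps rather than writing it all at once.

# Step 1: handle the simplest case first -- the empty list -- and test it
# immediately, so any mistake is small and easy to find.
def average(numbers):
    if not numbers:          # the guard comes first: dividing by zero is
        return 0.0           # exactly the kind of small mistake that snowballs
    return sum(numbers) / len(numbers)

assert average([]) == 0.0           # quick check after the first step
assert average([2, 4, 6]) == 4.0    # one more small step, one more quick check
```

Each little assertion keeps the next mistake small and local, which is the whole strategy of the Loser's Game: survive long enough to win.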

Just as playing the Loser's Game in tennis or investing calls for a different mindset, so, too does learning to program. Some beginners seem to grok programming quickly and move on to designing and coding brilliantly, but most of us have to settle in for a period of discipline and growth. It may not be exciting to follow examples closely when we want to forge ahead quickly to big ideas, but the alternative is to take big shots and let the compiler win all the battles.

Unlike tennis and Ellis's view of stock investing, programming offers us hope: Nearly all of us can make the transition from the Loser's Game to the Winner's Game. We are not destined to forever play it safe. With practice and time, we can develop the discipline and skills necessary to making bold, winning moves. We just have to be patient and put time and energy into the process of becoming less mistake-prone. By adopting the mindset needed to succeed in a Loser's Game, we can eventually play the Winner's Game.

I'm not too sure about the phrases "Loser's Game" and "Winner's Game", but I think that this analogy can help novice programmers. I'm thinking of ways that I can use it to help my students survive until they can succeed.

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

March 02, 2015 4:14 PM

Knowing When We Don't Understand

Via a circuitous walk of web links, this morning I read an old piece called Two More Things to Unlearn from School, which opens:

I suspect the *most* dangerous habit of thought taught in schools is that even if you don't really understand something, you should parrot it back anyway. One of the most fundamental life skills is realizing when you are confused, and school actively destroys this ability -- teaches students that they "understand" when they can successfully answer questions on an exam, which is very very very far from absorbing the knowledge and making it a part of you. Students learn the habit that eating consists of putting food into mouth; the exams can't test for chewing or swallowing, and so they starve.

Schools don't teach this habit explicitly, but they allow it to develop and grow without check. This is one of the things that makes computer science hard for students. You can only get so far by parroting back answers you don't understand. Eventually, you have to write a program or prove an assertion, and all the memorization of facts in the world can't help you.

That said, though, I think students know very well when they don't understand something. Many of my students struggle with the things they don't understand. But, as Yudkowsky says, they face the time constraints of a course fitting into a fifteen-week window and of one course competing with others for their time. The habit they have developed over the years is to think that, in the end, not understanding is okay, or at least an acceptable outcome of the course. As long as they get the grade they need to move on, they'll have another chance to get it later. And maybe they won't ever need to understand this thing again...

One of the best things we can do for students is to ask them to make things and to discuss with them the things they made, and how they made them. This is a sort of intellectual work that requires more than a merely factual understanding. It also forces them to consider the choices and trade-offs that characterize real knowledge.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 27, 2015 3:37 PM

Bad Habits and Haphazard Design

With an expressive type system for its teaching
languages, HtDP could avoid this problem to some
extent, but adding such rich types would also take
the fun out of programming.

As we approach the midpoint of the semester, Matthias Felleisen's Turing Is Useless strikes a chord in me. My students have spent the last two months learning a little Racket, a little functional programming, and a little about how to write data-driven recursive programs. Yet bad habits learned in their previous courses, or at least unchecked by what they learned there, have made the task harder for many of them than it needed to be.

The essay's title plays off the Church-Turing thesis, which asserts that all programming languages have the same expressive power. This powerful claim is not good news for students who are learning to program, though:

Pragmatically speaking, the thesis is completely useless at best -- because it provides no guideline whatsoever as to how to construct programs -- and misleading at worst -- because it suggests any program is a good program.

With a Turing-universal language, a clever student can find a way to solve any problem with some program. Even uninspired but persistent students can tinker their way to a program that produces the right answers. Unfortunately, they don't understand that the right answers aren't the point; the right program is. Trolling StackOverflow will get them a program, but too often the students don't understand whether it is a good or bad program in their current situation. It just works.

I have not been as faithful to the HtDP approach this semester as I probably should have been, but I share its desire to help students to design programs systematically. We have looked at design patterns that implement specific strategies, not language features. Each strategy focuses on the definition of the data being processed and the definition of the value being produced. This has great value for me as the instructor, because I can usually see right away why a function isn't working the way the student intended: they have strayed from the data as defined by the problem.
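The course uses Racket, but the idea that a function's shape should follow the definition of its data can be sketched in Python as well (the example is mine, not from the course): a list of numbers is either empty or a first number followed by a smaller list, and the function mirrors that definition case for case.

```python
# A data-driven recursive function: its structure mirrors the data definition.
# A list of numbers is either:
#   - empty, or
#   - a first number followed by a (smaller) list of numbers.

def total(numbers):
    if not numbers:                         # case 1: the empty list
        return 0
    return numbers[0] + total(numbers[1:])  # case 2: first + total of the rest

assert total([]) == 0
assert total([1, 2, 3]) == 6
```

When a student's function strays from this shape -- say, by looping over indices the data definition never mentions -- the mismatch is usually where the bug lives.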

This is also of great value to some of my students. They want to learn how to program in a reliable way, and having tools that guide their thinking is more important than finding yet another primitive Racket procedure to try. For others, though, "garage programming" is good enough; they just want to get the job done right now, regardless of which muscles they use. Design is not part of their attitude, and that's a hard habit to break. How use doth breed a habit in a student!

Last semester, I taught intro CS from what Felleisen calls a traditional text. Coupling that experience with my experience so far this semester, I'm thinking a lot these days about how we can help students develop a design-centered attitude at the outset of their undergrad courses. I have several blog entries in draft form about last semester, but one thing that stands out is the extent to which every step in the instruction is driven by the next cool programming construct. Put them all on the table, fiddle around for a while, and you'll make something that works. One conclusion we can draw from the Church-Turing thesis is that this isn't surprising. Unfortunately, odds are any program created this way is not a very good program.


(The sentence near the end that sounds like Shakespeare is. It's from The Two Gentlemen of Verona, with a suitable change in noun.)

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

February 24, 2015 3:03 PM

Why Can't I Use flatten on This Assignment?

The team is in the weight room. Your coach wants you to work on your upper-body strength and so has assigned you a regimen that includes bench presses and exercises for your lats and delts. You are on the bench, struggling.

"Hey, coach, this is pretty hard. Can I use my legs to help lift this weight?"

The coach shakes his head and wonders what he is going to do with you.

Using your legs is precisely not the point. You need to make your other muscles stronger. Stick to the bench.


If athletics isn't your thing, we can tell this story with a piano. Or a pen and paper. Or a chessboard.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

February 21, 2015 11:11 AM

Matthias, Speak to My Students

I love this answer from Matthias Felleisen on the Racket users mailing list today:

The book emphasizes systematic design. You can solve this specific problem with brute force regular-expression matching in a few lines of code. The question is whether you want to learn to solve problems or copy code from mailing lists and StackOverflow without understanding what's really going on.

Students today aren't much different from the students in the good old days. But the tools and information so readily available to them make it a lot easier for them to indulge their baser urges. In the good old days, we had to work hard to get good grades and not understand what we were doing.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 05, 2015 3:57 PM

If You Want to Become a Better Writer...

... write for undergraduates. Why?

Last fall, Steven Pinker took a stab at explaining why academics stink at writing. He hypothesizes that cognitive science and human psychology explain much of the problem. Experts often find it difficult to imagine that others do not know what experts know, which Pinker calls the curse of knowledge. They work around the limitations of short-term memory by packaging ideas into bigger and more abstract units, often called chunking. Finally, they tend to think about the things they understand well in terms of how they use them, not in terms of what they look like, a tendency called functional fixity.

Toward the end of the article, Pinker summarizes:

The curse of knowledge, in combination with chunking and functional fixity, helps make sense of the paradox that classic style is difficult to master. What could be so hard about pretending to open your eyes and hold up your end of a conversation? The reason it's harder than it sounds is that if you are enough of an expert in a topic to have something to say about it, you have probably come to think about it in abstract chunks and functional labels that are now second nature to you but are still unfamiliar to your readers--and you are the last one to realize it.

Most academics aren't trying to write bad prose. They simply don't have enough practice writing good prose.

When Calvin explained to Hobbes, "With a little practice, writing can be an intimidating and impenetrable fog," he got it backward. Fog comes easily to writers; it's the clarity that requires practice. The naïve realism and breezy conversation in classic style are deceptive, an artifice constructed through effort and skill.

Wanting to write better is not sufficient. Exorcising the curse requires writers to learn new skills and to practice. One of the best ways to see if the effort is paying off is to get feedback: show the work to real readers and see if they can follow it.

That's where undergraduates come in. If you want to become a better writer or a better speaker, teach undergraduates regularly. They are about as far removed as you can get from an expert while still having an interest in the topic and some inclination to learn more about it.

When I write lecture notes for my undergrads, I have to eliminate as much jargon as possible. I have to work hard to put topics into the best order for learners, not for colleagues who are also expert in the area. I have to find stories to illuminate ideas, and examples to illustrate them. When I miss the intended mark on any of these attempts, my students will let me know, either through their questions or through their inability to perform as I expected. And then I try again.

My lecture notes are far from perfect, but they are always much better after a few iterations teaching a course than they are the first time I teach it. The weakest parts tend to be for material I'm adding to the course for the first time; the best parts tend to be revisions of existing material. These facts are no surprise to any writer or presenter, of course. Repetition and effort are how we make things better.

Even if you do not consider yourself a teacher by trade, if you want to improve your ability to communicate science, teach undergrads. Write lecture notes and explanations. Present to live students and monitor lab sessions. The students will reward you with vigorous feedback. Besides, they are good people to get to know.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 28, 2015 3:38 PM

The Relationship Between Coding and Literacy

Many people have been discussing Chris Granger's recent essay Coding is not the New Literacy, and most seem to approve of his argument. Reading it brought to my mind this sentence from Alan Kay in VPRI Memo M-2007-007a, The Real Computer Revolution Hasn't Happened Yet:

Literacy is not just being able to read and write, but being able to deal fluently with the kind of ideas that are important enough to write about and discuss.

Literacy requires both the low-level skills of reading and writing and the higher-order capacity for using them on important ideas.

That is one thing that makes me uneasy about Granger's argument. It is true that teaching people only low-level coding skills won't empower them if they don't know how to use them fluently to build models that matter. But neither will teaching them how to build models without giving them access to the programming skills they need to express their ideas beyond what some tool gives them.

Like Granger, though, I am also uneasy about many of the learn-to-code efforts. Teaching people enough Javascript or Ruby to implement a web site out of the box skips past the critical thinking skills that people need to use computation effectively in their world. They may be "productive" in the short term, but they are also likely to hit a ceiling pretty soon. What then? My guess: they become frustrated and stop coding altogether.

the Scratch logo

We sometimes do a better job introducing programming to kids, because we use tools that allow students to build models they care about and can understand. In the VPRI memo, Kay describes experiences teaching elementary school students to use eToys to model physical phenomena. In the end, they learn physics and the key ideas underlying calculus. But they also learn the fundamentals of programming, in an environment that opens up into Squeak, a flavor of Smalltalk.

I've seen teachers introduce students to Scratch in a similar way. Scratch is a drag-and-drop programming environment, but it really is an open-ended and lightweight modeling tool. Students can learn low-level coding skills and higher-level thinking skills in tandem.

That is the key to making Granger's idea work in the best way possible. We need to teach people how to think about and build models in a way that naturally evolves into programming. I am reminded of another quote from Alan Kay that I heard back in the 1990s. He reminded us that kindergarteners learn and use the same language that Shakespeare used. It is possible for their fluency in the language to grow to the point where they can comprehend some of the greatest literature ever created -- and, if they possess some of Shakespeare's genius, to write their own great literature. English starts small for children, and as they grow, it grows with them. We should aspire to do the same thing for programming.

the logo for Eve

Granger reminds us that literacy is really about composition and comprehension. But it doesn't do much good to teach people how to solidify their thoughts so that they can be written if they don't know how to write. You can't teach composition until your students know basic reading and writing.

Maybe we can find a way to teach people how to think in terms of models and how to implement models in programs at the same time, in a language system that grows along with their understanding. Granger's latest project, Eve, may be a step in that direction. There are plenty of steps left for us to take in the direction of languages like Scratch, too.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 19, 2015 2:14 PM

Beginners, Experts, and Possibilities

Last Thursday, John Cook tweeted:

Contrary to the Zen proverb, there may be more possibilities in the expert's mind than in the beginner's.

This summed up nicely one of the themes in my Programming Languages course that afternoon. Some students come into the course knowing essentially only one language, say Python or Ada. Others come knowing several languages well, including their intro language, Java, C, and maybe a language they learned on the job, such as Javascript or Scala.

Which group do you think has a larger view of what a programming language can be? The more knowledgeable, to be sure. This is especially true when their experience includes languages from different styles: procedural, object-oriented, functional, and so on.

Previous knowledge affects expectations. Students coming directly out of their first year courses are more likely to imagine that all languages are similar to what they already know. Nothing in their experience contradicts that idea.

Does this mean that the Zen notion of beginner's mind is wrongheaded? Not at all. I think an important distinction can be made between analysis and synthesis. In a context where we analyze languages, broad experience is more valuable than lack of experience, because we are able to bring to our seeing a wider range of possibilities. That's certainly my experience working with students over the years.

However, in a context where we create languages, broad experience can be an impediment. When we have seen many different languages, it can be difficult to create something that looks much different from the languages we've already seen. Something in our minds seems to pull us toward an existing language that already solves the constraint we are struggling with. Someone else has already solved this problem; their solution is probably best.

This is also my experience working with students over the years. My freshmen will almost always come up with a fresher language design than my seniors. The freshmen don't know much about languages yet, and so their minds are relatively unconstrained. (Fantastically so, sometimes.) The seniors often seem to end up with something that is superficially new but, at its core, thoroughly predictable.

The value of "Zen mind, beginner's mind" also follows a bit from the distinction between expertise and experience. Experts typically reach a level where they solve problems using heuristics. These patterns and shortcuts are efficient, but they also tend to be "compiled" and not all that open to critical examination. We create best when we are able to modify, rearrange, and discard, and that's harder to do when our default mode of thinking is in pre-compiled units.

It should not bother us that useful adages and proverbs contradict one another. The world is complex. As Bokononists say, Busy, busy, busy.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 18, 2015 10:26 AM

The Infinite Horizon

In Mathematics, Live: A Conversation with Laura DeMarco and Amie Wilkinson, Amie Wilkinson recounts the pivotal moment when she knew she wanted to be a mathematician. Insecure about her abilities in mathematics, unsure about what she wanted to do for a career, and with no encouragement, she hadn't applied to grad school. So:

I came back home to Chicago, and I got a job as an actuary. I enjoyed my work, but I started to feel like there was a hole in my existence. There was something missing. I realized that suddenly my universe had become finite. Anything I had to learn for this job, I could learn eventually. I could easily see the limits of this job, and I realized that with math there were so many things I could imagine that I would never know. That's why I wanted to go back and do math. I love that feeling of this infinite horizon.

After having written software for an insurance company during the summers before and after my senior year in college, I knew all too well the "hole in my existence" that Wilkinson talks about, the shrinking universe of many industry jobs. I was deeply interested in the ideas I had found in Gödel, Escher, Bach, and in the idea of creating an intelligent machine. There seemed no room for those ideas in the corporate world I saw.

I'm not sure when the thought of graduate school first occurred to me, though. My family was blue collar, and I didn't have much exposure to academia until I got to Ball State University. Most of my friends went out to get jobs, just like Wilkinson. I recall applying for a few jobs myself, but I never took the job search all that seriously.

At least some of the credit belongs to one of my CS professors, Dr. William Brown. Dr. Brown was an old IBM guy who seemed to know so much about how to make computers do things, from the lowest-level details of IBM System/360 assembly language and JCL up to the software engineering principles needed to write systems software. When I asked him about graduate school, he talked to me about how to select a school and a Ph.D. advisor. He also talked about the strengths and weaknesses of my preparation, and let me know that even though I had some work to do, I would be able to succeed.

These days, I am lucky even to have such conversations with my students.

For Wilkinson, DeMarco and me, academia was a natural next step in our pursuit of the infinite horizon. But I now know that we are fortunate to work in disciplines where a lot of the interesting questions are being asked and answered by people working in "the industry". I watch with admiration as many of my colleagues do amazing things while working for companies large and small. Computer science offers so many opportunities to explore the unknown.

Reading Wilkinson's recollection brought a flood of memories to mind. I'm sure I wasn't alone in smiling at her nod to finite worlds and infinite horizons. We have a lot to be thankful for.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

January 12, 2015 10:26 AM

WTF Problems and Answers for Questions Unasked

Dan Meyer quotes Scott Farrand in WTF Math Problems:

Anything that makes students ask the question that you plan to answer in the lesson is good, because answering questions that haven't been asked is inherently uninteresting.

My challenge this semester: getting students to ask questions about the programming languages they use and how they work. I myself have many questions about languages! My experience teaching our intro course last semester reminded me that what interests me (and textbook authors) doesn't always interest my students.

If you have any WTF? problems for a programming languages course, please share.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 09, 2015 3:40 PM

Computer Science Everywhere, Military Edition

Military Operations Orders are programs that are executed by units. Code re-use and other software engineering principles applied regularly to these.

An alumnus of my department, a CS major-turned-military officer, wrote those lines in an e-mail responding to my recent post, A Little CS Would Help a Lot of College Grads. Contrary to what many people might imagine, he has found what he learned in computer science to be quite useful to him as an Army captain. And he wasn't even a programmer:

One of the biggest skills I had over my peers was organizing information. I wasn't writing code, but I was handling lots of data and designing systems for that data. Organizing information in a way that was easy to present to my superiors was a breeze and having all the supporting data easily accessible came naturally to me.

Skills and principles from software engineering and project development apply to systems other than software. They also provide a vocabulary for talking about ideas that non-programmers encounter every day:

I did introduce my units to the terms border cases, special cases, and layers of abstraction. I cracked a smile every time I heard those terms used in a meeting.

Excel may not be a "real programming language", but knowing the ways in which it is a language can make managers of people and resources more effective at what they do.

For more about how a CS background has been useful to this officer, check out CS Degree to Army Officer, a blog entry that expands on his experiences.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

December 31, 2014 10:15 AM

Reinventing Education by Reinventing Explanation

One of the more important essays I read in 2014 was Michael Nielsen's Reinventing Explanation. In it, Nielsen explores how we might design media that help us explain scientific ideas better than we are able with our existing tools.

... it's worth taking non-traditional media seriously not just as a vehicle for popularization or education, which is how they are often viewed, but as an opportunity for explanations which can be, in important ways, deeper.

This essay struck me deep. Nielsen wants us to consider how we might take what we have learned using non-traditional media to popularize and educate and use it to think about how to explain more deeply. I think that learning how to use non-traditional media to explain more deeply will help us change the way we teach and learn.

In too many cases, new technologies are used merely as substitutes for old technology. The web has led to an explosion of instructional video aimed at all levels of learners. No matter how valuable these videos are, most merely replace reading a textbook or a paper. But computational technology enables us to change the task at hand and even redefine what we do. Alan Kay has been telling this story for decades, pointing us to the work of Ivan Sutherland and many others from the early days of computing.

Nielsen points to Bret Victor as an example of someone trying to develop tools that redefine how we think. As Victor himself says, he is following in the grand tradition of Kay, Sutherland, et al. Victor's An Ill-Advised Personal Note about "Media for Thinking the Unthinkable" is an especially direct telling of his story.

Vi Hart is another. Consider her recent Parable of the Polygons, created with Nicky Case, which explains dynamically how local choices can create systemic bias. This simulation uses computation to help people think differently about an idea they might not understand as viscerally from a traditional explanation. Hart has a long body of work using visualization to explain differently, and the introduction of computing extends the depth of her approach.

Over the last few weeks, I have felt myself being pulled by Nielsen's essay and the example of people such as Victor and Hart to think more about how we might design media that help us to teach and explain scientific ideas more deeply. Reinventing explanation might help us reinvent education in a way that actually matters. I don't have a research agenda yet, but looking again at Victor's work is a start.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 28, 2014 11:12 AM

A Little CS Would Help a Lot of College Grads

I would love to see more CS majors, but not everyone should major in CS. I do think that most university students could benefit from learning a little programming. There are plenty of jobs not only for CS and math grads, but also for other majors who have CS and math skills:

"If you're an anthropology major and you want to get a marketing job, well, guess what? The toughest marketing jobs to fill require SQL skills," Sigelman says. "If you can ... along the peripheries of your academic program accrue some strong quantitative skills, you'll still have the advantage [in the job market]." Likewise, some legal occupations (such as intellectual property law) and maintenance and repair jobs stay open for long periods of time, according to the Brookings report, if they require particular STEM skills.

There is much noise these days about the importance of STEM, both for educated citizens and for jobs, jobs, jobs. STEM isn't an especially cohesive category, though, as the quoted Vox article reminds us, and even when we look just at economic opportunity, it misleads. We don't need more college science graduates from every STEM discipline. We do need more people with the math and CS skills that now pervade the workplace, regardless of discipline. As Kurtzleben says in the article, "... characterizing these skill shortages as a broad STEM crisis is misleading to students, and has distorted the policy debate."

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 27, 2014 8:47 AM

Let's Not Forget: CS 1 Is Hard For Most Students

... software is hard. It's harder than
anything else I've ever had to do.
-- Donald Knuth

As students were leaving my final CS 1 lab session of the semester, I overheard two talking about their future plans. One student mentioned that he was changing his major to actuarial science. I thought, wow, that's a tough major. How is a student who is struggling with basic programming going to succeed there?

When I checked on his grades, though, I found that he was doing fine in my course, about average. I also remembered that he had enjoyed best the programming exercises that computed terms of infinite arithmetic series and other crazy mathematical values that his classmates often found impenetrable. Maybe actuarial science, even with some hard math, will be a good fit for him.

It really shouldn't surprise us that some students try computer science and decide to major in something else, even something that looks hard to most people. Teaching CS 1 again this semester after a long break reminded me just how much we expect from the students in our introductory course:

  • Details. Lots and lots of details. Syntax. Grammar. Vocabulary, both in a programming language and about programming more generally. Tools for writing, editing, compiling, and running programs.

  • Experimentation. Students have to design and execute experiments in order to figure out how language constructs work and to debug the programs they write. Much of what they learn is by trial and error, and most students have not yet developed skills for doing that in a controlled fashion.

  • Design. Students have to decompose problems and combine parts into wholes. They have to name things. They have to connect the names they see with ideas from class, the text, and their own experience.

  • Abstraction. Part of the challenge in design comes from abstraction, but abstract ideas are everywhere in learning about CS and how to program. Variables, choices, loops and recursion, functions and arguments and scope, ... all come not just as concrete forms but also as theoretical notions. These notions can sometimes be connected to the students' experience of the physical world, but the computing ideas are often just different enough to disorient the student. Other CS abstractions are so different as to appear unique.

In a single course, we expect students to perform tasks in all three of these modes, while mastering a heavy load of details. We expect them to learn by deduction, induction, and abduction, covering many abstract ideas and many concrete details. Many disciplines have challenging first courses, but CS 1 requires an unusual breadth of intellectual tools.

Yes, we can improve our students' experience with careful pedagogy. Over the last few decades we've seen many strong efforts. And yes, we can help students through the process with structural support, emotional support, and empathy. In the end, though, we must keep this in mind: CS 1 is going to be a challenge for most students. For many, the rewards will be worth the struggle, but that doesn't mean it won't take work, patience, and persistence along the way -- by both the students and the teachers.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 18, 2014 2:33 PM

Technical Problems Are The Easy Ones

Perhaps amid the daily tribulations of a software project, Steven Baker writes

Oy. A moving goal line, with a skeleton crew, on a shoestring budget. Technical problems are the easy ones.

And here we all sit complaining about monads and Java web frameworks...

My big project this semester has not been developing software but teaching beginners to develop software, in our intro course. There is more to Intro than programming, but for many students the tasks of learning a language and trying to write programs come to dominate most everything else. More on that soon.

Yet even with this different sort of project, I feel much as Baker does. Freshmen have a lot of habits to learn and un-learn, habits that go well beyond how they do Python. My course competes with several others for the students' attention, not to mention with their jobs and their lives outside of school. They come to class with a lifetime of experience and knowledge, as well some surprising gaps in what they know. A few are a little scared by college, and many worry that CS won't be a good fit for them.

The technical problems really are the easy ones.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 02, 2014 2:53 PM

Other People's Best Interests

Yesterday I read:

It's hard for me to figure out people voting against their own self-interests.

I'm not linking to the source, because it wouldn't be fair to single the speaker out, especially when so many other things in the article are spot-on. Besides, I hear many different people express this sentiment from time to time, people of various political backgrounds and cultural experiences. It seems a natural human reaction when things don't turn out the way we think they should.

Here is something I've learned from teaching and from working with teams doing research and writing software:

If you find yourself often thinking that people aren't acting in their own self-interests, maybe you don't know what their interests are.

It certainly may be true that people are not acting in what you think is their own self-interest. But it's rather presumptuous to think that you know other people's best interests better than they do.

Whenever I find myself in this position, I have some work to do. I need to get to know my students, or my colleagues, or my fellow citizens, better. In cases where it's really true, and I have strong reason to think they aren't acting in their own best interest, I have an opportunity to help them learn. This kind of conversation calls for great care, though, because often we are dealing with people's identities and most deeply-held beliefs.

Posted by Eugene Wallingford | Permalink | Categories: General, Managing and Leading, Teaching and Learning

November 23, 2014 8:50 AM

Supply, Demand, and K-12 CS

When I meet with prospective students and their parents, we often end up discussing why most high schools don't teach computer science. I tell them that, when I started as a new prof here, about a quarter of incoming freshmen had taken a year of programming in high school, and many other students had had the opportunity to do so. My colleagues and I figured that this percentage would go way up, so we began to think about how we might structure our first-year courses when most or all students already knew how to program.

However, the percentage of incoming students with programming experience didn't go up. It went way down. These days, about 10% of our freshmen know how to program when they start our intro course. Many of those learned what they know on their own. "What happened?", today's parents ask.

A lot of things happened, including the dot-com bubble, a drop in the supply of available teachers, a narrowing of the high school curriculum in many districts, and the introduction of high-stakes testing. I'm not sure how much each contributed to the change, or whether other factors may have played a bigger role. Whatever the causes, the result is that our intro course still expects no previous programming experience.

Yesterday, I saw a post by a K-12 teacher on the Racket users mailing list that illustrates the powerful pull of economics. He is leaving teaching for the software development industry, though reluctantly. "The thing I will miss the most," he says, "is the enjoyment I get out of seeing youngsters' brains come to life." He also loves seeing them succeed in the careers that knowing how to program makes possible. But in that success lies the seed of his own career change:

Speaking of my students working in the field, I simply grew too tired of hearing about their salaries which, with a couple of years experience, was typically twice what I was earning with 25+ years of experience. Ultimately that just became too much to take.

He notes that college professors probably know the feeling, too. The pull must be much stronger on him and his colleagues, though; college CS professors are generally paid much better than K-12 teachers. A love of teaching can go only so far. At one level, we should probably be surprised that anyone who knows how to program well enough to teach thirteen- or seventeen-year-olds to do it stays in the schools. If not surprised, we should at least be deeply appreciative of the people who do.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

November 20, 2014 3:23 PM

When I Procrastinate, I Write Code

I procrastinated one day with my intro students in mind. This is the bedtime story I told them as a result. Yes, I know that I can write shorter Python code to do this. They are intro students, after all.


Once upon a time, a buddy of mine, Chad, sent out a tweet. Chad is a physics prof, and he was procrastinating. How many people would I need to have in class, he wondered, to have a 50-50 chance that my class roster will contain people whose last names start with every letter of the alphabet?


This is a lot like the old trivia about how we only need to have 23 people in the room to have a 50-50 chance that two people share a birthday. The math for calculating that is straightforward enough, once you know it. But last names are much more unevenly distributed across the alphabet than birthdays are across the days of the year. To do this right, we need to know rough percentages for each letter of the alphabet.
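The birthday calculation mentioned above is standard fare, so here is a quick sketch of it (my own illustration, not from the original post, assuming 365 equally likely birthdays and ignoring leap years):

```python
# Probability that at least two of n people share a birthday.
# Multiply the odds that each successive person misses every
# birthday already taken, then take the complement.
def prob_shared_birthday(n):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(prob_shared_birthday(23))   # just over 0.5, about 0.507
```

Twenty-three people tips the probability just past one half, which is why 23 is the magic number in the trivia. The letters-of-the-alphabet question is harder precisely because last names are not uniformly distributed.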

I can procrastinate, too. So I surfed over to the US Census Bureau, rummaged around for a while, and finally found a page on Frequently Occurring Surnames from the Census 2000. It provides a little summary information and then links to a couple of data files, including a spreadsheet of data on all surnames that occurred at least 100 times in the 2000 census. This should, I figure, cover enough of the US population to give us a reasonable picture of how people's last names are distributed across the alphabet. So I grabbed it.

(We live in a wonderful time. Between open government, open research, and open source projects, we have access to so much cool data!)

The spreadsheet has columns with these headers:

    name,rank,count,prop100k,cum_prop100k,      \
                    pctwhite,pctblack,pctapi,   \
                    ...

The first and third columns are what we want. After thirteen weeks, we know how to compute the percentages we need: Use the running total pattern to count the number of people whose name starts with 'a', 'b', ..., 'z', as well as how many people there are altogether. Then loop through our collection of letter counts and compute the percentages.

Now, how should we represent the data in our program? We need twenty-six counters for the letter counts, and one more for the overall total. We could make twenty-seven unique variables, but then our program would be so-o-o-o-o-o long, and tedious to write. We can do better.

For the letter counts, we might use a list, where slot 0 holds a's count, slot 1 holds b's count, and so on, through slot 25, which holds z's count. But then we would have to translate letters into slots, and back, which would make our code harder to write. It would also make our data harder to inspect directly.

    ----  ----  ----  ...  ----  ----  ----    slots in the list
      0     1     2   ...   23    24    25     indices into the list

The downside of this approach is that lists are indexed by integer values, while we are working with letters. Python has another kind of data structure that solves just this problem, the dictionary. A dictionary maps keys onto values. The keys and values can be of just about any data type. What we want to do is map letters (characters) onto numbers of people (integers):

    ----  ----  ----  ...  ----  ----  ----    slots in the dictionary
     'a'   'b'   'c'  ...   'x'   'y'   'z'    indices into the dictionary

With this new tool in hand, we are ready to solve our problem. First, we build a dictionary of counters, initialized to 0.

    count_all_names = 0
    total_names = {}
    for letter in 'abcdefghijklmnopqrstuvwxyz':
        total_names[letter] = 0

(Note two bits of syntax here. We use {} for dictionary literals, and we use the familiar [] for accessing entries in the dictionary.)

Next, we loop through the file and update the running total for the corresponding letter, as well as the counter of all names.

    source = open('app_c.csv', 'r')
    for entry in source:
        field  = entry.split(',')        # split the line
        name   = field[0].lower()        # pull out lowercase name
        letter = name[0]                 # grab its first character
        count  = int( field[2] )         # pull out number of people
        total_names[letter] += count     # update letter counter
        count_all_names     += count     # update global counter

Finally, we print the letter → count pairs.

    for (letter, count_for_letter) in total_names.items():
        print(letter, '->', count_for_letter/count_all_names)

(Note the items method for dictionaries. It returns a collection of key/value tuples. Recall that tuples are simply immutable lists.)

We have converted the data file into the percentages we need.

    q -> 0.002206197888442366
    c -> 0.07694634659082318
    h -> 0.0726864447688946
    f -> 0.03450702533438715
    x -> 0.0002412718532764804
    k -> 0.03294646311104032

(The entries are not printed in alphabetical order. Can you find out why?)

I dumped the output to a text file and used Unix's built-in sort to create my final result. I tweeted Chad: "Here are your percentages. You do the math."
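Of course, a programmer who is still procrastinating could do Chad's math by simulation, too. Here is a sketch of that idea (my own addition, not part of the original exchange): `prob_full_coverage` and the uniform weights below are placeholders for illustration; the real input would be the census percentages computed above.

```python
import random

def prob_full_coverage(weights, class_size, trials=10000):
    """Estimate the chance that a class of class_size students
       covers all 26 starting letters, given per-letter weights."""
    letters = list(weights.keys())
    probs   = list(weights.values())
    hits = 0
    for _ in range(trials):
        # draw a random roster of first letters, weighted by frequency
        roster = random.choices(letters, weights=probs, k=class_size)
        if len(set(roster)) == 26:
            hits += 1
    return hits / trials

# Uniform weights are a stand-in only; real surname frequencies are
# far from uniform, which pushes the required class size much higher.
uniform = {letter: 1 for letter in 'abcdefghijklmnopqrstuvwxyz'}
print(prob_full_coverage(uniform, 100))
```

To answer Chad's question, one would substitute the census-derived percentages for the uniform weights and search for the class size where the estimate crosses 0.5.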

Hey, I'm a programmer. When I procrastinate, I write code.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

November 17, 2014 3:34 PM

Empathy for Teachers Doing Research

I appreciate that there are different kinds of university, with different foci. Still, I couldn't help but laugh while reading Doug McKee's Balancing Research and Teaching at an Elite University. After acknowledging that he would "like to see the bar for teaching raised at least a little bit", McKee reminds us why they can't raise it too far: Faculty who spend too much time on their teaching will produce lower-quality research, and this will cause the prestige of the university to suffer. But students should know what's going on:

I'd also like to see more honesty about the primacy of research in the promotion process. This would make the expectations of the undergraduates more reasonable and give them more empathy for the faculty (especially the junior faculty).

Reading this, I laughed out loud, as my first thought was, "You're kidding, right?" I'm all for more empathy for professors; teaching is a tough job. But I doubt "My teaching isn't very good because I spend all my time on research, and the university is happy with that." is going to elicit much sympathy.

Then again, I may be seeing the world from the perspective of a different sort of university. Many students are at elite universities in large part for the prestige of the school. Faculty research is what maintains the university's prestige. So maybe those students view subpar classroom teaching simply as part of the cost of doing business.

Then there was this:

Undergraduates may not always get a great teacher in the classroom, but they are always learning from someone at the cutting edge of their discipline, and there is no substitute for that.

Actually, there is. It's called good teaching. Being talked at by an all-research, no-teaching Nobel laureate for fifty minutes a day, three days a week, may score high on the prestige meter, but it won't teach you how to think. Then again, with high enough admission standards, perhaps the students can take care of themselves.

I don't mean to sound harsh. Many research professors are also excellent teachers, and in some courses, being at the forefront of the discipline's knowledge is a great advantage. And I do feel empathy for faculty who find themselves in a position where the reward structure effectively forces them to devote little or no time to their teaching. Teaching well takes time.

But let's also keep in mind that those same faculty members chose to work at an elite university and, unlike many of their students, the faculty know what that means for the balance between research and teaching in their careers.

Over the last decade or two, funding for public universities in most states has fallen dramatically compared to the cost of instruction. I hope state legislatures eventually remember that great teaching takes time, and take that into account when they are allocating resources among the institutions that focus on research and those that focus on students.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 09, 2014 5:40 PM

Storytelling as a Source of Insight

Last week, someone tweeted a link to Learning to Learn, a decade-old blog entry by Chris Sells. His lesson is two-fold. To build a foundation of knowledge, he asks "why?" a lot. That is a good tactic in many settings.

He learned a method for gaining deeper insight by accident. He was tabbed to teach a five-day course. When he reached Day 5, he realized that he didn't know the material well enough -- he didn't know what to say. The only person who could take over for him was out of town. So he hyperventilated for a while, then decided to fake it.

That's when learning happened:

As I was giving the slides that morning on COM structured storage (a particularly nasty topic in COM), I found myself learning how it worked as I told the story to my audience. All the studying and experimentation I'd done had given me the foundation, but not the insights. The insights I gained as I explained the technology. It was like fireworks. I went through those topics in a state of bliss as all of what was really going on underneath exploded in my head. That moment taught me the value of teaching as a way of learning that I've used since. Now, I always explain stuff as a way to gain insight into whatever I'm struggling with, whether it's through speaking or writing. For me, and for lots of others, story telling is where real insight comes from.

I have been teaching long enough to know that it doesn't always go this smoothly. Often, when I don't know what to say, I don't do a very good job. Occasionally, I fail spectacularly. But it happens surprisingly often just as Sells describes. If we have a base of knowledge, the words come, and in explaining an idea for the first time -- or the hundredth -- we come to understand it in a new, deeper way. Sometimes, we write to learn. Other times, we teach.

We can help our students benefit from this phenomenon, too. Of course, we ask them to write. But we can go further with in-class activities in which students discuss topics and explain them to one another. Important cognitive processing happens when students explain a concept that doesn't happen when they study on their own.

I think the teach-to-learn phenomenon is at play in the "why?" tactic we use to learn in the first place. The answer to a why is a reason, an explanation. Asking "why?" is the beginning of telling stories to ourselves.

Story telling is, indeed, a source of deeper insight.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 07, 2014 2:12 PM

Three Students, Three Questions

In the lab, Student 1 asks a question about the loop variable on a Python for statement. My first thought is, "How can you not know that? We are in Week 11." I answer, he asks another question, and we talk some more. The conversation shows me that he has understood some ideas at a deeper level, but a little piece was missing. His question helped him build a more complete and accurate model of how programs work.

Before class, Student 2 asks a question about our current programming assignment. My first thought is, "Have you read the assignment? It answers your question." I answer, he asks another question, and we talk some more. The conversation shows me that he is thinking carefully about details of the assignment, but assignments at this level of detail are new to him. His question helped him learn a bit more about how to read a specification.

After class, Student 3 asks a question about our previous programming assignment. We had recently looked at my solution to the assignment and discussed design issues. "Your program is so clean and organized. My program is so ugly. How can I write better-looking programs?" He is already one of the better students in the course. We discuss the role of experience in writing clearly, and I explain that the best programs are often the result of revision and refactoring. They started out just good enough, and the author worked to make them better. The conversation shows me that he cares about the quality of his code, that elegance matters as much to him as correctness. His question keeps him moving along the path to becoming a good programmer.

Three students, three questions: all three are signs of good things to come. They also remind me that even questions which seem backward at first can point forward.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 01, 2014 3:27 PM

Passion is a Heavy Burden

Mark Guzdial blogged this morning about the challenge of turning business teachers into CS teachers. Where is the passion? he asks.

These days, I wince every time I hear the word 'passion'. We apply it to so many things. We expect teachers to have passion for the courses they teach, students to have passion for the courses they take, and graduates to have passion for the jobs they do and the careers they build.

Passion is a heavy burden. In particular, I've seen it paralyze otherwise well-adjusted college students who think they need to try another major, because they don't feel a passion for the one they are currently studying. They don't realize that often passion comes later, after they master something, do it for a while, and come to appreciate it in ways they could never imagine before. I'm sure some of these students become alumni who are discontent with their careers, because they don't feel passion.

I think requiring all CS teachers to have a passion for CS sets the bar too high. It's an unrealistic expectation of prospective teachers and of the programs that prepare them.

We can survive without passionate teachers. We should set our sights on more realistic and relevant goals:

  • Teachers should be curious. They should have a desire to learn new things.
  • Teachers should be professional. They should have a desire to do their jobs well.
  • Teachers should be competent. They should be capable of doing their jobs well.

Curiosity is so much more important than passion for most people in most contexts. If you are curious, you will like encountering new ideas and learning new skills. That enjoyment will carry you a long way. It may even help you find your passion.

Perhaps we should set similarly realistic goals for our students, too. If they are curious, professional, and competent, they will most likely be successful -- and content, if not happy. We could all do worse.

Posted by Eugene Wallingford | Permalink | Categories: General, Personal, Teaching and Learning

October 30, 2014 3:11 PM

Why It's Important to Pique Your Students' Curiosity

Because they will remember better! This Science Alert summarizes research which shows that "our minds actively reward us for seeking out the information we're most interested in". When people attain something they find intrinsically valuable, their brains release a dose of dopamine, which increases the learner's energy level and seems to enhance the connections between cells that contribute to remembering.

What I found most interesting, though, was an unexpected connection, reported in this research paper from the journal Neuron (emphasis added):

People find it easier to learn about topics that interest them, but little is known about the mechanisms by which intrinsic motivational states affect learning. We used functional magnetic resonance imaging to investigate how curiosity (intrinsic motivation to learn) influences memory. In both immediate and one-day-delayed memory tests, participants showed improved memory for information that they were curious about and for incidental material learned during states of high curiosity.

This study suggests that when people are curious about something, they remember better what they learn about it, but they also remember better other information they come into contact with at the same time.

So, it might be worth opening class with a question or challenge that excites students and primes their curiosity, even if it is only tangentially related to the course material. In the process of learning the material they find interesting, they may learn better the material we find interesting.

When trying to teach students about loops or variables or functional decomposition, it is worth the effort to find and cultivate good problems to apply the concept to. If the problem piques the students' interest, it may increase their brains' receptiveness to learning the computational ideas.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 17, 2014 3:05 PM

Assorted Quotes

... on how the world evolves.

On the evolution of education in the Age of the Web. Tyler Cowen, in Average Is Over, via The Atlantic:

It will become increasingly apparent how much of current education is driven by human weakness, namely the inability of most students to simply sit down and try to learn something on their own.

I'm curious whether we'll ever see a significant change in the number of students who can and do take the reins for themselves.

On the evolution of the Web. Jon Udell, in A Web of Agreements and Disagreements:

The web works as well as it does because we mostly agree on a set of tools and practices. But it evolves when we disagree, try different approaches, and test them against one another in a marketplace of ideas. Citizens of a web-literate planet should appreciate both the agreements and the disagreements.

Some disagreements are easier to appreciate after they fade into history.

On the evolution of software. Nat Pryce on the Twitter, via The Problematic Culture of "Worse is Better":

Eventually a software project becomes a small amount of useful logic hidden among code that copies data between incompatible JSON libraries

Not all citizens of a web-literate planet appreciate disagreements between JSON libraries. Or Ruby gems.

On the evolution of start-ups. Rands, in The Old Guard:

... when [the Old Guard] say, "It feels off..." what they are poorly articulating is, "This process that you're building does not support one (or more) of the key values of the company."

I suspect the presence of incompatible JSON libraries means that our software no longer supports the key values of our company.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Managing and Leading, Software Development, Teaching and Learning

October 16, 2014 3:54 PM

For Programmers, There Is No "Normal Person" Feeling

I see this in the lab every week. One minute, my students sit peering at their monitors, their heads buried in their hands. They can't do anything right. The next minute, I hear shouts of exultation and turn to see them, arms thrust in the air, celebrating their latest victory over the Gods of Programming. Moments later I look up and see their heads again in their hands. They are despondent. "When will this madness end?"

Last week, I ran across a tweet from Christina Cacioppo that expresses nicely a feeling that has been vexing so many of my intro CS students this semester:

I still find programming odd, in part, because I'm either amazed by how brilliant or how idiotic I am. There's no normal-person feeling.

Christina is no beginner, and neither am I. Yet we know this feeling well. Most programmers do, because it's a natural part of tackling problems that challenge us. If we didn't bounce between puzzlement and exultation, we wouldn't be tackling hard-enough problems.

What seems strange to my students, and even to programmers with years of experience, is that there doesn't seem to be a middle ground. It's up or down. The only time we feel like normal people is when we aren't programming at all. (Even then, I don't have many normal-person feelings, but that's probably just me.)

I've always been comfortable with this bipolarity, which is part of why I have always felt comfortable as a programmer. I don't know how much of this comfort is natural inclination -- a personality trait -- and how much of it is learned attitude. I am sure it's a mixture of both. I've always liked solving puzzles, which inspired me to struggle with them, which helped me get better at struggling with them.

Part of the job in teaching beginners to program is to convince them that this is a habit they can learn. Whatever their natural inclination, persistence and practice will help them develop the stamina they need to stick with hard problems and the emotional balance they need to handle the oscillations between exultation and despondency.

I try to help my students see that persistence and practice are the answer to most questions involving missing skills or bad habits. A big part of helping them do this is coaching and cheerleading, not teaching programming language syntax and computational concepts. Coaching and cheerleading are not always tasks that come naturally to computer science PhDs, who are often most comfortable with syntax and abstractions. As a result, many CS profs are uncomfortable performing them, even when that's what our students need most. How do we get better at performing them? Persistence and practice.

The "no normal-person feeling" feature of programming is an instance of a more general feature of doing science. Martin Schwartz, a microbiologist at the University of Virginia, wrote a marvelous one-page article called The importance of stupidity in scientific research that discusses this element of being a scientist. Here's a representative sentence:

One of the beautiful things about science is that it allows us to bumble along, getting it wrong time after time, and feel perfectly fine as long as we learn something each time.

Scientists get used to this feeling. My students can, too. I already see the resilience growing in many of them. After the moment of exultation passes following their latest conquest, they dive into the next task. I see a gleam in their eyes as they realize they have no idea what to do. It's time to bury their heads in their hands and think.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 15, 2014 3:54 PM

Maybe We Just Need to Teach Better

A couple of weeks ago, I wrote Skills We Can Learn in response to a thread on the SIGCSE mailing list. Mark Guzdial has now written a series of posts in response to that thread, most recently Teaching Computer Science Better To Get Better Results. Here is one of the key paragraphs in his latest piece:

I watch my children taking CS classes, along with English, Chemistry, Physics, and Biology classes. In the CS classes, they code. In the other classes, they do on-line interactive exercises, they write papers, they use simulations, they solve problems by-hand. Back in CS, the only activity is coding with feedback. If we only have one technique for teaching, we shouldn't be surprised if it doesn't always work.

Mark then offers a reasonable hypothesis: We get poor results because we use ineffective teaching methods.

That's worthy of a new maxim of the sort found in my previous post: If things aren't going well in my course, it's probably my fault. Mark's hypothesis sounds more professional.

A skeptic might say that learning to program is like learning to speak a new human language, and when we learn new human languages we spend most of our time reading, writing, and speaking, and getting feedback from these activities. In an introductory programming course, the programming exercises are where students read, write, and get feedback. Isn't that enough?

For some students, yes, but not for all. This is also true in introductory foreign language courses, which is why teachers in those courses usually include games and other activities to engage the students and provide different kinds of feedback. Many of us do more than just programming exercises in computer science courses, too. In courses with theory and analysis, we give homework that asks students to solve problems, compute results, or give proofs for assertions about computation.

In my algorithms course, I open most days with a game. Students play the game for a while, and then we discuss strategies for playing the game well. I choose games whose playing strategies illustrate some algorithm design technique we are studying. This is a lot more fun than yet another Design an algorithm to... exercise. Some students seem to understand the ideas better, or at least differently, when they experience the ideas in a wider context.

I'm teaching our intro course right now, and over the last few weeks I have come to appreciate the paucity of different teaching techniques and methods used by a typical textbook. This is my first time teaching the course in ten years, and I'm creating a lot of my own materials from scratch. The quality and diversity of the materials are limited by my time and recent experience, with the result being... a lot of reading and writing of code.

What of the other kinds of activities that Mark mentions? Some code reading can be turned into problems that the students solve by hand. I have tried a couple of debugging exercises that students seemed to find useful. I'm only now beginning to see the ways in which those exercises succeeded and failed, as the students take on bigger tasks.

I can imagine all sorts of on-line interactive exercises and simulations that would help in this course. In particular, a visual simulator for various types of loops could help students see a program's repetitive behavior more immediately than watching the output of a simple program. Many of my students would likely benefit from a Bret Victor-like interactive document that exposes the internal working of, say, a for loop. Still others could use assistance with even simpler concepts, such as sequences of statements, assignment to variables, and choices.
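
No interactive document fits in a blog post, but the spirit of the idea can be sketched in plain Python: a loop that narrates its own internal workings as it runs. The helper below is my own hypothetical illustration, not a tool mentioned above; it simply prints the state of each variable at every step, the way a visual simulator might animate it.

```python
# A minimal sketch of "exposing the internal workings of a for loop":
# a running-total loop that prints its state at every iteration.
# The function name and output format are invented for illustration.

def traced_running_total(values):
    """Sum a list of numbers, printing the loop's state at every step."""
    total = 0
    print(f"start: total = {total}")
    for i, value in enumerate(values):
        total = total + value
        print(f"iteration {i}: value = {value}, total = {total}")
    return total

traced_running_total([3, 1, 4])
```

Even this crude text trace makes the repetition visible in a way that a program's final output does not: students can watch `total` accumulate one assignment at a time.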

In any case, I second Mark's calls to action. We need to find more and better methods for teaching CS topics. We need to find better ways to make proven methods available to CS instructors. Most importantly, we need to expect more of ourselves and demand more from our profession.

When things go poorly in my classroom, it's usually my fault.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 14, 2014 2:22 PM

A Handy Guideline for Teaching Beginners

I was putting together a short exercise for my students to do in class today. "Maybe this is too difficult. Should I make it easier?"

If you ever ask yourself this question, there is only one answer.

When in doubt, make it simpler.

On the days when I trust its wisdom and worry that I've made things too easy to be interesting, I am usually surprised by how well things go. On the days when I get lazy, or a little cocky, and stick with something that caused me to wonder, there is usually very little surprise.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 02, 2014 3:46 PM

Skills We Can Learn

In a thread on motivating students on the SIGCSE mailing list, a longtime CS prof and textbook author wrote:

Over the years, I have come to believe that those of us who can become successful programmers have different internal wiring than most in the population. We know you need problem solving, mathematical, and intellectual skills but beyond that you need to be persistent, diligent, patient, and willing to deal with failure and learn from it.

These are necessary skills, indeed. Many of our students come to us without these skills and struggle to learn how to think like a computer scientist. And without persistence, diligence, patience, and a willingness to deal with failure and learn from it, anyone will likely have a difficult time learning to program.

Over time, it's natural to begin to think that these attributes are prerequisites -- things a person must have before he or she can learn to write programs. But I think that's wrong.

As someone else pointed out in the thread, too many people believe that to succeed in certain disciplines, one must be gifted, possessing an inherent talent for doing that kind of thing. Science, math, and computer science fit firmly in that set of disciplines for most people. Carol Dweck has shown that a "fixed" mindset of this sort prevents many people from sticking with these disciplines when they hit challenges, or even trying to learn them in the first place.

The attitude expressed in the quote above is counterproductive for teachers, whose job it is to help students learn things even when the students don't think they can.

When I talk to my students, I acknowledge that, to succeed in CS, you need to be persistent, diligent, patient, and willing to deal with failure and learn from it. But I approach these attributes from a growth mindset:

Persistence, diligence, patience, and willingness to learn from failure are habits anyone can develop with practice. Students can develop these habits regardless of their natural gifts or their previous education.

Aristotle said that excellence is not an act, but a habit. So are most of the attributes we need to succeed in CS. They are habits, not traits we are born with or actions we take.

Donald Knuth once said that only about 2 per cent of the population "resonates" with programming the way he does. That may be true. But even if most of us will never be part of Knuth's 2%, we can all develop the habits we need to program at a basic level. And a lot more than 2% are capable of building successful careers in the discipline.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

September 29, 2014 1:54 PM

"Classes I Control"

I saw a commercial recently for one of those on-line schools no one has ever heard of. In it, a non-traditional student with substantial job experience said, "At [On-Line U.], I can take classes I control."

I understand a desire for control, especially given the circumstances in which so many people go to university now. Late twenties or older, a family, a house, bills to pay. Under such conditions, school becomes a mercenary activity: get in, get a few useful skills and a credential, get out. Maximize ROI; minimize expenses.

In comparison, my years studying at a university were a luxury. I went to college straight out of high school, in a time with low tuition and even reasonable room and board. I was lucky to have a generous scholarship that defrayed my costs. But even my friends without scholarships seemed more relaxed about paying for school than students these days. It wasn't because Mom and Dad were picking up the tab, either; most of my friends paid their own way.

The cost was reasonable and as a result, perhaps, students of my era didn't feel quite the same need to control all of their classes. That is just as well, because we didn't have much control, nor much bargaining power to change how our professors worked.

What a fortunate powerlessness that was, though. In most courses, I encountered creative, intelligent professors. Once a year or so, I would walk into a course with rather pedestrian goals only to find that the professor had something different in mind, something unimagined, something wonderful. If naive, twenty-year-old Eugene had had control of all his courses, he would likely have missed out on a few experiences that changed his life.

What a great luxury it was to surrender control for eleven weeks and be surprised by new knowledge, ideas, and possibilities -- and by the professors who made the effort to take me there.

I know I was lucky in a lot of ways, and for that I am thankful. I hope that our inability or unwillingness to keep public universities affordable doesn't have as an unintended casualty the wonderful surprises that can happen in our courses.

Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

September 28, 2014 10:21 AM

Modeling the Unseen

This passage comes from an article at LessWrong about making beliefs "pay rent" by holding them accountable for the experiences they entail:

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

Our students bring this double-edged sword to the classroom with them.

Students seem able to learn important ideas even when teachers present the ideas poorly, or inconsistently, or confusingly. I probably don't want to know how often I depend on this good fortune when I teach...

At the same time, students can build models that are flatly untrue. This isn't surprising. When we draw conclusions from incomplete evidence and examples, we will occasionally go astray. The search space in which students work is vast; it is remarkable that they don't go astray more often.

Teachers experience both edges of the sword. Students model the unseen and, for the most part, the model they build is invisible to us. When is it an accurate model?

One of the biggest challenges for teachers is to bring both of the unseens closer to the surface.

  • When we make what we want students to learn more visible, they are able to form more accurate models more quickly.
  • When we make the students' models more visible, we are able to diagnose inaccurate models and help students improve their models more quickly.

The second of these is what makes face-to-face instruction and one-on-one interaction so powerful. We bring students' models to the surface most effectively in the ways we discuss ideas with them. Our greatest opportunities to discuss come from asking students to build something and then discussing with them both the product and the process of making it.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 23, 2014 4:37 PM

The Obstacles in the Way of Teaching More Students to Program

All students should learn to program? Not so fast, says Larry Cuban in this Washington Post blog entry. History, including the Logo movement, illustrates several ways in which such a requirement can fail. I've discussed Cuban's article with a couple of colleagues, and all are skeptical. They acknowledge that he raises important issues, but in the end they offer a "yeah, but...". It is easy to imagine that things are different now, and the result will be similarly different.

I am willing to believe that things may be different this time. They always are. I've written favorably here in the past of the value of more students learning to program, but I've also been skeptical of requiring it. Student motivations change when they "have to take that class". And where will all the teachers come from?

In any case, it is wise to be alert to how efforts to increase the reach of programming instruction have fared. Cuban reminds us of some of the risks. One line in his article expresses what is, to my mind, the biggest challenge facing this effort:

Traditional schools adapt reforms to meet institutional needs.

Our K-12 school system is a big, complex organism (actually, fifty-one of them). It tends to keep moving in the direction of its own inertia. If a proposed reform fits its needs, the system may well adopt it. If it doesn't, but external forces push the new idea onto the system, the idea is adapted -- assimilated into what the institution already wants to be, not what the reform actually promises.

We see this in the university all the time, too. Consider accountability measures such as student outcomes assessment. Many schools have adopted the language of SOA, but rarely do faculty and programs change how they behave all that much. They just find ways to generate reports that keep the external pressures at bay. The university and its faculty may well care about accountability, but they tend to keep on doing it the way they want to do it.

So, how can we maximize the possibility of substantive change in the effort to teach more students how to program, and not simply create a new "initiative" with frequent mentions in brochures and annual reports? Mark Guzdial has been pointing us in the right direction. Perhaps the most effective way to change K-12 schools is to change the teachers we send into the schools. We teach more people to be computing teachers, or prepare more teachers in the traditional subjects to teach computing. We prepare them to recognize opportunities to introduce computing into their courses and curricula.

In this sense, universities have an irreplaceable role to play in the revolution. We teach the teachers.

Big companies can fund such programs and help us reach younger students directly. But that isn't enough. Google's CS4HS program has been invaluable in helping universities reach current K-12 teachers, but they are a small percentage of the installed base of teachers. In our schools of education, we can reach every future teacher -- if we all work together within and across university boundaries.

Of course, this creates a challenge at the meta-level. Universities are big, complex organisms, too. They tend to keep moving in the direction of their own inertia. Simply pushing the idea of programming instruction onto the system from the outside is more likely to result in harmless assimilation than in substantive change. We are back to Cuban's square one.

Still, against all these forces, many people are working to make a change. Perhaps this time will be different after all.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 19, 2014 3:34 PM

Ask Yourself, "What is the Pattern?"

I ran across this paragraph in an essay about things you really need to learn in college:

Indeed, you should view the study of mathematics, history, science, and mechanics as the study of archetypes, basic patterns that you will recognize over and over. But this means that, when you study these disciplines, you should be asking, "what is the pattern" (and not merely "what are the facts"). And asking this question will actually make these disciplines easier to learn.

Even in our intro course, I try to help students develop this habit. Rather than spending all of our time looking at syntax and a laundry list of language features, I am introducing them to some of the most basic code patterns, structures they will encounter repeatedly as they solve problems at this level. In week one came Input-Process-Output. Then after learning basic control structures, we encountered guarded actions, range tests, running totals, sentinel loops, and "loop and a half". We encounter these patterns in the process of solving problems.

While they are quite low-level, they are not merely idioms. They are patterns every bit as much as patterns at the level of the Gang of Four or PoSA. They solve common problems, recur in many forms, and are subject to trade-offs that depend on the specific problem instance.

They compose nicely to create larger programs. One of my goals for next week is to have students solve new problems that allow them to assemble programs from ideas they have already seen. No new syntax or language features, just new problems.
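
To make the composition concrete, here is a hedged sketch of my own (not an example from the course materials) in which several of the patterns named above work together in one small program: a sentinel loop, a range test guarding an action, and a running total.

```python
# A small program composed from low-level patterns:
# sentinel loop + guarded action (range test) + running total.

SENTINEL = -1  # value that signals "no more input"

def average_of_scores(scores):
    """Average the scores that appear before the sentinel,
    skipping values outside the valid range 0..100."""
    total = 0          # running total
    count = 0
    for score in scores:
        if score == SENTINEL:    # sentinel loop: stop at the signal value
            break
        if 0 <= score <= 100:    # range test guarding the action
            total += score
            count += 1
    return total / count if count > 0 else 0.0

# 120 is out of range and skipped; -1 stops the loop before 55 is seen
print(average_of_scores([90, 85, 120, 70, -1, 55]))
```

Each pattern in the function solves its own small problem; the program is simply their composition, which is exactly the skill the new-problems exercises aim to build.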

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

September 04, 2014 3:32 PM

Language Isn't Just for Experts

Stephen Ramsay wrote The Mythical Man-Finger, in defense of an earlier piece on the virtues of the command line. The gist of his argument is this:

... the idea that language is for power users and pictures and index fingers are for those poor besotted fools who just want toast in the morning is an extremely retrograde idea from which we should strive to emancipate ourselves.

Ramsay is an English professor who works in digital humanities. From the writings posted on his web site, it seems that he spends nearly as much time teaching and doing computing these days as he spends on the humanities. This opens him to objections from his colleagues, some of whom minimize the relevance of his perspective for other humanists by reminding him that he is a geek. He is one of those experts who can't see past his own expertise. We see this sort of rhetorical move in the tech world all the time.

I think the case is quite the opposite. Ramsay is an expert on language. He knows that language is powerful, that language is more powerful than the alternatives in many contexts. When we hide language from our users, we limit them. Other tools can optimize for a small set of particular use cases, but they generally make it harder to step outside of those lines drawn by the creator of the tools: to combine tasks in novel ways, to extend them, to integrate them with other tools.

Many of my intro students are just beginning to see what knowing a programming language can mean. Giving someone language is one of the best ways to empower them, and also a great way to help them even see what is possible.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 03, 2014 4:13 PM

My PhD Advisor Takes on a New Challenge

Just this week I learned that Jon Sticklen, my PhD advisor, has moved to Michigan Tech to chair its Department of Engineering Fundamentals. As I recall, Michigan Tech focuses much of its effort on undergraduate engineering education. This makes it a good fit for Jon, who has been working on projects in engineering education at Michigan State for a number of years now, with some success. I wish him and them well.

By the way, if you can handle a strong winter, then Tech can be a great place to live. The upper peninsula of Michigan is stunning!

Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

September 01, 2014 3:17 PM

Back to the Beginning

August was quiet on my blog only because it was anything but quiet elsewhere. The department office had its usual August business plus a couple of new challenges thrown in. I spent one day on jury duty, one day in retreat with fellow department heads, and one day on a long bike ride. My older daughter was home for a few days before heading back to college for her senior year, and my younger daughter was preparing to leave for college for the first time.

On top of that, I am teaching our intro course this fall. I have not taught intro since the fall of 2006, when I introduced media computation into our Java track. Before that we have to go back to the late 1990s to find me in front of a classroom full of students embarking on their first programming experience. I'm excited and a little apprehensive. There is great opportunity in helping students lay the foundation for the rest of their CS coursework. But there is also great risk. For the most part, these students have never worked with a floating-point number or a variable or an assignment statement, at least in the context of a programming language. How badly might I lead them astray?

We now teach Python in this track. I could have used media comp as our organizing theme again, but the instructors who have been teaching in this track for the last few years have moved to a data manipulation theme, using a textbook by Bill Punch and Rich Enbody. I decided to do the same. There is no sense in me disrupting the flow of the track, especially with the likelihood that I won't teach the course again in the spring. (In the interest of full disclosure, I told my students that Bill was one of my mentors in grad school at Michigan State.)

The first week of class went well. As expected, the students reminded me how different teaching intro can be. There are so many ways for novices to interpret so many things... Type a simple expression or two into the Python shell, ask them what they think, and find out for yourself!

Every teacher knows that the first day of class shatters any illusion we might have of teaching the perfect course. Such illusions are more common for me when I teach a course for the first time, or the first time in a long while. The upside of shattering the illusion is that I can move on to the daily business of getting better.

At the end of our first lab session, I walked with one student as he was leaving the room. He had asked a few questions during the exercise. I asked how he felt, now that he had successfully completed his first lab as a CS major. "I am excited and scared," he said. "Scared has been keeping me away from computer science, but I know I have to try. I'm excited."

I know exactly how he feels. I'm apprehensive, not in fearing failure or catastrophe, but in being aware that I must remain vigilant. When we teach, we affect other peoples' lives. Teaching a first course in the discipline, introducing students to a new set of ideas and way of thinking, is a multiplier on this effect. I owe it to these students to help them overcome their fears and realize their excitement.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

August 19, 2014 1:49 PM

The Universal Justification

Because we need it to tell better stories.

Ethan Zuckerman says that this is the reason people are addicted to big data, quoting Maciej Ceglowski's wonderful The Internet with a Human Face. But if you look deep enough, this is the reason that most of us do so many of the things we do. We want to tell better stories.

As I teach our intro course this fall, I am going to ask myself occasionally, "How does what we are learning today help my students tell a better story?" I'm curious to see how that changes the way I think about the things we do in class.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

July 30, 2014 1:04 PM

The Reach of a MOOC

Doug Schmidt

Distributed computing, OOP, and patterns guru Doug Schmidt recently posted on Facebook a statistical recap of his summer MOOC on pattern-oriented software architectures for mobile devices and cloud computing. Approximately 80,000 people signed up for the course, 40,000 people showed up, and 4,000 people completed the course in a meaningful way (taking automated quizzes and possibly doing peer-graded programming assignments). So that's either a 5% completion rate for the course, or 10%, depending on which standard you prefer.

A lot of folks complain that the reach of MOOCs is muted by their notoriously low completion rates. But Schmidt puts the numbers into perspective:

... I've taught ~1,000 students at Wash U., UC Irvine, and Vanderbilt in the past 20 years, so regardless of the completion *rate* the opportunity to reach > 4,000 students and teach them about patterns and frameworks for concurrent programming in Java and Android is pretty cool!

Schmidt has a lot of knowledge and experience to share. His MOOC shared it with an awful lot of people in one offering.

My department has not attempted a "massive" on-line course yet, though a few of our faculty did take a small first step last month. As Mark Guzdial lamented a few months ago, Google required that all of its CS4HS summer workshops be offered on-line. A few of my colleagues, led by Ben Schafer, have taught CS4HS workshops for the last five years, reaching in the ballpark of 20-25 teachers from northeast Iowa in each of the first four. As reported in the department's own Facebook post, this year the course enrolled 245 teachers from thirty-nine states and Puerto Rico. I haven't seen final numbers for the workshop yet, but just after it ended Ben reported good participation and positive evaluations from the teachers in the course.

I don't know yet what I think about MOOCs. The trade-offs are numerous, and most of my teaching experience is in smaller, more intimate settings that thrive on individual relationships with students. But I can't deny the potential for MOOCs to reach so many people and to provide access to valuable courses for people who otherwise could likely never attend them.

On a lighter note, the first comment in response to Schmidt's Facebook post is my favorite in a while:

I just loaded the dishwasher. Our jobs are so similar! Crazy, eh?

Don't worry, Kristie. Sometimes, I look at all the amazing things Doug does and feel exactly the same.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 17, 2014 10:00 AM

A New Commandment

... I give unto you:

Our first reaction to any comrade, any other person passionate about and interested in building things with computers, any human crazy and masochistic enough to try to expand the capabilities of these absurd machines, should be empathy and love.

Courtesy of Britt Butler.

I hope to impart such empathy and love to my intro students this fall. Love to program, and be part of a community that loves and learns together.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

July 16, 2014 2:11 PM

Burn All Your Sermons

Marketers and bridge players have their Rules of Seven. Teachers and preachers might, too, if they believe this old saw:

Once in seven years I burn all my sermons; for it is a shame if I cannot write better sermons now than I did seven years ago.

I don't have many courses in which I lecture uninterrupted for long periods of time. Most of my courses are a mixture of short lectures, student exercises, and other activities that explore or build upon whatever we are studying. Even when I have a set of materials I really like, which have been successful for me and my students in the past, I am forever reinventing them, tweaking and improving as we move through the course. This is in the same spirit as the rule of seven: surely I can make something better since the last time I taught the course.

Having a complete set of materials for a course to start from can be a great comfort. It can also be a straitjacket. The high-level structure of a course design limits how we think about the essential goals and topics of the course. The low-level structure generally optimizes for specific transitions and connections, which limits how easily we can swap in new examples and exercises.

Even as an inveterate tinkerer, I occasionally desire to break out of the straitjacket of old material and make a fresh start. Burn it all and start over. Freedom! What I need to remember will come back to me.

The adage quoted above tells us to do this regularly even if we don't feel the urge. The world changes around us. Our understanding grows. Our skills as a writer and storyteller grow. We can do better.

Of course, starting over requires time. It's a lot quicker to prep a course by pulling a prepped course out of an old directory of courses and cleaning it up around the edges. When I decide to redesign a course from bottom up, I usually have to set aside part of a summer to allow for long hours writing from scratch. This is a cost you have to take into account any time you create a new course.

Being in computer science makes it easier to force ourselves to start from scratch. While many of the principles of CS remain the same across decades, the practices and details of the discipline change all the time. And whatever we want to say about timeless principles, the undergrads in my courses care deeply about having some currency when they graduate.

In Fall 2006, I taught our intro course. The course used Java, which was the first language in our curriculum at that time. Before that, the last time I had taught the course, our first language was Pascal. I had to teach an entirely new course, even though many of the principles of programming I wanted to teach were the same.

I'm teaching our intro course again this fall for the first time since 2006. Python is the language of choice now. I suppose I could dress my old Java course in a Python suit, but that would not serve my students well. It also wouldn't do justice to the important ideas of the course, or Python. Add to this that I am a different -- and I hope better -- teacher and programmer now than I was eight years ago, and I have all the reasons I need to design a new course.

So, I am getting busy. Burn all the sermons.

Of course, we should approach the seven-year advice with some caution. The above passage is often attributed to theologian John Wesley. And indeed he did write it. However, as is so often the case, it has been taken out of context. This is what Wesley actually wrote in his journal:

Tuesday, September 1.--I went to Tiverton. I was musing here on what I heard a good man say long since--"Once in seven years I burn all my sermons; for it is a shame if I cannot write better sermons now than I could seven years ago." Whatever others can do, I really cannot. I cannot write a better sermon on the Good Steward than I did seven years ago; I cannot write a better on the Great Assize than I did twenty years ago; I cannot write a better on the Use of Money, than I did nearly thirty years ago; nay, I know not that I can write a better on the Circumcision of the Heart than I did five-and-forty years ago. Perhaps, indeed, I may have read five or six hundred books more than I had then, and may know a little more history, or natural philosophy, than I did; but I am not sensible that this has made any essential addition to my knowledge in divinity. Forty years ago I knew and preached every Christian doctrine which I preach now.

Note that Wesley attributes the passage to someone else -- and then proceeds to deny its validity in his own preaching! We may choose to adopt the Rule of Seven in our teaching, but we cannot do so with Wesley as our prophet.

I'll stick with my longstanding practice of building on proven material when that seems best, and starting from scratch whenever the freedom to tell a new story outweighs the value of what has worked for me and my students in the past.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 15, 2014 3:19 PM

Ambiguous Questions and Wrong Answers

Pianist James Boyk recently shared a story about mathematician Andrew Gleason with me. Boyk had studied a bit with Gleason, who was known for his work in cryptography and in helping to solve Hilbert's fifth problem. Gleason also had a strong interest in early math education. Once, after observing first-grade math teaching for some weeks, Gleason said:

I never saw a kid give a wrong answer. I heard a lot of ambiguous or erroneous questions, but never a wrong answer.

As Boyk told me, there's a lesson in this attitude for everyone. So often in my own courses, I start to mark a student's answer as incorrect, step back, and realize that the question itself was ambiguous. The student had given a correct answer -- only to a question different than the one I had in my mind.

Not all my questions are ambiguous or erroneous, of course. Sometimes the student does give an incorrect answer. For a while now, I've been trying to retrain my mind to think in terms of incorrect answers rather than wrong answers. Yes, these words are synonyms, but their connotations differ. The meaning of "incorrect" is usually limited to the objective sense of being not in accordance with fact or standards. "Wrong" has a wider meaning that can also include being immoral or dishonest. Those words seem a bit too judgmental to be applied to my students' sorting algorithms and Scheme programs.

In the case of erroneous answers, I find I'm usually more effective if I focus on the factual incorrectness of an answer. What misconception or missing piece of knowledge led the student to this conclusion? How can I help the student recognize this flaw in the reasoning and reason more accurately in the future?

That seems to be the big lesson of Gleason's comment: to keep the teacher's focus on how a student's answer is the correct answer, given what he or she knows at a given point in time. The teacher's job is to ask better questions and lead students to a better state of knowing and doing.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 11, 2014 2:12 PM

Unclear on the Concept

I love the following passage from Dan Meyer. He is commenting on a survey reported here (scroll to the bottom of the page), in which math teachers were asked what the greatest limitations are on how they teach. 38.79% said, "students who are uninterested", and 23.56% said, "students who are disruptive". Says Meyer:

It's like reading a survey of firefighters in which, when asked about the greatest limitation on how they fight fires, 38.79% responded "all the fires" and 23.56% responded "being a first responder."

Students who are uninterested are part of the job. I don't know that we can make every student who walks into the classroom interested in CS. But I do know that we can teach CS in ways that lose the interest of too many students with the potential to succeed. Trust me; I've done it.

Also, let's not blame the K-12 system for creating students who are predisposed to be uninterested and to lack curiosity. It does not matter if that is true or not. My job is to teach the students in my classroom, not some mythical students somewhere else.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 30, 2014 4:29 PM

Sometimes, The Truth Is Boring

MacKenzie Bezos, in her Amazon review of The Everything Store: Jeff Bezos and the Age of Amazon, writes:

One of the biggest challenges in non-fiction writing is the risk that a truthfully balanced narration of the facts will be boring, and this presents an author with some difficult choices.

Teachers face these choices all the time, too. Whenever I teach a course, I want to help my students be excited about the ideas and tools that we are studying. I like to tell stories that entertain as well as illuminate. But not every moment of learning a new programming language, or a new programming style, or a set of mathematical ideas, is going to have my students on the edges of their seats.

The best I can hope for is that the exciting parts of the course will give us the momentum we need to make it through the more boring parts. A big part of education is learning that the best parts of a course are the motivation for doing the hard work that gets us to the next exciting idea.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 26, 2014 3:27 PM

An Argument Against Personalizing Instruction

Most people seem to believe that personalizing instruction to each individual is an unalloyed good. However, Benjamin Riley argues that two common axioms of individualized instruction "run afoul of our current understanding of cognition":

  • Students will learn more if they have more control over what they learn (the "path").
  • Students will learn more if they have more control over when they learn (the "pace").

He says that both run the risk of giving the learner too much freedom.

Path. Knowledge is cumulative, and students need a suitable context in which to interpret and assimilate new information. If they try to learn things in the wrong order, they may not be able to make sense of the new information. They are also more likely to become frustrated, which impedes learning further.

Pace. Thinking is hard, and learning isn't always fun. Most people naturally shy away from difficult or unpleasant tasks, which can slow their overall rate of learning when they have to choose what to work on next.

(Dan Meyer offers a second reason to doubt the pace axiom: a lot of the fun and insight that comes from learning happens when we learn synchronously with a group.)

Of course, we could take Riley's arguments to their extremes and eliminate any consideration of the individual from our instructional plans. That would be a mistake. For example, each student comes into the classroom with a particular level of understanding and a particular body of background knowledge. When we take this background into account in a reasonable way, then we should be able to maximize each student's learning potential. When we don't, we unnecessarily limit their learning.

However, on balance, I agree with Riley's concerns. Some of my university students benefit greatly when given control over their own learning. Most, though, struggle making choices about what to think about next and why. They also tend not to give themselves enough credit for how much they can learn if only they put in the time and energy studying and practicing. They need help with both path and pace.

I've been teaching long enough now to respect the value that comes with experience as a teacher. By no means am I a perfect teacher, but after teaching a course for a few times I begin to see ways in which I can order topics and pace the coverage in ways that help more students succeed in the course. I don't think I appreciated this when I was a student. The best teachers I ever had were the ones who had this experience and used it well.

I'll stick with my usual approach of trying to design a curriculum intentionally with regard to both order and timing, while at the same time trying to take my students' current knowledge into account as we move through the course.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 25, 2014 2:03 PM

You Shouldn't Need a License to Program

In Generation Liminal, Dorian Taylor recalls how the World Wide Web arrived at the perfect time in his life:

It's difficult to appreciate this tiny window of opportunity unless you were present for it. It was the World-Wild West, and it taught me one essential idea: that I can do things. I don't need a license, and I don't need credentials. I certainly don't need anybody telling me what to do. I just need the operating manual and some time to read it. And with that, I can bring some amazing -- and valuable -- creations to life.

I predate the birth of the web. But when we turned on the computers at my high school, BASIC was there. We could program, and it seemed the natural thing to do. These days, the dominant devices are smart phones and iPads and tablets. Users begin their experience far away from the magic of creating. It is a user experience for consumers.

One day many years ago, my older daughter needed to know how many words she had written for a school assignment. I showed her wc. She was amazed by its simplicity; it looked like nothing else she'd ever seen. She still uses it occasionally.
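Part of wc's charm is how little it does. A minimal sketch of its word-counting behavior in Python (my own illustration of what wc -w computes, not the actual implementation):

```python
def count_words(text: str) -> int:
    """Count whitespace-separated words, the way wc -w does."""
    return len(text.split())

# Any text works: an essay, a program, a blog post.
print(count_words("Confusion is the sweat of learning."))   # 6
```

The whole tool is, conceptually, one line: split on whitespace and count the pieces.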

I spent several days last week watching middle schoolers -- play. They consumed other people's creations, including some tools my colleagues set up for them. They have creative minds, but for the most part it doesn't occur to them that they can create things, too.

We need to let them know they don't need our permission to start, or credentials defined by anyone else. We need to give them the tools they need, and the time to play with them. And, sometimes, we need to give them a little push to get started.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 23, 2014 3:13 PM

The Coder's High Beats The Rest

At least David Auerbach thinks so. One of the reasons is that programming has a self-perpetuating cycle of creation, implementation, repair, and new birth:

"Coding" isn't just sitting down and churning out code. There's a fair amount of that, but it's complemented by large chunks of testing and debugging, where you put your code through its paces and see where it breaks, then chase down the clues to figure out what went wrong. Sometimes you spend a long time in one phase or another of this cycle, but especially as you near completion, the cycle tightens -- and becomes more addictive. You're boosted by the tight feedback cycle of coding, compiling, testing, and debugging, and each stage pretty much demands the next without delay. You write a feature, you want to see if it works. You test it, it breaks. It breaks, you want to fix it. You fix it, you want to build the next piece. And so on, with the tantalizing possibility of -- just maybe! -- a perfect piece of code gesturing at you in the distance.

My experience is similar. I can get lost for hours in code, and come out tired but mentally energized. Writing has never given me that kind of high, but then I've not written a really long piece of prose in a long time. Perhaps writing fiction could give me the sort of high I experience when deep in a program.

What about playing games? Back in my younger days, I experienced incredible flow while playing chess for long stretches. I never approached master level play, but a good game could still take my mind to a different level of consciousness. That high differed from a coder's high, though, in that it left me tired. After a three-round day at a chess tournament, all I wanted to do was sleep.

Getting lost in a computer game gives me a misleading feeling of flow, but it differs from the chess high. When I come out of a session lost in most computer games, I feel destroyed. The experience doesn't give life the way coding does, or the way I imagine meditation does. I just end up feeling tired and used. Maybe that's what drug addiction feels like.

I was thinking about computer games even before reading Auerbach's article. Last week, I was sitting next to one of the more mature kids in our summer camp after he had just spent some time gaming, er, collecting data for our study of internet traffic. We had an exchange that went something like this:

Student: I love this feeling. I'd like to create a game like this some day.

Eugene: You can!

Student: Really? Where?

Eugene: Here. A group of students in my class last month wrote a computer game next door. And it's way cooler than playing a game.

I was a little surprised to find that this young high schooler had no idea that he could learn computer programming at our university. Or maybe he didn't make the connection between computer games and computer programs.

In any case, this is one of the best reasons for us CS profs to get out of our university labs and classrooms and interact with younger students. Many of them have no way of knowing what computer science is, what they can do with computer science, or what computer science can do for them -- unless we show them!

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 17, 2014 2:38 PM

Cookies, Games, and Websites: A Summer Camp for Kids

Cut the Rope 2 logo

Today is the first day of Cookies, Games, and Websites, a four-day summer camp for middle-school students being offered by our department. A colleague of mine developed the idea for a workshop that would help kids of that age group understand better what goes on when they play games on their phones and tablets. I have been helping, as a sounding board for ideas during the prep phase and now as a chaperone and helper during the camp. A local high school student has been providing much more substantial help, setting up hardware and software and serving as a jack-of-all-trades.

The camp's hook is playing games. To judge from this diverse group of fifteen students from the area, kids this age already know very well how to download, install, and play games. Lots of games. Lots and lots of games. If they had spent as much time learning to program as they seem to have spent playing games, they would be true masters of the internet.

The first-order lesson of the camp is privacy. Kids this age play a lot of games, but they don't have a very good idea how much network traffic a game like Cut the Rope 2 generates, or how much traffic accessing Instagram generates. Many of their apps and social websites allow them to exercise some control over who sees what in their space, but they don't always know what that means. More importantly, they don't realize how important all this is, because they don't know how much traffic goes on under the hood when they use their mobile devices -- and even when they don't!

The second-order lesson of the camp, introduced as a means to an end, is computing: the technology that makes communication on the web possible, and some of the tools they can use to look at and make sense of the network traffic. We can use some tools they already know and love, such as Google maps, to visualize the relevant data.

This is a great idea: helping young people understand better the technology they use and why concepts like privacy matter to them when they are using that technology. If the camp is successful, they will be better-informed users of on-line technology, and better prepared to protect their identities and privacy. The camp should be a lot of fun, too, so perhaps one or two of them will be interested in diving deeper into computer science after the camp is over.

This morning, the campers learned a little about IP addresses and domain names, mostly through interactive exercises. This afternoon, they are learning a little about watching traffic on the net and then generating traffic by playing some of their favorite games. Tomorrow, we'll look at all the traffic they generated playing, as well as all the traffic generated while their tablets were idle overnight.

We are only three-fourths of the way through Day 1, and I have already learned my first lesson: I really don't want to teach middle school. The Grinch explains why quite succinctly: noise, noise, NOISE! One thing seems to be true of any room full of fifteen middle-school students: several of them are talking at any given time. They are fun people to be around, but they are wearing me out...

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 12, 2014 2:29 PM

Points of Emphasis for Teaching Design

As I mentioned recently, design skills were a limiting factor for some of the students in my May term course on agile software development. I saw similar issues for many in my spring Algorithms course as well. Implementing an algorithm from lecture or reading was straightforward enough, but organizing the code of the larger system in which the algorithm resided often created challenges for students.

I've been thinking about ways to improve how I teach design in the future, both in courses where design is a focus and in courses where it lives in the background of other material. Anything I come up with can also be part of conversations with colleagues as we talk about design in their courses.

I read Kent Beck's initial Responsive Design article when it first came out a few years ago and blogged about it then, because it had so many useful ideas for me and my students. I decided to re-read the article again last week, looking for a booster shot of inspiration.

First off, it was nice to remember how many of the techniques and ideas that Kent mentions already play a big role in my courses. Ones that stood out on this reading included:

  • taking safe steps,
  • isolating changes within modules,
  • recognizing that design is a team sport, fundamentally a social activity, and
  • playing with words and pictures.

My recent experiences in the classroom made two other items in Kent's list stand out as things I'll probably emphasize more, or at least differently, in upcoming courses.

Exploit Symmetries. Divide similar elements into identical parts and different parts.

As I noted in my first blog about this article, many programmers find it counterintuitive to use duplication as a tool in design. My students struggle with this, too. Soon after that blog entry, I described an example of increasing duplication in order to eliminate duplication in a course. A few years later, in a fit of deja vu, I wrote about another example, in which code duplication is a hint to think differently about a problem.

I am going to look for more opportunities to help students see ways in which they can make design better by isolating code into the identical and the different.
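A toy sketch of the symmetry idea, in the spirit of the examples above (the functions here are my own illustration, not Kent's): two similar functions share an identical skeleton, and isolating that skeleton exposes the difference as a parameter.

```python
# Before: two similar functions with a duplicated skeleton.
def mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / len(xs)

def mean_square(xs):
    total = 0
    for x in xs:
        total += x * x
    return total / len(xs)

# After: the identical part (accumulate, then average) lives in one
# place; the different part (what to accumulate) is passed in.
def mean_of(xs, f=lambda x: x):
    return sum(f(x) for x in xs) / len(xs)

print(mean_of([1, 2, 3]))                  # 2.0
print(mean_of([1, 2, 3], lambda x: x * x)) # 4.666...
```

Note that we first made the two functions *more* alike, so that the remaining difference stood out cleanly enough to factor away.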

Inside or Outside. Change the interface or the implementation but not both at the same time.

This is one of the fundamental tenets of design, something students should learn as early as possible. I was surprised to see how normal it was for students in my agile development course not to follow this pattern, even when it quickly got them into trouble. When you try to refactor interface and implementation at the same time, things usually don't go well. That's not a safe step to take...

My students and I discussed writing unit tests before writing code a lot during the course. Only afterward did it occur to me that Inside or Outside is the basic element of test-first programming and TDD. First, we write the test; this is where we design the interface of our system. Then, we write code to pass the test; this is where we implement the system.
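A minimal sketch of that rhythm (the Stack example is my own, not code from the course): the test pins down the interface from the outside; only then do we write the implementation inside.

```python
import unittest

# Outside first: the test fixes the interface we want.
class TestStack(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

# Inside second: just enough implementation to pass the test.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()
```

Each cycle changes one side of the boundary at a time: a new test reshapes the interface, then new code reshapes the implementation.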

Again, in upcoming courses, I am going to look for opportunities to help students think more effectively about the distinction between the inside and the outside of their code.

Thus, I have a couple of ideas for the future. Hurray! Even so, I'm not sure how I feel about my blog entry of four years ago. I had the good sense to read Kent's article back then, draw some good ideas from it, and write a blog entry about them. That's good. But here I am four years later, and I still feel like I need to make the same sort of improvements to how I teach.

In the end, I am glad I wrote that blog entry four years ago. Reading it now reminds me of thoughts I forgot long ago, and reminds me to aim higher. My opening reference to getting a booster shot seems like a useful analogy for talking about this situation in my teaching.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 07, 2014 10:17 AM

Pascal, Forgiveness, and CS1

Last time, I thought about the role of forgiveness in selecting programming languages for instruction. I mentioned that BASIC had worked well for me as a first programming language, as it had worked for so many others. Yet I would probably never choose it as a language for CS1, at least for more than a few weeks of instruction. It is missing a lot of the features that we want CS majors to learn about early. It's also a bit too free.

In that post, I did say that I still consider Pascal a good standard for first languages. It dominated CS1 for a couple of decades. What made it work so well as a first instructional language?

Pascal struck a nice balance for its time. It was small enough that students could master it all, and it provided constructs for structured programming. It had the sort of syntax that enabled a compiler to give students guidance about errors, but its compilers did not seem overbearing. It had a few "gotchas", such as the ; as a statement separator, but not so many that students were constantly perplexed. (Hey to C++.) Students were able to try things out and get programs to work without becoming demoralized by a seemingly endless stream of complaints.

(Aside: I have to admit that I liked Pascal's ; statement separator. I understood it conceptually and, in a strange way, appreciated it aesthetically. Most others seem to have disagreed with me...)

Python has attracted a lot of interest as a CS1 language in recent years. It's the first popular language in a long while that brings to mind Pascal's feel for me. However, Pascal had two things that supported the teaching of CS majors that Python does not: manifest types and pointers. I love dynamically-typed languages with managed memory and prefer them for my own work, but using that sort of language in CS1 creates some new challenges when preparing students for upper-division majors courses.

So, Pascal holds a special place for me as a CS1 language, though it was not the language I learned there. We used it to teach CS1 for many years and it served me and our students well. I think it balances a good level of forgiveness with a reasonable level of structure, all in a relatively small package.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 06, 2014 4:24 PM

Programming Languages and the Right Level of Forgiveness

In the last session of my May term course on agile software development, discussion eventually turned to tools and programming languages. We talked about whether some languages are more suited to agile development than others, and whether some languages are better tools for a given developer team at a given time. Students being students, we also discussed the languages used in CS courses, including the intro course.

Having recently thought some about choosing the right languages for early CS instruction, I was interested to hear what students thought. Haskell and Scala came up; they are the current pet languages of students in the course. So did Python, Java, and Ada, which are languages our students have seen in their first-year courses. I was the old guy in the room, so I mentioned Pascal, which I still consider a good standard for comparing CS1 languages, and classic Basic, which so many programmers of my generation and earlier learned as their first exposure to the magic of making computers do our bidding.

Somewhere in the conversation, an interesting idea came up regarding the first language that people learn: good first languages provide the right amount of forgiveness when programmers make mistakes.

A language that is too forgiving will allow the learner to be sloppy and fall into bad habits.

A language that is not forgiving enough can leave students dispirited under a barrage of "not good enough": type errors, syntax gotchas, and the like.

What we mean by 'forgiving' is hard to define. For this and other reasons, not everyone agrees with this claim.

Even when people agree in principle with this idea, they often have a hard time agreeing on where to draw the line between too forgiving and not forgiving enough. As with so many design decisions, the correct answer is likely a local maximum that balances the forces at play among the teachers, students, and desired applications involved.

I found Basic to be just right. It gave me freedom to play, to quickly make interesting programs run, and to learn from programs that didn't do what I expected. For many people's taste, though, Basic is too forgiving and leads to diseased minds. (Hey to Edsger Dijkstra.) Maybe I was fortunate to learn how to use GOSUBs early and well.

Haskell seems like a language that would be too unforgiving for most learners. Then again, neither my students nor I have experience with it as a first-year language, so maybe we are wrong. We could imagine ways in which learning it first would lead to useful habits of thought about types and problem decomposition. We are aware of schools that use Haskell in CS1; perhaps they have made it work for them. Still, it feels a little too judgmental...

In the end, you can't overlook context and the value of good tools. Maybe these things shift the line of "just right" forgiveness for different audiences. In any case, finding the right level seems to be a useful consideration in choosing a language.

I suspect this is true when choosing languages to work in professionally, too.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 05, 2014 2:45 PM

Choosing the Right Languages for Early CS Instruction is Important

In today's ACM interview, Donald Knuth identifies one of the problems he has with computer science instruction:

Similarly, the most common fault in computer classes is to emphasize the rules of specific programming languages, instead of to emphasize the algorithms that are being expressed in those languages. It's bad to dwell on form over substance.

I agree. The challenges are at least two in number:

  • ... finding the right level of support for the student learning his or her first language. Learning a first language is harder than many people realize until they have tried to teach one.

  • ... helping students develop the habit and necessary skills to learn new languages on their own with some facility. For many, this involves overcoming the fear they feel until they have done it on their own a time or two.

Choosing the right languages can greatly help in conquering Challenges 1 and 2. Choosing the wrong languages can make overcoming them almost impossible, if only because we lose students before they cross the divide.

I guess that makes choosing the right languages Challenge 3.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 30, 2014 4:09 PM

Programming is Social

The February 2014 issue of Math Horizons included A Conversation With Steven Strogatz, an interview conducted by Patrick Honner. The following passage came to mind this week:

PH: Math is collaborative?

SS: Yeah, math is social. ... The fact that math is social would come as a surprise to the people who think of it as antisocial.

PH: It might also come as a surprise to some math teachers!

SS: It's extremely social. Mathematicians constantly spend time talking to each other about places where they're stuck. They get insights from each other, new ways of looking at things. Sometimes it's just to commiserate.

Programming is social, too. Most people think it's not. With assistance from media portrayals of programmers and sloppy stereotypes of our own, they think most of us would prefer to work alone in the dark. Some do, of course, but even then most programmers I know like to talk shop with other programmers all the time. They like to talk about the places where they are stuck, as well as the places they used to be stuck. War stories are the currency of the programmer community.

I think a big chunk of the "programming solo" preference many programmers profess is learned habit. Most programming instruction and university course work encourages or requires students to work alone. What if we started students off with pair programming in their CS 1 course, and other courses nurtured that habit throughout the rest of their studies? Perhaps programmers would learn a different habit.

My agile software development students this semester are doing all of their project work via pair programming. Class time is full of discussion: about the problem they are solving, about the program they are evolving, and about the intricacies of Java. They've been learning something about all three, and a large part of that learning has been social.

They've only been trying out XP for a couple of weeks, so naturally the new style hasn't replaced their old habits. I see them fall out of pairing occasionally. One partner will switch off to another computer to look up the documentation for a Java class, and pretty soon both partners are quietly looking at their own screens. Out of deference to me or the course, though, they return after a couple of minutes and resume their conversation. (I'm trying to be a gentle coach, not a ruthless tyrant, when it comes to the practices.)

I suspect a couple members of the class would prefer to program on their own, even after noticing the benefits of pairing. Others really enjoy pair programming but may well fall back into solo programming after the class ends. Old habits die hard, if at all. That's too bad, because most of us are better programmers when pairing.

But even if they do choose, or fall back into, old habits, I'm sure that programming will remain a social activity for them, at some level. There are too many war stories to tell.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 28, 2014 4:20 PM

Programming for Everyone, Intro Physics Edition

Rhett Allain asked his intro physics students to write a short bit of Python code to demonstrate some idea from the course, such as the motion of an object with a constant force, or projectile motion with air resistance. Apparently, at least a few complained: "Wait! I'm not a computer scientist." That caused Allain to wonder...

I can just imagine the first time a physics faculty told a class that they needed to draw a free body diagram of the forces on an object for the physics solutions. I wonder if a student complained that this was supposed to be a physics class and not an art class.

As Allain points out, the barriers that used to prevent students from doing numerical calculations in computer programs have begun to disappear. We have more accessible languages now, such as Python, and powerful computers are everywhere, capable of running VPython and displaying beautiful visualizations.

About all that remains is teaching all physics students, even the non-majors, a little programming. The programs they write are simply another medium through which they can explore physical phenomena and perhaps come to understand them better.
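To make that concrete, here is a rough sketch, in plain Python rather than VPython, of the kind of numerical model such students might write: projectile motion with quadratic air resistance, stepped forward with simple Euler integration. The function name and constants are mine, chosen for illustration, not taken from Allain's course.

```python
import math

def simulate(v0, angle_deg, drag=0.01, mass=0.145, dt=0.001, g=9.8):
    """Return (range, flight time) for a projectile launched from the ground.

    drag is a quadratic drag constant: F_drag = -drag * |v| * v.
    """
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    t = 0.0
    while True:
        speed = math.hypot(vx, vy)
        ax = -(drag / mass) * speed * vx          # drag opposes motion
        ay = -g - (drag / mass) * speed * vy      # gravity plus drag
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
        if y <= 0.0:                              # back at ground level
            return x, t

range_drag, _ = simulate(30.0, 45.0)
range_vacuum, _ = simulate(30.0, 45.0, drag=0.0)
# With drag=0 the result approaches the textbook value v0^2 sin(2*theta)/g;
# with drag turned on, the projectile falls noticeably short of it.
```

A student can vary `drag` or `angle_deg` and watch the numbers move, which is exactly the kind of exploration Allain describes.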

Allain is exactly right. You don't have to be an artist to draw simple diagrams or a mathematician to evaluate an integral. All students accept, if grudgingly, that people might reasonably expect them to present an experiment orally in class.

Students don't have to be "writers", either, in order for teachers or employers to reasonably expect them to write an essay about physics or computer science. Even so, you might be surprised how many physics and computer science students complain if you ask them to write an essay. And if you dare expect them to spell words correctly, or to write prose somewhat more organized than Faulkner's stream of consciousness -- stand back.

(Rant aside, I have been quite lucky this May term. I've had my students write something for me every night, whether a review of something they've read or a reflection on the practices they are struggling to learn. There's been nary a complaint, and most of their writings have been organized, clear, and enjoyable to read.)

You don't have to be a physicist to like physics. I hope that most educated adults in the 21st century understand how the physical world works and appreciate the basic mechanisms of the universe. I dare to hope that many of them are curious enough to want to learn more.

You also don't have to be a computer programmer, let alone a computer scientist, to write a little code. Programs are simply another medium through which we can create and express ideas from across the spectrum of human thought. Hurray to Allain for being in the vanguard.


Note. Long-time readers of this blog may recognize the ideas underlying Allain's approach to teaching introductory physics. He uses Matter and Interactions, a textbook and set of supporting materials created by Ruth Chabay and Bruce Sherwood. Six years ago, I wrote about some of Chabay's and Sherwood's ideas in an entry on creating a dialogue between science and CS and mentioned the textbook project in an entry on scientists who program. These entries were part of a report on my experiences attending SECANT, a 2007 NSF workshop on the intersection of science, computation, and education.

I'm glad to see that the Matter and Interactions project has come to fruition and begun to seep into university physics instruction. It sounds like a neat way to learn physics. It's also a nice way to pick up a little "stealth programming" along the way. I can imagine a few students creating VPython simulations and thinking, "Hey, I'd like to learn more about this programming thing..."

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 23, 2014 12:27 PM

Words Matter, Even in Code

Jim Weirich on dealing with failure in Ruby, via Avdi Grimm's blog:

(An aside, because I use exceptions to indicate failures, I almost always use the "fail" keyword rather than the "raise" keyword in Ruby. Fail and raise are synonyms so there is no difference except that "fail" more clearly communicates that the method has failed. The only time I use "raise" is when I am catching an exception and re-raising it, because here I'm *not* failing, but explicitly and purposefully raising an exception. This is a stylistic issue I follow, but I doubt many other people do).

Words matter: the right words, used at the right times. Weirich always cared about words, and it showed both in his code and in his teaching and writing.

The students in my agile class got to see my obsession with word choice and phrasing in class yesterday, when we worked through the story cards they had written for their project. I asked questions about many of their stories, trying to help them express what they intended as clearly as possible. Occasionally, I asked, "How will you write the test for this?" In their proposed test we found what they really meant and were able to rephrase the story.
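To illustrate how a test can pin down a story's meaning, consider a hypothetical card that reads "a player earns 10 points for clearing a row". The sketch below, with class and method names invented for the example rather than taken from my students' project, shows how writing the test forces vague phrasing into something precise:

```python
import unittest

class Game:
    """A toy game model, just enough to express the scoring story."""
    POINTS_PER_ROW = 10

    def __init__(self):
        self.score = 0

    def clear_rows(self, count):
        self.score += count * self.POINTS_PER_ROW

class TestScoring(unittest.TestCase):
    def test_clearing_one_row_scores_ten_points(self):
        game = Game()
        game.clear_rows(1)
        self.assertEqual(game.score, 10)

    def test_clearing_two_rows_scores_twenty_points(self):
        # Writing this test surfaces a question the card left open:
        # do two rows score exactly 20, or is there a bonus for multiples?
        game = Game()
        game.clear_rows(2)
        self.assertEqual(game.score, 20)
```

Deciding what the second test should assert is exactly the conversation we had in class: the proposed test reveals what the story really means.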

Writing stories is hard, even for experienced programmers. My students are doing this for the first time, and they seemed to appreciate the need to spend time thinking about their stories and looking for ways to make them better. Of course, we've already discussed the importance of good names, and they've already experienced the way words matter in their own code.

Whenever I hear someone say that oral and written communication skills aren't all that important for becoming a good programmer, I try to help them see that they are, and why. Almost always, I find that the person is not a programmer and simply assumes that we techies spend all our time living inside mathematical computer languages. If they had ever written much software, they'd already know.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 19, 2014 4:09 PM

Becoming More Agile in Class, No. 2

After spending a couple of days becoming familiar with pair programming and unit tests, for Day 4 we moved on to the next step: refactoring. I had the students study the "before" code base from Martin Fowler's book, Refactoring, to identify several ways they thought we could improve it. Then they worked in pairs to implement their ideas. The code itself is pretty simple -- a small part of the information system for a movie rental store -- which let the students focus on practicing with their tools, running tests, and keeping the code base "green".

We all know Fowler's canonical definition of refactoring:

Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure.

... but it's easy to forget that refactoring really is about design. Programmers with limited experience in Java or OOP can bring only so much to the conversation about improving an OO program written in Java. We can refactor confidently and well only if we have a target in mind, one we understand and can envision in our code. Further, creating a good software design requires taste, and taste generally comes from experience.
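A miniature example may help. The sketch below is a Python stand-in, not Fowler's actual code: the same statement-printing behavior before and after an extract-method refactoring, with an assertion standing guard that the external behavior has not changed.

```python
# Before: one function mixes the pricing rule with statement formatting.
def statement_before(rentals):
    total = 0.0
    lines = []
    for title, days in rentals:
        amount = 2.0 + max(0, days - 2) * 1.5   # pricing buried in the loop
        total += amount
        lines.append(f"  {title}: {amount:.1f}")
    return "\n".join(lines) + f"\nTotal: {total:.1f}"

# After: the pricing rule has a name of its own, so the design intent is
# visible and the rule can change in one place.
def price_for(days):
    return 2.0 + max(0, days - 2) * 1.5

def statement(rentals):
    lines = [f"  {title}: {price_for(days):.1f}" for title, days in rentals]
    total = sum(price_for(days) for _, days in rentals)
    return "\n".join(lines) + f"\nTotal: {total:.1f}"

# The refactoring does not alter external behavior:
rentals = [("Casablanca", 3), ("Up", 1)]
assert statement(rentals) == statement_before(rentals)
```

Even in a toy like this, the "after" version expresses a design decision that the "before" version hides.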

I noticed this lack of experience manifesting itself in the way my students tried to decompose the work of a refactoring into small, safe steps. When we struggle with decomposing a refactoring, we naturally struggle with choosing the next step to work on. Kent Beck calls this the challenge of succession. Ordering the steps of a refactoring is a more subtle challenge than many programmers realize at first.

This session reminded me why I like to teach design and refactoring in parallel: students come to appreciate new code smells and quickly learn how to refactor code into a better state. This way, programming skill grows alongside design skill.

On Day 5, we tried to put the skills from the three previous days all together, using an XP-style test-code-refactor-repeat cycle to implement a bit of code. Students worked on either the Checkout kata from Dave Thomas or a tic-tac-toe game based on a write-up by Gojko Adzic. No, these are not the most exciting programs to build, but as I told the class, this makes it possible for them to focus on the XP practices and habits of mind -- small steps, unit tests, and refactoring -- without having to work too hard to grok the domain.
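For readers who don't know it, the Checkout kata asks for a supermarket checkout that totals scanned items, applying bulk discounts such as "three A's for 130". Here is a compressed Python sketch of where a session might end up; in class, of course, the point is to grow this one failing test at a time:

```python
# Unit prices and "n for a bundle price" discounts, using the values
# that commonly appear in write-ups of the kata.
PRICES = {"A": 50, "B": 30, "C": 20, "D": 15}
DISCOUNTS = {"A": (3, 130), "B": (2, 45)}   # item -> (quantity, bundle price)

def total(items):
    """Total price for a string of scanned items, e.g. "AAAB"."""
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    result = 0
    for item, n in counts.items():
        if item in DISCOUNTS:
            qty, bundle = DISCOUNTS[item]
            result += (n // qty) * bundle + (n % qty) * PRICES[item]
        else:
            result += n * PRICES[item]
    return result

assert total("AAAB") == 160   # 130 for the three A's, 30 for the B
```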

My initial impression as the students worked was that the exercise wasn't going as well as I had hoped: the step size was too big, the tests were too intrusive, and the refactoring was almost non-existent. Afterwards, though, I realized that programmers learning such foreign new habits must go through this phase. The best I can do is inject an occasional suggestion or question, hoping that it helps speed them along the curve.

This morning, I decided to have each student pair up with someone who had worked on the other task last time, flip a coin, and work on one of the same two tasks. This way, each pair had someone working on the same problem again and someone working on a new problem. I instructed them to start from scratch -- new code, new thoughts -- and have the person new to the task write the first test.

The goal was to create an asymmetry within each pair. Working on the same piece again would be valuable for the partner doing so, in the way playing finger exercises or etudes is valuable for a musician. At the same time, the other partner would see a new problem, bringing fresh eyes and thoughts to the exercise. This approach seems like a good one, as it varies the experience for both members of the pair. I know how important varying the environment can be for student learning, but I sometimes forget to do that often enough in class.

The results seemed so much better today. Students commented that they made better progress this time around, not because one of them had worked on the same problem last time, but because they were feeling more comfortable with the XP practices. One student said something to the effect,

Last time, we were trying to work on the simplest or smallest piece of code we could write. This time, we were trying to work on the smallest piece of functionality we could add to the program.

That's a solid insight from an undergrad, even one with a couple of years of programming experience.

I also liked the conversation I was hearing among the pairs. They asked each other, "Should we do this feature next, or that one?" and said, "I'm not sure how we can test this" -- and then talked it over before proceeding. One pair had a wider disparity in OO experience, so the more experienced programmer was thinking out loud as he drove, taking into account comments from his partner as he coded.

This is a good sign. I'm under no illusion that they have all suddenly mastered ordering features, writing unit tests, or refactoring. We'll hit bumps over the next three weeks. But they all seem to be pretty comfortable with working together and collaborating on code. That's an essential skill on an agile team.

Next up: the Planning Game for a project that we'll work on for the rest of the class. They chose their own system to build, a cool little Android game app. That will change the dynamic a bit for customer collaboration and story writing, but I think that the increased desire to see a finished product will increase their motivation to master the skills and practices. My job as part-time customer, part-time coach will require extra vigilance to keep them on track.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 14, 2014 4:52 PM

Becoming More Agile in Class

Days 2 and 3 of my Agile Software Development May term course are now in the books. This year, I decided to move as quickly as we could in the lab. Yesterday, the students did their first pair-programming session, working for a little over an hour on one of the industry standard exercises, Conway's Game of Life. Today, they did their first pair programming with unit tests, using Bill Wake's Test-First Challenge to implement the beginnings of a simple data model for spreadsheets.
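The core of the Game of Life exercise is small enough to show here. This Python sketch is mine, offered for reference, not any pair's session code; it represents the board as a set of live cells and computes one generation:

```python
from collections import Counter

def next_generation(live):
    """live: a set of (x, y) cells. Returns the next generation as a set."""
    # Count how many live neighbors each cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn with exactly 3 neighbors,
    # or with 2 neighbors if it is already alive.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# The classic "blinker" oscillates between a row and a column.
blinker = {(0, 0), (1, 0), (2, 0)}
assert next_generation(blinker) == {(1, -1), (1, 0), (1, 1)}
```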

I always enjoy watching students write code and interacting with them while they do it. The thing that jumped out to me yesterday was just how much code some students write before they ever think about compiling it, let alone testing it. Another was how some students manage to get through a year of programming-heavy CS courses without mastering their basic tools: text editor, compiler, and language. It's hard to work confidently when your tools feel uncomfortable in your hands.

There's not much I can do to help students develop greater facility with their tools than give them lots of practice, and we will do that. However, writing too much code before testing even its syntactic correctness is a matter of mindset and habit. So I opened today's session with a brief discussion, and then showed them what I meant in the form of a short bit of code I wrote yesterday while watching them. Then I turned them loose with Wake's spreadsheet tests and encouragement to help each other write simple code, compile frequently even with short snippets, and run the tests as often as their code compiles.

Today, we had an odd number of students in class, something that's likely to be our standard condition this term, so I paired with one of the students on the spreadsheet. He wanted to work in Haskell, and I was game. I refreshed my Haskell memories a bit and even contributed some helpful bits of code, in addition to meta-contributions on XP style.

The student is relatively new to the language, so he's still developing the Haskell in his mind. There were times we struggled because we were thinking of the problem in a stateful way. As you surely know, that's not the best way to work in Haskell. Our solutions were not always elegant, but we did our best to get into the rhythm of writing tests, writing code, and running the tests.

As the period was coming to an end, our code had just passed a test that had been challenging us. Almost simultaneously, a student in another pair thrust his arms in the air as his pair's code passed a challenging test, too, much deeper in the suite. We all decided to declare victory and move on. We'll all get better with practice.

Next up: refactoring, and tools to support it and automated testing.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 12, 2014 5:01 PM

Teaching a Compressed Class

May term started today, so my agile course is off the ground. We will meet for 130 minutes every morning through June 6, excepting only Memorial Day. That's a lot of time spent together in a short period of time.

As I told the students today, each class is almost a week's worth of class in a regular semester. This means committing a fair amount of time out of class every day, on the order of 5-7 hours. There isn't a lot of time for our brains to percolate on the course content. We'll be moving steadily for four weeks.

This makes May term unsuitable, in my mind at least, for a number of courses. I would never teach CS 1 in May term. Students are brand new to the discipline, to programming, and usually to their first programming language. They need time for their brains to percolate. I don't think I'd want to teach upper-division CS courses with a lot of content in May term, either. Our brains can absorb only so much information in a short amount of time, so letting it sink in more slowly, helped by practice and repetition, seems best.

My agile course is, on the other hand, almost custom made for a compressed semester. There isn't a lot of essential "content". The idea is straightforward. I don't expect students to memorize lists of practices, or the rules of tools. I expect them to do the practices. Doing them daily, in extended chunks of time, with immediate feedback, is much better than taking a day off between practice sessions.

Our goal is, in part, to learn new habits and then reflect on how well they fit, on where they might help us most and where they might get in the way. We'll have better success learning new habits in the compressed term than we would with breaks. And, as much as I want students to work daily during a fifteen-week semester to build habits, it usually just doesn't happen. Even when the students buy in and intend to work that way, life's other demands get in the way. Failing with good intentions is still failing, and sometimes feels worse than failing without them.

So we begin. Tomorrow we start working on our first practice, a new way of working with skills to be learned through repetition every day the rest of the semester. Wish us luck.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 09, 2014 4:11 PM


Spring semester ends today. May term begins Monday. I haven't taught during the summer since 2010, when I offered a course on agile software development. I'm reprising that course this month, with nine hardy souls signed on for the mission. That means no break for now, just a new start. I like those.

I'm sure I could blog for hours on the thoughts running through my head for the course. They go beyond the readings we did last time and the project we built, though all that is in the mix, too.

For now, though, three passages that made the highlights of my recent reading. All fit nicely with the theme of college days and transition.


First, this reminder from John Graham, a "self-made merchant" circa 1900, in a letter to his son at college.

Adam invented all the different ways in which a young man can make a fool of himself, and the college yell at the end of them is just a frill that doesn't change essentials.

College is a place all its own, but it's just a place. In many ways, it's just the place where young people spend a few years while they are young.


Next, a writer tells a story of studying with Annie Dillard in college. During their last session together, she told the class:

If I've done my job, you won't be happy with anything you write for the next 10 years. It's not because you won't be writing well, but because I've raised your standards for yourself.

Whatever "content" we teach our students, raising their standards and goals is sometimes the most important thing we do. "Don't compare yourselves to each other," she says. Compare yourselves to the best writers. "Shoot there." This advice works just as well for our students, whether they are becoming software developers or computer scientists. (Most of our students end up being a little bit of both.)

It's better to aim at the standard set by Ward Cunningham or Alan Kay than at the best we can imagine ourselves doing right now.


Now that I think about it, this last one has nothing to do with college or transitions. But it made me laugh, and after a long academic year, with no break imminent, a good laugh is worth something.

What do you call a rigorous demonstration that a statement is true?
  1. If "proof", then you're a mathematician.
  2. If "experiment", then you're a physicist.
  3. If you have no word for this concept, then you're an economist.

This is the first of several items in The Mathematical Dialect Quiz at Math with Bad Drawings. It adds a couple of new twists to the tongue-in-cheek differences among mathematicians, computer scientists, and engineers. With bad drawings.

Back to work.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 05, 2014 4:35 PM

Motivated by Teaching Undergrads

Recently, a gentleman named Seth Roberts passed away. I didn't know Roberts and was not familiar with his work. However, several people I respect commented on his life and career, so I took a look at one colleague's reminiscence. Roberts was an interesting fellow who didn't do things the usual way for a research academic. This passage stood out:

Seth's academic career was unusual. He shot through college and graduate school to a tenure-track job at a top university, then continued to do publication-quality research for several years until receiving tenure. At that point he was not a superstar but I think he was still considered a respected member of the mainstream academic community. But during the years that followed, Seth lost interest in that thread of research (you can see this by looking at the dates of most of his highly-cited papers). He told me once that his shift was motivated by teaching introductory undergraduate psychology: the students, he said, were interested in things that would affect their lives, and, compared to that, the kind of research that leads to a productive academic career did not seem so appealing.

That last sentence explains, I think, why so many computer science faculty at schools that are not research-intensive end up falling away from traditional research and publishing. When you come into contact with a lot of undergrads, you may well find yourself caring more deeply about things that will affect their lives in a more direct way. Pushing deeper down a narrow theoretical path, or developing a novel framework for file system management that most people will never use, may not seem like the best way to use your time.

My interests have certainly shifted over the years. I found myself interested in software development, in particular in tools and practices that students can use to make software more reliably, and in teaching practices that help students learn more effectively. Fortunately, I've always loved programming qua programming, and this has allowed me to teach different programming styles with an eye on how learning them will help my students become better programmers. Heck, I was even able to stick with it long enough that functional programming became popular in industry! I've also been lucky that my interest in languages and compilers has been of interest to students and employers over the last few years.

In any event, I can certainly understand how Roberts diverged from the ordained path and turned his interest to other things. One challenge in leaving the ordained path is to retain the mindset of a scientist, seeking out opportunities to evaluate ideas and to disseminate the ones that appear to hold up. You don't need to publish in the best journals to disseminate good ideas widely. That may not even be the best route.

Another challenge is to find a community of like-minded people in which to work. An open, inquisitive community is a place to find new ideas, a place to try ideas out before investing too much in a doomed one, and a place to find the colleagues most of us need to stay sane while exploring what interests us. The software and CS worlds have helped create the technology that makes it possible to grow such communities in new ways, and our own technology now supports some amazing communities of software and CS people. It is a good time to be an academic or developer.

I've enjoyed reading about Roberts' career and learning about what seems to have been one of academia's unique individuals. And I certainly understand how teaching introductory undergrads might motivate a different worldview for an academic. It's good to be reminded that it's okay to care about the things that will affect the lives of our students now rather than later.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

March 29, 2014 10:06 AM


That is the advice I find myself giving to students again and again this semester: Sooner.

Review the material we cover in class sooner.

Ask questions sooner.

Think about the homework problems sooner.

Clarify the requirements sooner.

Write code sooner.

Test your code sooner.

Submit a working version of your homework sooner. You can submit a more complete version later.

A lot of this advice boils down to the more general Get feedback sooner. In many ways, it is a dual of the advice, Take small steps. If you take small steps, you can ask, clarify, write, and test sooner. One of the most reliable ways to do those things sooner is to take small steps.

If you are struggling to get things done, give sooner a try. Rather than fail in a familiar way, you might succeed in an unfamiliar way. When you do, you probably won't want to go back to the old way again.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 12, 2014 3:55 PM

Not Content With Content

Last week, the Chronicle of Higher Ed ran an article on a new joint major at Stanford combining computer science and the humanities.

[Students] might compose music or write a short story and translate those works, through code, into something they can share on the web.

"For students it seems perfectly natural to have an interest in coding," [the program's director] said. "In one sense these fields might feel like they're far apart, but they're getting closer and closer."

The program works in both directions, by also engaging CS students in the societal issues created by ubiquitous networks and computing power.

We are doing something similar at my university. A few years ago, several departments began to collaborate on a multidisciplinary program called Interactive Digital Studies which went live in 2012. In the IDS program, students complete a common core of courses from the Communication Studies department and then take "bundles" of coursework involving digital technology from at least two different disciplines. These areas of emphasis enable students to explore the interaction of computing with various topics in media, the humanities, and culture.

Like Stanford's new major, most of the coursework is designed to work at the intersection of disciplines, rather than pursuing disciplines independently, "in parallel".

The initial version of the computation bundle consists of an odd mix of application tools and opportunities to write programs. Now that the program is in place, we are finding that students and faculty alike desire more depth of understanding about programming and development. We are in the process of re-designing the bundle to prepare students to work in a world where so many ideas become web sites or apps, and in which data analytics plays an important role in understanding what people do.

Both our IDS program and Stanford's new major focus on something that we are seeing increasingly at universities these days: the intersections of digital technology and other disciplines, in particular the humanities. Computational tools make it possible for everyone to create more kinds of things, but only if people learn how to use new tools and think about their work in new ways.

Consider this passage by Jim O'Loughlin, a UNI English professor, from a recent position statement on the "digital turn" of the humanities:

We are increasingly unlikely to find writers who only provide content when the tools for photography, videography and digital design can all be found on our laptops or even on our phones. It is not simply that writers will need to do more. Writers will want to do more, because with a modest amount of effort they can be their own designers, photographers, publishers or even programmers.

Writers don't have to settle for producing "content" and then relying heavily on others to help bring the content to an audience. New tools enable writers to take greater control of putting their ideas before an audience. But...

... only if we [writers] are willing to think seriously not only about our ideas but about what tools we can use to bring our ideas to an audience.

More tools are within the reach of more people now than ever before. Computing makes that possible, not only for writers, but also for musicians and teachers and social scientists.

Going further, computer programming makes it possible to modify existing tools and to create new tools when the old ones are not sufficient. Writers, musicians, teachers, and social scientists may not want to program at that level, but they can participate in the process.

The critical link is preparation. This digital turn empowers only those who are prepared to think in new ways and to wield a new set of tools. Programs like our IDS major and Stanford's new joint major are among the many efforts hoping to spread the opportunities available now to a larger and more diverse set of people.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

March 11, 2014 4:52 PM

Change The Battle From Arguments To Tests

In his recent article on the future of the news business, Marc Andreessen has a great passage in his section on ways for the journalism industry to move forward:

Experimentation: You may not have all the right answers up front, but running many experiments changes the battle for the right way forward from arguments to tests. You get data, which leads to correctness and ultimately finding the right answers.

I love that clause: "running many experiments changes the battle for the right way forward from arguments to tests".

While programming, it's easy to get caught up in what we know about the code we have just written and assume that this somehow empowers us to declare sweeping truths about what to do next.

When students are first learning to program, they often fall into this trap -- despite the fact that they don't know much at all. From other courses, though, they are used to thinking for a bit, drawing some conclusions, and then expressing strongly-held opinions. Why not do it with their code, too?

No matter who we are, whenever we do this, sometimes we are right, and sometimes, we are wrong. Why leave it to chance? Run a simple little experiment. Write a snippet of code that implements our idea, and run it. See what happens.

Programs let us test our ideas, even the ideas we have about the program we are writing. Why settle for abstract assertions when we can do better? In the end, even well-reasoned assertions are so much hot air. I learned this from Ward Cunningham: It's all talk until the tests run.

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

March 08, 2014 10:18 AM

Sometimes a Fantasy

This week I saw a link to The Turing School of Software & Design, "a seven-month, full-time program for people who want to become professional developers". It reminded me of Neumont University, a ten-year-old school that offers a B.S. degree program in computer science that students can complete in two and a half years.

While riding the bike, I occasionally fantasize about doing something like this. With the economics of universities changing so quickly [ 1 | 2 ], there is an opportunity for a new kind of higher education. And there's something appealing about being able to work closely with a cadre of motivated students on the full spectrum of computer science and software development.

This could be an accelerated form of traditional CS instruction, without the distractions of other things, or it could be something different. Traditional university courses are pretty confining. "This course is about algorithms. That one is about programming languages." It would be fun to run a studio in which students serve as apprentices making real stuff, all of us learning as we go along.

A few years ago, one of our ChiliPLoP hot topic groups conducted a greenfield thought experiment to design an undergrad CS program outside of the constraints of any existing university structure. Student advancement was based on demonstrating professional competencies, not completing packaged courses. It was such an appealing idea! Of course, there was a lot of hard work to be done working out the details.

My view of university is still romantic, though. I like the idea of students engaging the great ideas of humanity that lie outside their major. These days, I think it's conceivable to include the humanities and other disciplines in a new kind of CS education. In a recent blog entry, Hollis Robbins floats the idea of Home College for the first year of a liberal arts education. The premise is that there are "thousands of qualified, trained, energetic, and underemployed Ph.D.s [...] struggling to find stable teaching jobs". Hiring a well-rounded tutor could be a lot less expensive than a year at a private college, and more lucrative for the tutor than adjuncting.

Maybe a new educational venture could offer more than targeted professional development in computing or software. Include a couple of humanities profs, maybe a social scientist, and it could offer a more complete undergraduate education -- one that is economical both in time and money.

But the core of my dream is going broad and deep in CS without the baggage of a university. Sometimes a fantasy is all you need. Other times...

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

March 07, 2014 2:24 PM

Take Small Steps

If a CS major learns only one habit of professional practice in four years, it should be:

Take small steps.

A corollary:

If things aren't working, take smaller steps.

I once heard Kent Beck say something similar, in the context of TDD and XP. When my colleague Mark Jacobson works with students who are struggling, he uses a similar mantra: Solve a simpler problem. As Dr. Nick notes, students and professionals alike should scale the step size according to their level of knowledge or their confidence about the problem.

When I tweeted these thoughts yesterday, two pieces of related advice came in:

  • Slow down. -- Big steps are usually a sign of trying to hurry. Beginners are especially prone to this.

  • Lemma: Keep moving. -- Small steps keep us moving more reliably. We can always fool ourselves into believing that the next big step is all we need...

Of course, I've always been a fan of baby steps and of drawing unusual connections to agile software development. They apply quite nicely to learners in many settings.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 01, 2014 11:35 AM

A Few Old Passages

I was looking over a couple of files of old notes and found several quotes that I still like, usually from articles I enjoyed as well. They haven't found their way into a blog entry yet, but they deserve to see the light of day.

Evidence, Please!

From a short note on the tendency even among scientists to believe unsubstantiated claims, both in and out of the professional context:

It's hard work, but I suspect the real challenge will lie in persuading working programmers to say "evidence, please" more often.

More programmers and computer scientists are trying to collect and understand data these days, but I'm not sure we've made much headway in getting programmers to ask for evidence.

Sometimes, Code Before Math

From a discussion of The Expectation Maximization Algorithm:

The code is a lot simpler to understand than the math, I think.

I often understand the language of code more quickly than the language of math. Reading, or even writing, a program sometimes helps me understand a new idea better than reading the math. Theory is, however, great for helping me to pin down what I have learned more formally.

Grin, Wave, Nod

From Iteration Inside and Out, a review of the many ways we loop over stuff in programs:

Right now, the Rubyists are grinning, the Smalltalkers are furiously waving their hands in the air to get the teacher's attention, and the Lispers are just nodding smugly in the back row (all as usual).

As a Big Fan of all three languages, I am occasionally conflicted. Grin? Wave? Nod? Look like the court jester by doing all three simultaneously?

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

February 21, 2014 3:35 PM

Sticking with a Good Idea

My algorithms students and I recently considered the classic problem of finding the closest pair of points in a set. Many of them were able to produce a typical brute-force approach, such as:

    minimum ← ∞
    for i ← 1 to n do
        for j ← i+1 to n do
            distance ← sqrt((x[i] - x[j])² + (y[i] - y[j])²)
            if distance < minimum then
               minimum ← distance
               first   ← i
               second  ← j
    return (first, second)
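Translated into runnable Python (the function and variable names here are mine, not from any particular text), the same scan looks like this:

```python
import math

def closest_pair_brute_force(points):
    """Return the closest pair from a list of (x, y) tuples, O(n^2)."""
    minimum = math.inf
    best = None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            distance = math.hypot(x1 - x2, y1 - y2)
            if distance < minimum:
                minimum = distance
                best = (points[i], points[j])
    return best
```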

Alas, this is an O(n²) process, so we considered whether we might do better with a divide-and-conquer approach. It did not look promising, though. Divide-and-conquer doesn't let us solve the sub-problems independently. What if the closest pair straddles two partitions?

This is a common theme in computing, and problem solving more generally. We try a technique only to find that it doesn't quite work. Something doesn't fit, or a feature of the domain violates a requirement of the technique. It's tempting in such cases to give up and settle for something less.

Experienced problem solvers know not to give up too quickly. Many of the great advances in computing came under conditions just like this. Consider Leonard Kleinrock and the theory of packet switching.

In a Computing Conversations podcast published last year, Kleinrock talks about his Ph.D. research. He was working on the problem of how to support a lot of bursty network traffic on a shared connection. (You can read a summary of the podcast in an IEEE Computer column also published last year.)

His wonderful idea: apply the technique of time sharing from multi-user operating systems. The system could break all messages into "packets" of a fixed size, let messages take turns on the shared line, then reassemble each message on the receiving end. This would give every message a chance to move without making short messages wait too long behind large ones.

Thus was born the idea of packet switching. But there was a problem. Kleinrock says:

I set up this mathematical model and found it was analytically intractable. I had two choices: give up and find another problem to work on, or make an assumption that would allow me to move forward. So I introduced a mathematical assumption that cracked the problem wide open.

His "independence assumption" made it possible for him to complete his analysis and optimize the design of a packet-switching network. But an important question remained: Was his simplifying assumption too big a cheat? Did it skew the theoretical results in such a way that his model was no longer a reasonable approximation of how networks would behave in the real world?

Again, Kleinrock didn't give up. He wrote a program instead.

I had to write a program to simulate these networks with and without the assumption. ... I simulated many networks on the TX-2 computer at Lincoln Laboratories. I spent four months writing the simulation program. It was a 2,500-line assembly language program, and I wrote it all before debugging a single line of it. I knew if I didn't get that simulation right, I wouldn't get my dissertation.

High-stakes programming! In the end, Kleinrock was able to demonstrate that his analytical model was sufficiently close to real-world behavior that his design would work. Every one of us reaps the benefit of his persistence every day.

Sometimes, a good idea poses obstacles of its own. We should not let those obstacles beat us without a fight. Often, we just have to find a way to make it work.

This lesson applies quite nicely to using divide-and-conquer on the closest pairs problem. In this case, we don't make a simplifying assumption; we solve the sub-problem created by our approach:

After finding a candidate for the closest pair in each half, we check to see if there is a closer pair straddling our partitions. The distance between the candidate points constrains the area we have to consider quite a bit, which makes the post-processing step feasible. The result is an O(n log n) algorithm that improves significantly on brute force.
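One way to sketch the whole algorithm in Python -- assuming distinct points given as hashable (x, y) tuples; all names here are mine:

```python
import math

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def _brute(pts):
    # Base case: check all pairs directly.
    best = (math.inf, None)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = _dist(pts[i], pts[j])
            if d < best[0]:
                best = (d, (pts[i], pts[j]))
    return best

def _closest(px, py):
    n = len(px)
    if n <= 3:
        return _brute(px)
    mid = n // 2
    left = set(px[:mid])
    best = min(_closest(px[:mid], [p for p in py if p in left]),
               _closest(px[mid:], [p for p in py if p not in left]))
    # Post-processing: look for a closer pair straddling the dividing line.
    mid_x = px[mid][0]
    strip = [p for p in py if abs(p[0] - mid_x) < best[0]]
    for i in range(len(strip)):
        # Only a constant number of y-sorted strip neighbors can be closer.
        for j in range(i + 1, min(i + 8, len(strip))):
            d = _dist(strip[i], strip[j])
            if d < best[0]:
                best = (d, (strip[i], strip[j]))
    return best

def closest_pair(points):
    """Return (distance, (p, q)) for the closest pair, O(n log n) overall."""
    px = sorted(points)                      # sorted by x
    py = sorted(points, key=lambda p: p[1])  # sorted by y
    return _closest(px, py)
```

The strip check is the "find a way to make it work" step: without it, divide-and-conquer would miss pairs that cross the partition boundary.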

This algorithm, like packet switching, comes from sticking with a good idea and finding a way to make it work. This is a lesson every computer science student and novice programmer needs to learn.

There is a complementary lesson to be learned, of course: knowing when to give up on an idea and move on to something else. Experience helps us tell the two situations apart, though never with perfect accuracy. Sometimes, we just have to follow an idea long enough to know when it's time to move on.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 19, 2014 4:12 PM

Teaching for the Perplexed and the Traumatized

On the need for empathy when writing about math for the perplexed and the traumatized, Steven Strogatz says:

You have to help them love the questions.

Teachers learn this eventually. If students love the questions, they will do an amazing amount of work searching for answers.

Strogatz is writing about writing, but everything he says applies to teaching as well, especially teaching undergraduates and non-majors. If you teach only grad courses in a specialty area, you may be able to rely on the students to provide their own curiosity and energy. Otherwise, having empathy, making connections, and providing Aha! moments are a big part of being successful in the classroom. Stories trump formal notation.

This semester, I've been trying a particular angle on achieving this trifecta of teaching goodness: I try to open every class session with a game or puzzle that the students might care about. From there, we delve into the details of algorithms and theoretical analysis. I plan to write more on this soon.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

February 16, 2014 10:48 AM

Experience Happens When You Keep Showing Up

You know what they say about good design coming from experience, and experience coming from bad design? That phenomenon is true of most things non-trivial. Here's an example from men's college basketball.

The University of Florida has a veteran team. The University of Kentucky has a young team. Florida's players are very good, but not generally considered to be in the same class as Kentucky's highly-regarded players. Yesterday, the two teams played a close game on Kentucky's home floor.

Once they fell behind by five with less than two minutes remaining, Kentucky players panicked. Florida players didn't. Why not? "Well, we have a veteran group here that's panicked before -- that's been in this situation and not handled it well," [Patric] Young said.

How did Florida's players maintain their composure at the end of a tight game on the road against another good team? They had been in that same situation three times before, and failed. They didn't panic this time in large part because they had panicked before and learned from those experiences.

Kentucky's starters have played a total of 124 college games. Florida's four seniors have combined to play 491. That's a lot of experience -- a lot of opportunities to panic, to guess wrong, to underestimate a situation, or otherwise to come up short. And a lot of opportunities to learn.

The young players at Kentucky hurt today. As the author of the linked game report notes, Florida's players have hurt like that before, for coming up short in much the same way, "and they used that pain to get better".

It turns out that composure comes from experience, and experience comes from lack of composure.

As a teacher, I try to convince students not to shy away from the error messages their compiler gives them, or from the convoluted code they eventually sneak past it. Those are the experiences they'll eventually look back to when they are capable, confident programmers. They just need the opportunity to learn.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 14, 2014 3:07 PM

Do Things That Help You Become Less Wrong

My students and I debriefed a programming assignment in class yesterday. In the middle of class, I said, "Now for a big question: How do you know your code is correct?"

There were a lot of knowing smiles and a lot of nervous laughter. Most of them don't.

Sure, they ran a few test cases, but after making additions and changes to the code, some were just happy that it still ran. The output looked reasonable, so it must be done. I suggested that they might want to think more about testing.

This morning I read a great quote from Nathan Marz that I will share with my students:

Feedback is everything. Most of the time you're wrong, and feedback is the only way to realize your mistakes and help you become less wrong. This applies to everything.

Most of the time you're wrong. Do things that help you become less wrong. Getting feedback, early and often, is one of the best ways to do this.

A comment by a student earlier in the period foreshadowed our discussion of testing, which made me feel even better. In response to the retrospective question, "What design or programming choices went well for you?", he answered unit tests.

That set me up quite nicely to segue from manual testing into automated testing. If you aren't testing your code early and often, then manual testing is a huge improvement. But you can do even better by pushing those test cases into a form that can be executed quickly and easily, with the code doing the tedious work of verifying the output.

My students are writing code in many different languages, so I showed them testing frameworks in Ruby, Java, and Python. The code looks simple, even with the boilerplate imposed by the frameworks.
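In Python, for example, a few unit tests for a small function (a hypothetical `median`, not from the assignment) fit in a handful of lines:

```python
import unittest

def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

class TestMedian(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)
```

Run the file with `python -m unittest`, and the framework does the tedious work of executing every test and reporting any failures.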

The big challenges in getting students to write unit tests are the same as for getting professionals to write them: lack of time, and misplaced confidence. I hope that a few of my students will see that the real time sink is debugging bad code and that a fear of changing code reflects a lack of confidence. The best way to be confident is to have evidence.

The student who extolled unit tests works in Racket and so has test cases in RackUnit. He set me up nicely for a future discussion, too, when he admitted out loud that he wrote his tests first. This time, it was I who smiled knowingly.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 13, 2014 4:25 PM

Politics in a CS Textbook

Earlier this week, I was looking for inspiration for an exam problem in my algorithms course. I started thumbing through a text I briefly considered considering for adoption this semester. (I ended up opting for no text without considering any of them very deeply.)

No Politics warning sign -- handmade

The first problem I read was written with a political spin, at the expense of one of the US political parties. I was aghast. I tweeted:

This textbook uses an end-of-chapter exercise to express an opinion about a political party. I will no longer consider it for adoption.

I don't care which party is made light of or demeaned. I'm no longer interested. I don't want a political point unrelated to my course to interfere with my students' learning -- or with the relationship we are building.

In general, I don't want students to think of me in a partisan way, whether the topic is politics, religion, or anything else outside the scope of computer science. It's not my job as a CS instructor to make my students uncomfortable.

That isn't to say that I want students to think of me as bland or one-dimensional. They know me to be a sports fan and a chess player. They know I like to read books and to ride my bike. I even let them know that I'm a Billy Joel fan. All of these give me color and, while they may disagree with my tastes, none of these are likely to create distance between us.

Nor do I want them to think I have no views on important public issues of the day. I have political opinions and actually like to discuss the state of the country and world. Those discussions simply don't belong in a course on algorithms or compiler construction. In the context of a course, politics and religion are two of many unnecessary distractions.

In the end, I did use the offensive problem, but only as inspiration. I wrote a more generic problem, the sort you expect to see in an algorithms course. It included all the relevant features of the original. My problem gave the students a chance to apply what they have learned in class without any noise. The only way this problem could offend students was by forcing them to demonstrate that they are not yet ready to solve such a problem. Alas, that is an offense that every exam risks giving.

... and yes, I still owe you a write-up on possible changes to the undergraduate algorithms canon, promised in the entry linked above. I have not forgotten! Designing a new course for spring semester is a time-constrained operation.

Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

February 12, 2014 11:49 AM

They Are Always Watching

Rands' Inbox Reboot didn't do much for me on the process and tool side of things, perhaps other than rousing a sense of familiarity. Been there. The part that stood out for me was when he talked about the ultimate effect of not being in control of the deluge:

As a leader, you define your reputation all the time. You'd like to think that you could choose the moments that define your reputation, but you don't. They are always watching and learning. They are always updating their model regarding who you are and how you lead with each observable action, no matter how large or small.

He was speaking of technical management, so I immediately thought about my time as department head and how true this passage is.

But it is also true -- crucially true -- of the relationship teachers have with their students. There are few individual moments that define how a teacher is seen by his or her students. Students are always watching and learning. They infer things about the discipline and about the teacher from every action, from every interaction. What they learn in all the little moments comes to define you in their minds.

Of course, this is even more true of parents and children. To the extent I have any regrets as a parent, it is that I sometimes overestimated the effect of the Big Moments and underestimated the effect of all those Little Moments, the ones that flow by without pomp or recognition. They probably shaped my daughters, and my daughters' perception of me, more than any particular action I took.

Start with a good set of values, then create a way of living and working that puts these values at the forefront of everything you do. Your colleagues, your students, and your children are always watching.

Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Personal, Teaching and Learning

February 07, 2014 3:25 PM

Technology and Change at the University, Circa 1984

cover of David Lodge's Small World, Wendy Edelson

As I mentioned last time, I am re-reading David Lodge's novel Small World. Early on in the story, archetypal academic star and English professor Morris Zapp delivers a eulogy to the research university of the past:

The day of the individual campus has passed. It belongs to an obsolete technology -- railways and the printing press. I mean just look at this campus -- it epitomizes the whole thing: the heavy industry of the mind.

... It's huge, heavy, monolithic. It weighs about a billion tons. You can feel the weight of those buildings, pressing down the earth. Look at the Library -- built like a huge warehouse. The whole place says, "We have learning stored here; if you want it, you have to come inside and get it." Well, that doesn't apply any more.

... Information is much more portable in the modern world than it used to be. So are people. Ergo, it's no longer necessary to hoard your information in one building, or keep your top scholars corralled in one campus.

Small World was published in 1984. The technologies that had revolutionized the universities of the day were the copy machine, universal access to direct-dial telephone, and easy access to jet travel. Researchers no longer needed to be together in one place all the time; they could collaborate at a distance, bridging the distances for short periods of collocation at academic conferences.

Then came the world wide web and ubiquitous access to the Internet. Telephone and Xerox machines were quickly overshadowed by a network of machines that could perform the functions of both phone and copier, and so much more. In particular, they freed information even further from being bound to place. Take out the dated references to Xerox, and most of what Zapp has to say about universities could be spoken today:

And you don't have to grub about in library stacks for data: any book or article that sounds interesting they have Xeroxed and read at home. Or on the plane going to the next conference. I work mostly at home or on planes these days. I seldom go into the university except to teach my courses.

Now, the web is beginning to unbundle even the teaching function from the physical plant of a university and the physical presence of a teacher. One of our professors used to routinely spend hours hanging out with students on Facebook and Google+, answering questions and sharing the sort of bonhomie that ordinarily happens after class in the hallways or the student union. -10 degrees outside? No problem.

Clay Shirky recently wrote that higher education's Golden Age has ended -- long ago, actually: "Then the 1970s happened." Most of Shirky's article deals with the way the economics of universities had changed. Technology is only a part of that picture. I like how Lodge's novel shows an awareness even in the early 1980s that technology was also changing the culture of scholarship and consequently scholarship's preeminent institution, the university.

Of course, back then old Morris Zapp still had to go to campus to teach his courses. Now we are all wondering how much longer that will be true for the majority of students and professors.


IMAGE. The cover from the first edition of David Lodge's Small World, by Wendy Edelson.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 02, 2014 5:19 PM

Things That Make Me Sigh

In a recent article unrelated to modern technology or the so-called STEM crisis, a journalist writes:

Apart from mathematics, which demands a high IQ, and science, which requires a distinct aptitude, the only thing that normal undergraduate schooling prepares a person for is... more schooling.


On the one hand, this seems to presume that one need neither a high IQ nor any particular aptitude to excel in any number of non-math and science disciplines.

On the other, it seems to say that if one does not have the requisite characteristics, which are limited to a lucky few, one need not bother with computer science, math or science. Best become a writer or go into public service, I guess.

I actually think that the author is being self-deprecating, at least in part, and that I'm probably reading too much into one sentence. It's really intended as a dismissive comment on our education system, the most effective outcome of which often seems to be students who are really good at school.

Unfortunately, the attitude expressed about math and science is far too prevalent, even in our universities. It demeans our non-scientists as well as our scientists and mathematicians. It also makes it even harder to convince younger students that, with a little work and practice, they can achieve satisfying lives and careers in technical disciplines.

Like I said, sigh.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 31, 2014 3:13 PM

"What Should It Be?"

When asked to design and implement a program, beginning programmers often aren't sure what data type or data structure to use for a particular value. Should they use an array or a list? Or they've decided to use a record but can't decide exactly what fields to include, or names to give them.

"What should it be?", they ask.

I often have a particular implementation in mind, based on what we've been studying or on my own experience as a programmer, but I prefer not to tell them what to do. This is a great opportunity for them to learn to think about design.

Instead, I ask questions. "What have you considered?" "Do you think one or the other is better?" "Why?"

We discuss how so often there is no "right" answer. There are merely trade-offs. They have to choose. This is a design decision.

But, in making this decision, there's another opportunity to learn something about design. They don't have to commit now and forever to an implementation before proceeding with the rest of their program. Because the rest of the program shouldn't know about their decision anyway!

They should make an object that encapsulates the choice. They are then able to start building the rest of the program without fear that it depends on the details of their design choice. The rest of the program will interact with the object in terms of what the object means in the program, not in terms of how it is implemented. Later, if they change their minds, they will be able to change the implementation details without disturbing the rest of the code.
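As a sketch of the idea (a hypothetical example, not tied to any particular assignment): a student torn between a list and some other container for a roster of students can hide the choice behind a small class.

```python
class Roster:
    """Encapsulates the collection of students. Callers never see how
    the names are stored, so the representation can change later."""

    def __init__(self):
        self._students = []    # today a list; could become a set or dict

    def add(self, name):
        if name not in self._students:
            self._students.append(name)

    def contains(self, name):
        return name in self._students

    def size(self):
        return len(self._students)
```

The rest of the program talks to a Roster in terms of what it means -- add, contains, size -- so swapping the list for a set later touches only this one class.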

Yes, this is basic stuff, but beginners often struggle with basic stuff. They've learned about ADTs or OOP, and they can talk abstractly about abstraction. But when it comes time to write code, indecision descends upon them. They are afraid of messing up.

If I can help allay their fears of proceeding, then I've contributed something to their work that day. I even suggest that writing the rest of the program might help them figure out which alternative is better. I like to listen to my code, even if that idea seems strange or absurd to them. Some day soon, it may not.

In any case, they have the beginnings of a program, and perhaps a better idea of what design is all about.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 27, 2014 11:39 AM

The Polymath as Intellectual Polygamist

Carl Djerassi, quoted in The Last Days of the Polymath:

Nowadays people [who] are called polymaths are dabblers -- are dabblers in many different areas. I aspire to be an intellectual polygamist. And I deliberately use that metaphor to provoke with its sexual allusion and to point out the real difference to me between polygamy and promiscuity.

On this view, a dilettante is merely promiscuous, making no real commitment to any love interest. A polymath has many great loves, and loves them all deeply, if not equally.

We tend to look down on dilettantes, but they can perform a useful service. Sometimes, making a connection between two ideas at the right time and in the right place can help spur someone else to "go deep" with the idea. Even when that doesn't happen, dabbling can bring great personal joy and provide more substantial entertainment than a lot of pop culture.

Academics are among the people these days with a well-defined social opportunity to explore at least two areas deeply and seriously: their chosen discipline and teaching. This is perhaps the most compelling reason to desire a life in academia. It even offers a freedom to branch out into new areas later in one's career that is not so easily available to people who work in industry.

These days, it's hard to be a polymath even inside one's own discipline. To know all sub-areas of computer science, say, as well as the experts in those sub-areas is a daunting challenge. I think back to the effort my fellow students and I put in over the years that enabled us to take the Ph.D. qualifying exams in CS. I did quite well across the board, but even then I didn't understand operating systems or programming languages as well as experts in those areas. Many years later, despite continued reading and programming, the gap has only grown.

I share the vague sense of loss, expressed by the author of the article linked to above, of a time when one human could master multiple areas of discourse and make fundamental advances to several. We are certainly better off for collectively understanding the world so much better, but the result is a blow to a certain sort of individual mind and spirit.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

January 26, 2014 3:05 PM

One Reason We Need Computer Programs

Code bridges the gap between theory and data. From A few thoughts on code review of scientific code:

... there is a gulf of unknown size between the theory and the data. Code is what bridges that gap, and specifies how edge cases, weird features of the data, and unknown unknowns are handled or ignored.

I learned this lesson the hard way as a novice programmer. Other activities, such as writing and doing math, exhibit the same characteristic, but it wasn't until I started learning to program that the gap between theory and data really challenged me.

Since learning to program myself, I have observed hundreds of CS students encounter this gap. To their credit, they usually buckle down, work hard, and close the gap. Of course, we have to close the gap for every new problem we try to solve. The challenge doesn't go away; it simply becomes more manageable as we become better programmers.

In the passage above, Titus Brown is talking to his fellow scientists in biology and chemistry. I imagine that they encounter the gap between theory and data in a new and visceral way when they move into computational science. Programming has that power to change how we think.

There is an element of this, too, in how techies and non-techies alike sometimes lose track of how hard it is to create a successful start up. You need an idea, you need a programmer, and you need a lot of hard work to bridge the gap between idea and executed idea.

Whether doing science or starting a company, the code teaches us a lot about our theory. The code makes our theory better.

As Ward Cunningham is fond of saying, it's all talk until the tests run.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

January 24, 2014 2:18 PM

Could I be a programmer?

... with apologies to Annie Dillard.

A well-known programmer got collared by a university student who asked, "Do you think I could be a programmer?"

"Well," the programmer said, "I don't know... Do you like function calls?"

The programmer could see the student's amazement. Function calls? Do I like function calls? I am twenty years old and do I like function calls?

If the student had liked function calls, of course, he could begin, like a joyful painter I knew. I asked him how he came to be a painter. He said, "I liked the smell of paint."

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 23, 2014 4:14 PM

The CS Mindset

Chad Orzel often blogs about the physics mindset, the default orientation that physicists tend to have toward the world, and the way they think about and solve problems. It is fun to read a scientist talking about doing science.

Earlier this week I finally read this article about the popularity of CS50, an intro CS course at Harvard. It's all about how Harvard is "righting an imbalance in academia" by finally teaching practical skills to its students. When I read:

"CS50 made me look at Harvard with new eyes," Guimaraes said.

That is a sea change from what Harvard represented to the outside world for decades: the guardian of a classic education, where the value of learning is for its own sake.

I sighed audibly, loud enough for the students on the neighboring exercise equipment to hear. A Harvard education used to be about learning only for its own sake, but now students can learn practical stuff, too. Even computer programming!

As I re-read the article now, I see that it's not as blunt as that throughout. Many Harvard students are learning computing because of the important role it plays in their futures, whatever their major, and they understand the value of understanding it better. But there are plenty of references to "practical ends" and Harvard's newfound willingness to teach practical skills it once considered beneath it.

Computer programming is surely one of those topics old Charles William Eliot would deem unworthy of inclusion in Harvard's curriculum.

I'm sensitive to such attitudes because I think computer science is and should be more. If you look deeper, you will see that the creators of CS50 think so, too. On its Should I take CS50? FAQ page, we find:

More than just teach you how to program, this course teaches you how to think more methodically and how to solve problems more effectively. As such, its lessons are applicable well beyond the boundaries of computer science itself.

The next two sentences muddy the water a bit, though:

That the course does teach you how to program, though, is perhaps its most empowering return. With this skill comes the ability to solve real-world problems in ways and at speeds beyond the abilities of most humans.

With this skill comes something else, something even more important: a discipline of thinking and a clarity of thought that are hard to attain when you learn "how to think more methodically and how to solve problems more effectively" in the abstract or while doing almost any other activity.

Later the same day, I was catching up on a discussion taking place on the PLT-EDU mailing list, which is populated by the creators, users, and fans of the Racket programming language and the CS curriculum designed in tandem with it. One poster offered an analogy for talking to HS students about how and why they are learning to program. A common theme in the discussion that ensued was to take the conversation off of the "vocational track". Why encourage students to settle for such a limiting view of what they are doing?

One snippet from Matthias Felleisen (this link works only if you are a member of the list) captured my dissatisfaction with the tone of the Globe article about CS50:

If we require K-12 students to take programming, it cannot be justified (on a moral basis) with 'all of you will become professional programmers.' I want MDs who know the design recipe, I want journalists who write their articles using the design recipe, and so on.

The "design recipe" is a thinking tool students learn in Felleisen's "How to Design Programs" curriculum. It is a structured way to think about problems and to create solutions. Two essential ideas stand out for me:

  • Students learn the design recipe in the process of writing programs. This isn't an abstract exercise. Creating a working computer program is tangible evidence that the student has understood the problem and created a clear solution.
  • This way of thinking is valuable for everyone. We will all be better off if our doctors, lawyers, and journalists are able to think this way.
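The recipe walks from a signature and purpose statement through examples to a template and body. A rough sketch of the steps, written here in Python for illustration (HtDP itself uses Racket, and the little conversion function is my own example, not one from the curriculum):

```python
# The design recipe, step by step (sketched in Python; HtDP uses Racket):
# 1. Signature: fahrenheit_to_celsius : number -> number
# 2. Purpose:   convert a Fahrenheit temperature to Celsius
# 3. Examples:  fahrenheit_to_celsius(32)  should be 0
#               fahrenheit_to_celsius(212) should be 100
def fahrenheit_to_celsius(f):
    # 4. Template and body: the output is arithmetic on the input
    return (f - 32) * 5 / 9

# 5. Tests: check the examples against the code
assert fahrenheit_to_celsius(32) == 0
assert fahrenheit_to_celsius(212) == 100
```

The point is not the arithmetic but the discipline: the examples are written before the body, and the finished program is checkable evidence of understanding.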

This is one of my favorite instantiations of the vague term computational thinking, which so many people use without much thought. It is a way of thinking about problems both abstractly and concretely, one that leads to solutions we can verify with tests.

You might call this the CS mindset. It is present in CS50 independent of any practical ends associated with tech start-ups and massaging spreadsheet data. It is practical on a higher plane. It is also present in the HtDP curriculum and especially in the Racket Way.

It is present in all good CS education, even the CS intro courses that more students should be taking -- even if they are only going to be doctors, lawyers, or journalists.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 07, 2014 4:09 PM

Know The Past

In this profile of computational geneticist Jason Moore, the scientist explains how his work draws on ideas from the 1910s, which may offer computational genetics a better path forward than the research that catapulted genetics ahead in the 1990s and 2000s.

Yet despite his use of new media and advanced technology, Moore spends a lot of time thinking about the past. "We have a lot to learn from early geneticists," he says. "They were smart people who were really thinking deeply about the problem."

Today, he argues, genetics students spend too much time learning to use the newest equipment and too little time reading the old genetics literature. Not surprisingly, given his ambivalent attitude toward technology, Moore believes in the importance of history. "Historical context is so important for what we do," he says. "It provides a grounding, a foundation. You have to understand the history in order ... to understand your place in the science."

Anyone familiar with the history of computing knows there is another good reason to know your history: Sometimes, we dream too small these days, and settle for too little. We have a lot to learn from early computer scientists.

I intend to make this a point of emphasis in my algorithms course this spring. I'd like to expose students to important new ideas outside the traditional canon (more on that soon), while at the same time exposing them to some of the classic work that hasn't been topped.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 17, 2013 3:32 PM

Always Have At Least Two Alternatives

Paraphrasing Kent Beck:

Whenever I write a new piece of code, I like to have at least two alternatives in mind. That way, I know I am not doing the worst thing possible.

I heard Kent say something like this at OOPSLA in the late 1990s. This is advice I give often to students and colleagues, but I've never had a URL that I could point them to.

It's tempting for programmers to start implementing the first good idea that comes to mind. It's especially tempting for novices, who sometimes seem surprised that they have even one good idea. Where would a second one come from?

More experienced students and programmers sometimes trust their skill and experience a little too easily. That first idea seems so good, and I'm a good programmer... Famous last words. Reality eventually catches up with us and helps us become more humble.

Some students are afraid: afraid they won't get done if they waste time considering alternatives, or afraid that they will choose wrong anyway. Such students need more confidence, the kind born out of small successes.

I think the most likely explanation for why beginners don't already seek alternatives is quite simple. They have not developed the design habit. Kent's advice can be a good start.

One pithy statement is often enough of a reminder for more experienced programmers. By itself, though, it probably isn't enough for beginners. But it can be an important first step for students -- and others -- who are in the habit of doing the first thing that pops into their heads.

Do note that this advice is consistent with XP's counsel to do the simplest thing that could possibly work. "Simplest" is a superlative. Grammatically, that suggests having at least three options from which to choose!

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

December 16, 2013 2:20 PM

More Fun with Integer "Assembly Language": Brute-Forcing a Function Minimum

Or: Irrational Exuberance When Programming

My wife and daughter laughed at me yesterday.

A few years ago, I blogged about implementing Farey sequences in Klein, a language for which my students at the time were writing a compiler. Klein was a minimal functional language with few control structures, few data types, and few built-in operations. Computing rational approximations using Farey's algorithm was a challenge in Klein that I likened to "integer assembly programming".

I clearly had a lot of fun with that challenge, especially when I had the chance to watch my program run using my students' compilers.

This semester, I am again teaching the compiler course, and my students are writing a compiler for a new version of Klein.

Last week, while helping my daughter with a little calculus, I ran across a fun new problem to solve in Klein:

the task of optimizing cost across the river

There are two stations on opposite sides of a river. The river is 3 miles wide, and the stations are 5 miles apart along the river. We need to lay pipe between the stations. Pipe laid on land costs $2.00/foot, and pipe laid across the river costs $4.00/foot. What is the minimum cost of the project?

This is the sort of optimization problem one often encounters in calculus textbooks. The student gets to construct a couple of functions, differentiate one, and find a maximum or minimum by setting f' to 0 and solving.

Solving this problem in Klein creates some challenges. Among them are that it ideally involves real numbers, which Klein doesn't support, and that it requires a square root function, which Klein doesn't have. But these obstacles are surmountable. We already have tools for computing roots using Newton's method in our collection of test programs. Over a 3mi-by-5mi grid, an epsilon of a few feet approximates square roots reasonably well.
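For the curious, here is one way such an integer-only square root might look. This is my own sketch in Python of Newton's method with an integer tolerance, not the actual Klein test program:

```python
def isqrt_newton(n, epsilon=3):
    """Approximate the square root of a non-negative integer n
    using Newton's method and integer arithmetic only.
    Stops when successive guesses differ by at most epsilon."""
    guess = max(n // 2, 1)
    while True:
        better = (guess + n // guess) // 2   # Newton update for x*x = n
        if abs(better - guess) <= epsilon:
            return better
        guess = better
```

With distances measured in feet, an epsilon of a few feet is plenty for comparing costs along the riverbank.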

My daughter's task was to use the derivative of the cost function but, after talking about the problem with her, I was interested more in "visualizing" the curve to see how the cost drops as one moves in from either end and eventually bottoms out for a particular length of pipe on land.

So I wrote a Klein program that "brute-forces" the minimum. It loops over all possible values in feet for land pipe and compares the cost at each value to the previous value. It's easy to fake such a loop with a recursive function call.

The programmer's challenge in writing this program is that Klein has no local variables other than function parameters. So I had to use helper functions to simulate caching temporary variables. This allowed me to give a name to a value, which makes the code more readable, but most importantly it allowed me to avoid recomputing expensive values in what was already a computationally expensive program.
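To make the idea concrete, here is my own sketch of the same brute-force search in Python (the Klein original expresses the loop as a tail-recursive function; the names here are mine, and the rates come from the problem statement above):

```python
import math

SPAN_FT = 5 * 5280    # stations are 5 miles apart along the river
WIDTH_FT = 3 * 5280   # the river is 3 miles wide

def cost(land_ft):
    """Total cost: $2/ft of pipe on land, then $4/ft for the
    diagonal run across the river to the far station."""
    river_ft = math.hypot(SPAN_FT - land_ft, WIDTH_FT)
    return 2.0 * land_ft + 4.0 * river_ft

def cheapest_land_length():
    # Try every whole-foot length of land pipe and keep the cheapest.
    return min(range(SPAN_FT + 1), key=cost)
```

The minimum lands a bit over three miles down the bank, matching what the calculus predicts when f' is set to 0.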

This approach creates another, even bigger challenge for my students, the compiler writers. My Klein program is naturally tail recursive, but tail call elimination was left as an optional optimization in our class project. With activation records for all the tail calls stored on the stack, a compiler has to use a lot of space for its run-time memory -- far more than is available on our default target machine.

How many frames do we need? Well, we need to compute the cost at every foot along the 5-mile bank (5 miles x 5280 feet/mile), for a total of 26,400 data points. There will, of course, be other activation records on the stack while computing the last value in the loop.

Will I be able to see the answer generated by my program using my students' compilers? Only if one or more of the teams optimized tail calls away. We'll see soon enough.

So, I spent an hour or so writing Klein code and tinkering with it yesterday afternoon. I was so excited by the time I finished that I ran upstairs to tell my wife and daughter all about it: my excitement at having written the code, and the challenge it sets for my students' compilers, and how we could compute reasonable approximations of square roots of large integers even without real numbers, and how I implemented Newton's method in lieu of a sqrt, and...

That's when my wife and daughter laughed at me.

That's okay. I am a programmer. I am still excited, and I'd do it again.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Personal, Teaching and Learning

December 08, 2013 11:48 AM

Change Happens When People Talk to People

I finally got around to reading Atul Gawande's Slow Ideas this morning. It's a New Yorker piece from last summer about how some good ideas seem to resist widespread adoption, despite ample evidence in their favor, and ways that one might help accelerate their spread.

As I read, I couldn't help but think of parallels to teaching students to write programs and helping professionals develop software more reliably. We know that development practices such as version control, short iterations, and pervasive testing lead to better software and more reliable process. Yet they are hard habits for many programmers to develop, especially when they have conflicting habits in place.

Other development practices seem counterintuitive. "Pair programming can't work, right?" In these cases, we have to help people overcome both habits of practice and habits of thought. That's a tall order.

Gawande's article is about medical practice, from surgeons to home practitioners, but his conclusions apply to software development as well. For instance: People have an easier time changing habits when the benefit is personal, immediate, and visceral. When the benefit is not so obvious, a whole new way of thinking is needed. That requires time and education.

The key message to teach surgeons, it turned out, was not how to stop germs but how to think like a laboratory scientist.

This is certainly true for software developers. (If you replace "germs" with "bugs", it's an even better fit!) Much of the time, developers have to think about evidence the way scientists do.

This lesson is true not just for surgeons and software developers. It is true for most people, in most ways of life. Sometimes, we all have to be able to think and act like a scientist. I can think of no better argument for treating science as important for all students, just as we do reading and writing.

Other lessons from Gawande's article are more down-to-earth:

Many of the changes took practice for her, she said. She had to learn, for instance, how to have all the critical supplies -- blood-pressure cuff, thermometer, soap, clean gloves, baby respiratory mask, medications -- lined up and ready for when she needed them; how to fit the use of them into her routine; how to convince mothers and their relatives that the best thing for a child was to be bundled against the mother's skin. ...

So many good ideas in one paragraph! Many software development teams could improve by putting them in action:

  • Construct a work environment with essential tools ready at hand.
  • Adjust routine to include new tools.
  • Help collaborators see and understand the benefit of new habits.
  • Practice, practice, practice.

Finally, the human touch is essential. People who understand must help others to see and understand. But when we order, judge, or hector people, they tend to close down the paths of communication, precisely when we need them to be most open. Gawande's colleagues have been most successful when they built personal relationships:

"It wasn't like talking to someone who was trying to find mistakes," she said. "It was like talking to a friend."

Good teachers know this. Some have to learn it the hard way, in the trenches with their students. But then, that is how Gawande's colleagues learned it, too.

"Slow Ideas" is good news for teachers all around. It teaches ways to do our job better. But also, in many ways, it tells us that teaching will continue to matter in an age dominated by technological success:

People talking to people is still how the world's standards change.

Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Software Development, Teaching and Learning

December 03, 2013 3:17 PM

The Workaday Byproducts of Striving for Higher Goals

Why set audacious goals? In his piece about the Snowfall experiment, David Sleight argues for them, and not simply for the immediate end:

The benefits go beyond the plainly obvious. You need good R&D for the same reason you need a good space program. It doesn't just get you to the Moon. It gives you things like memory foam, scratch-resistant lenses, and Dustbusters. It gets you the workaday byproducts of striving for higher goals.

I showed that last sentence a little Twitter love, because it's something people often forget to consider, both when they are working in the trenches and when they are selecting projects to work on. An ambitious project may have a higher risk of failure than something more mundane, but it also has a higher chance of producing unexpected value in the form of new tools and improved process.

This is also something that university curricula don't do well. We tend to design learning experiences that fit neatly into a fifteen-week semester, with predictable gains for our students. That sort of progress is important, of course, but it misses out on opportunities for students to produce their own workaday byproducts. And that's an important experience for students to have.

It also gives a bad example of what learning should feel like, and what it should do for us. Students generally learn what we teach them, or what we make easiest for them to learn. If we always set before them tasks of known, easily-understood dimensions, then they will have to learn after leaving us that the world doesn't usually work like that.

This is one of the reasons I am such a fan of project-based computer science education, as in the traditional compiler course. A compiler is an audacious enough goal for most students that they get to discover their own personal memory foam.

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

November 25, 2013 2:56 PM

The Moment When Design Happens

Even when we plan ahead a bit, the design of a program tends to evolve. Gary Bernhardt gives an example in his essay on abstraction:

If I feel the need to violate the abstraction, I need to reconsider how to modify the boundaries to match that need, rather than violating the boundaries by crossing them.

This is the moment when design happens...

This is a hard design lesson to give students, because it is likely to click with them only after living with the consequences of violating the abstraction. This requires working with the same large program over time, preferably one they are building along the way.

This is one of the reasons I so like our senior project courses. My students are building a compiler this term, which gives them a chance to experience a moment when design happens. Their abstract syntax trees and symbol tables are just the sort of abstractions that invite violation -- and reward a little re-design.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 24, 2013 10:54 AM

Teaching Algorithms in 2014

This spring, I will be teaching the undergraduate algorithms course for first time in nine years, since the semester before I became department head. I enjoy this course. It gives both the students and me opportunities to do a little theory, a little design, and a little programming. I also like to have some fun, using what we learn to play games and solve puzzles.

Nine years is a long time in computing, even in an area grounded in well-developed theory. I will need to teach a different sort of course. At the end of this entry, I ask for your help in meeting this challenge.

Algorithms textbooks don't look much different now than they did in the spring of 2005. Long-time readers of this blog know that I face the existential crisis of selecting a textbook nearly every semester. Picking one requires balancing several forces, including the value the book gives to the instructor, the value it gives to the student during and after the course, and its increasing expense to students.

My primary focus in these decisions is always on net value to the students. I like to write my own material anyway. When time permits, I'd rather write as much as I can for students to read than outsource that responsibility (and privilege) to a textbook author. Writing my lecture notes in depth lets me weave a lot of different threads together, including pointers into primary and secondary sources. Students benefit from learning to read non-textbook material, the sort they will encounter throughout their careers.

My spring class brings a new wrinkle to the decision, though. Nearly fifty students are enrolled, with the prospect of a few more to come. This is a much larger group than I usually work with, and large classes carry a different set of risks than smaller ones. In particular, when something goes wrong in a small section, it is easier to recover through one-on-one remediation. That option is not so readily available in a fifty-person course.

There is more risk in writing new lecture material than in using a textbook that has been tested over time. A solid textbook can be a security blanket as much for the instructor as for the student. I'm not too keen on selecting a security blanket for myself, but the predictability of a text is tempting. There is one possible consolation in such a choice: perhaps subordinating my creative impulses to the design of someone else's textbook will make me more creative as a result.

But textbook selection is a fairly ordinary challenge for me. The real question is: Which algorithms should we teach in this course, circa 2014? Surely the rise of big data, multi-core processors, mobile computing, and social networking require a fresh look at the topics we teach undergrads.

Perhaps we need only adjust the balance of topics that we currently teach. Or maybe we need to add a new algorithm or data structure to the undergraduate canon. If we teach a new algorithm, or a new class of algorithms, which standard material should be de-emphasized, or displaced altogether? (Alas, the semester is still but fifteen weeks long.)

Please send me your suggestions! I will write up a summary of the ideas you share, and I will certainly use your suggestions to design a better algorithms course for my students.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 10, 2013 9:10 AM

You May Be a Teacher If ...

... you wake groggily at 5:30 on a Sunday morning. You lie in bed, half awake, as your mind begins designing a new class session for your compiler course. You never go back to sleep.

Before you rise, you have a new reading assignment, an opening exercise asking your students to write a short assembly language program, and two larger in-class exercises aimed at helping them make a good start on their compiler's run-time system.

This is a thorny topic. It's been bothering you. Now, you have a plan.

Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

November 04, 2013 2:41 PM

Those Silly Tests

I love this passage by Mark Dominus in Overlapping Intervals:

This was yet another time when I felt slightly foolish as I wrote the automated tests, assuming that the time and effort I spent on testing this trivial function would be time and effort thrown away on nothing -- and then they detected a real fault. Someday perhaps I'll stop feeling foolish writing tests for functions like this one; until then, many cases just like this one will help me remember that I must write the tests even though I feel foolish doing it.

Even excellent programmers feel silly writing tests sometimes. But they also benefit from writing them. Dominus was saved here by his test-writing habit, or by his sense of right and wrong.

Helping students develop that habit or that moral sense is a challenge. Even so, I rarely come across a situation where my students or I write or run too many tests. I regularly encounter cases where we write or run too few.
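Dominus's function dealt with overlapping intervals. Even a predicate that simple hides boundary cases worth testing. A small illustration (this code and its tests are mine, not his):

```python
def overlaps(a, b):
    """True if closed intervals a = (lo, hi) and b = (lo, hi)
    share at least one point."""
    return a[0] <= b[1] and b[0] <= a[1]

# The tests feel foolish to write -- until one of them fails.
assert overlaps((1, 5), (4, 9))        # partial overlap
assert overlaps((1, 5), (2, 3))        # containment
assert overlaps((1, 2), (2, 3))        # intervals that merely touch
assert not overlaps((1, 2), (3, 4))    # disjoint
assert not overlaps((3, 4), (1, 2))    # disjoint, reversed order
```

The touching-endpoints case is exactly the kind of boundary a programmer gets wrong while feeling certain the function is trivial.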

Dominus's blog entry also contains a great passage on a larger lesson from that coding experience. In the end, his clever solution to a tricky problem results not from "just thinking" but from deeper thought: from "applying carefully-learned and practiced technique". That's an important form of thinking, too.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 11, 2013 1:42 PM

The Tang of Adventure, and a Lively Appreciation

"After you've learned the twelve times table," John Yarnelle asks, "what else is there to do?"

The concepts of modern mathematics give the student something else to do in great abundance and variety at all levels of his development. Not only may he discover unusual types of mathematical structures where, believe it or not, two and two does not equal four, but he may even be privileged to invent a new system virtually on his own. Far from a sense of stagnation, there is the tang of adventure, the challenge of exploration; perhaps also a livelier appreciation of the true nature of mathematical activity and mathematical thought.

Not only the tang of adventure; students might also come to appreciate what math really is. That's an admirable goal for any book or teacher.

This passage comes from Yarnelle's Finite Mathematical Structures, a 1964 paperback that teaches fields, groups, and algebras with the prose of a delighted teacher. I picked this slender, 66-page gem up off a pile of books being discarded by a retired math professor a decade ago. How glad I am that none of the math profs who walked past that pile bothered to claim it before I happened by.

We could use a few CS books like this, too.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 08, 2013 3:48 PM

You Never Know What Students Will Remember...

There are a lot of clichés about teachers and their unexpected effects on students. Some of them are true. In the last couple of weeks, I've been reminded twice that students remember the darnedest things, and those things can affect how they behave and live.

First, while at StrangeLoop, I was tagged in a Facebook conversation. A former student had posted an update that involved special-ordering a mini-fridge on-line. The ensuing conversation included the following:

Commenter: "In a college town you have to special order a mini-fridge? Me thinks you are doing it wrong."

Former student: "Yeah, I know... Eugene Wallingford once said the same when I wrote a framework to implement a stack..."

I know this student well and remember his stack framework. He is a smart guy who was learning a lot about CS and programming in a very short time. In his earnestness to apply what he was learning, he had indeed written a flexible, generic framework for a stack in response to a problem that called for twenty, maybe thirty, lines of Java 1.4.

We talked about simplicity, trade-offs, You Aren't Gonna Need It, and other design issues. I believe him when he says that I said, "You must be doing it wrong." That's the sort of thing I would say. I don't remember saying it in that moment, though.

Then, earlier this week, the latest issue of our college newsletter hit the newsstand. An undergrad CS major was interviewed for a "student spotlight" column. It contains this snippet:

Last semester, I went into Dr. Wallingford's office asking why I was not very efficient when answering questions, even though I read the material over and over again. I felt that because I had memorized the book's information, I could quickly answer questions.... He told me, 'when I want to improve my mile time, I run. I don't think about running; I go out and run.' This has really stuck with me since and showed me how books can only do so much. After that, it is the practice and experience that takes you far.

Again, I recall having a conversation with this student about how he could improve his test performance. He, too, is a smart guy who is good at learning the material we see in class and in the readings. What he needed at the time was more practice, to cement the new ideas in his mind, to help him make connections with what he already knew, and to help him write good code faster on exams.

I believe him when he says that I used a running analogy. Over the years, I have found that training for marathons illustrated a lot of useful ideas about learning, especially when it comes to practice and habit. And I do like analogies. But I don't remember the details of what I said in that particular conversation.

These two incidents are salient reminders that we should take seriously the time we spend with our students. They are listening -- and learning.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 07, 2013 12:07 PM

StrangeLoop: Exercises in Programming Style

[My notes on StrangeLoop 2013: Table of Contents]

Crista Lopes

I had been looking forward to Crista Lopes's StrangeLoop talk since May, so I made sure I was in the room well before the scheduled time. I even had a copy of the trigger book in my bag.

Crista opened with something that CS instructors have learned the hard way: Teaching programming style is difficult and takes a lot of time. As a result, it's often not done at all in our courses. But so many of our graduates go into software development for their careers, where they come into contact with many different styles. How can they understand them -- well, quickly, or at all?

To many people, style is merely the appearance of code on the screen or the printed page. But it's not. It is something more, something entirely different. Style is a constraint. Lopes used images of a few paintings in distinctive styles to illustrate the idea. If an artist limits herself to pointillism or cubism, how can she express important ideas? How does the style limit the message, or enhance it?

But we know this is true of programming as well. The idea has been a theme in my teaching for many years. I occasionally write about the role of constraints in programming here, including Patterns as a Source of Freedom, a few programming challenges, and a polymorphism challenge that I've run as a workshop.

Lopes pointed to a more universal example, though: the canonical The Elements of Programming Style. Drawing on this book and other work in software, she said that programming style ...

  • is a way to express tasks
  • exists at all scales
  • recurs at multiple scales
  • is codified in programming language

For me, the last bullet ties back most directly to the idea of style as constraint. A language makes some things easier to express than others. It can also make some things harder to express. There is a spectrum, of course. For example, some OO languages make it easy to create and use objects; others make it hard to do anything else! But the language is an enabler and enforcer of style. It is a proxy for style as a constraint on the programmer.

Back to the talk. Lopes asked, Why is it so important that we understand programming style? First, a style provides the reader with a frame of reference and a vocabulary. Knowing different styles makes us more effective consumers of code. Second, one style can be more appropriate for a given problem or context than another style. So, knowing different styles makes us more effective producers of code. (Lopes did not use the producer-consumer distinction in the talk, but it seems to me a nice way to crystallize her idea.)

the cover of Raymond Queneau's Exercises in Style

Then, Lopes said, I came across Raymond Queneau's playful little book, "Exercises in Style". Queneau constrains himself in many interesting ways while telling essentially the same story. Hmm... We could apply the same idea to programming! Let's do it.

Lopes picked a well-known problem, the common word problem famously solved in a Programming Pearls column more than twenty-five years ago. This is a fitting choice, because Jon Bentley included in that column a critique of Knuth's program by Doug McIlroy, who considered both engineering concerns and program style in his critique.

The problem is straightforward: identify and print the k most common terms that occur in a given text document, in decreasing order. For the rest of the talk, Lopes presented several programs that solve the problem, each written in a different style, showing code and highlighting its shape and boundaries.

Python was her language of choice for the examples. She was looking for a language that many readers would be able to follow and understand, and Python has the feel of pseudo-code about it. (I tell my students that it is the Pascal of their time, though I may as well be speaking of hieroglyphics.) Of course, Python has strengths and weaknesses that affect its fit for some styles. This is an unavoidable complication for all communication...
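To give readers a concrete feel for the task, here is a minimal sketch in a plain style of my own -- not one of Lopes's actual examples:

```python
# A minimal sketch of the common-word task: given a text and an
# integer k, report the k most frequent terms in decreasing order.
import re
from collections import Counter

def top_terms(text, k):
    # Normalize to lowercase and split into alphabetic "words".
    words = re.findall(r"[a-z]+", text.lower())
    # Counter.most_common returns (term, count) pairs, highest first.
    return Counter(words).most_common(k)

sample = "the quick brown fox jumps over the lazy dog; the dog barks"
for word, count in top_terms(sample, 2):
    print(word, count)   # the 3, then dog 2
```

Each of Lopes's styles solves this same small problem; only the shape of the solution changes.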

Also, Lopes did not give formal names to the styles she demonstrated. Apparently, in previous versions of this talk, audience members had wanted to argue over the names more than the styles themselves! Vowing not to make that mistake again, she numbered her examples for this talk.

That's what programmers do when they don't have good names.

In lieu of names, she asked the crowd to live-tweet to her what they thought each style is or should be called. She eventually did give each style a fun, informal name. (CS textbooks might be more evocative if we used her names instead of the formal ones.)

I noted eight examples shown by Lopes in the talk, though there may have been more:

  • monolithic procedural code -- "brain dump"
  • a Unix-style pipeline -- "code golf"
  • procedural decomposition with a sequential main -- "cook book"
  • the same, only with functions and composition -- "Willy Wonka"
  • functional decomposition, with a continuation parameter -- "crochet"
  • modules containing multiple functions -- "the kingdom of nouns"
  • relational style -- (didn't catch this one)
  • functional with decomposition and reduction -- "multiplexer"

Lopes said that she hopes to produce solutions using a total of thirty or so styles. She asked the audience for help with one in particular: logic programming. She said that she is not a native speaker of that style, and Python does not come with a built-in logic engine to make writing a solution straightforward.

Someone from the audience suggested she consider yet another style: using a domain-specific language. That would be fun, though perhaps tough to roll from scratch in Python. By that time, my own brain was spinning away, thinking about writing a solution to the problem in Joy, using a concatenative style.

Sometimes, it's surprising just how many programming styles and meaningful variations people have created. The human mind is an amazing thing.

The talk was, I think, a fun one for the audience. Lopes is writing a book based on the idea. I had a chance to review an early draft, and now I'm looking forward to the finished product. I'm sure I'll learn something new from it.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 28, 2013 12:17 PM

StrangeLoop: This and That, Volume 2

[My notes on StrangeLoop 2013: Table of Contents]

I am at a really good talk and look around the room. So many people are staring at their phones, scrolling away. So many others are staring at their laptops, typing away. The guy next to me: doing both at the same time. Kudos, sir. But you may have missed the point.


Conference talks are a great source of homework problems. Sometimes, the talk presents a good problem directly. Others, watching the talk sets my subconscious mind in motion, and it creates something useful. My students thank you. I thank you.


Jenny Finkel talked about the difference between two kinds of recommenders: explorers, who forage for new content, and exploiters, who want to see what's already popular. The former discovers cool new things occasionally but fails occasionally, too. The latter is satisfied most of the time but rarely surprised. As a conference-goer, I felt this distinction at play in my own head this year. When selecting the next talk to attend, I have to take a few risks if I ever hope to find something unexpected. But when I fail, a small regret tugs at me.


We heard a lot of confident female voices on the StrangeLoop stages this year. Some of these speakers have advanced academic degrees, or at least experience in grad school.


The best advice I received on Day 1 perhaps came not from a talk but from the building:

The 'Do not Climb on Bears' sign on a Peabody statue

"Please do not climb on bears." That sounds like a good idea most everywhere, most all the time.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

September 10, 2013 3:40 PM

A Laugh at My Own Expense

This morning presented a short cautionary tale for me and my students, from a silly mistake I made in a procmail filter.

Back story: I found out recently that I am still subscribed to a Billy Joel fan discussion list from the 1990s. The list has been inactive for years, or I would have been filtering its messages to a separate mailbox. Someone has apparently hacked the list, as a few days ago it started spewing hundreds of spam messages a day.

I was on the road for a few days after the deluge began and was checking mail through a shell connection to the mail server. Because I was busy with my trip and checking mail infrequently, I just deleted the messages by hand. When I got back, my mail client's junk filter soon learned they were spam and filtered them away for me. But the spam was still hitting my inbox on the mail server, where I read my mail occasionally even on campus.

After a session on the server early this morning, I took a few minutes to procmail them away. Every message from the list has a common pattern in the Subject: line, so I copied it and pasted it into a new procmail recipe to send all list traffic to /dev/null:

    * ^Subject.*[billyjoel]

Do you see the problem? Of course you do.

I didn't at the time. My blindness probably resulted from a combination of the early hour, a rush to get over to the gym, and the tunnel vision that comes from focusing on a single case. It all looked obvious.

This mistake offers programming lessons at several different levels.

The first is at the detailed level of the regular expression. Pay attention to the characters in your regex -- all of them. Those brackets really are in the Subject: line, but by themselves mean something else in the regex. I need to escape them:

    * ^Subject.*\[billyjoel\]

This relates to a more general piece of problem-solving advice: step back from the individual case you are solving and think about the code you are writing more generally. When you focus on the annoying messages from the list, the brackets are just characters in a stream. Looked at from the perspective of the file of procmail recipes, they are regex metacharacters.
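You can see the bug directly with Python's re module, which uses the same bracket syntax as procmail's regexes. Unescaped, [billyjoel] is a character class, so the "loose" pattern matches nearly every subject line:

```python
import re

spam   = "Subject: [billyjoel] Great deals for you"
normal = "Subject: Lunch on Tuesday?"

# Unescaped, [billyjoel] means "any one letter from the set
# b, i, l, y, j, o, e" -- which almost any subject contains.
loose = re.compile(r"^Subject.*[billyjoel]")
print(bool(loose.search(spam)))     # True
print(bool(loose.search(normal)))   # True -- the 'o' in "on" matches!

# Escaped, the brackets are literal characters again.
strict = re.compile(r"^Subject.*\[billyjoel\]")
print(bool(strict.search(spam)))    # True
print(bool(strict.search(normal)))  # False
```

With the loose pattern in place, every message with an ordinary subject line would have gone to /dev/null.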

The second is at the level of programming practice. Don't /dev/null something until you know it's junk. Much better to send the offending messages to a junk mbox first:

    :0:
    * ^Subject.*\[billyjoel\]
    junkmail

Once I see that all and only the messages from the list are being matched by the pattern, I can change that recipe to send list traffic where it belongs. That's a specific example of the sort of defensive programming that we all should practice. Don't commit to solutions too soon.

This, too, relates to more general programming advice about software validation and verification. I should have exercised a few test cases to validate my recipe before turning it loose unsupervised on my live mail stream.

I teach my students this mindset and program that way myself, at least most of the time. Of course, the time you most need test cases will be the time you don't write them.

The day provided a bit of irony to make the story even better. The topic of today's session in my compilers course? Writing regular expressions to describe the tokens in a language. So, after my mail admin colleague and I had a good laugh at my expense, I got to tell the story to my students, and they got a good laugh, too.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

July 30, 2013 2:43 PM

Education, Then and Now

... is your job.


Whatever be the qualifications of your tutors, your improvement must chiefly depend on yourselves. They cannot think or labor for you, they can only put you in the best way of thinking and laboring for yourselves. If therefore you get knowledge, you must acquire it by your own industry.


School isn't over. School is now. School is blogs and experiments and experiences and the constant failure of shipping and learning.

This works if you interpret "then and now" as in the past (Joseph Priestley, in his dedication of New College, London, 1794) and in the present (Seth Godin, in Brainwashed, from 2008).

It also works if you interpret "then and now" as when you were in school and now that you are out in the real world. It's all real.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 18, 2013 2:22 PM

AP Computer Science in Iowa High Schools

Mark Guzdial posted a blog entry this morning pointing to a Boston Globe piece, Interest in computer science lags in Massachusetts schools. Among the data supporting this assertion was participation in Advanced Placement:

Of the 85,753 AP exams taken by Massachusetts students last year, only 913 were in computing.

Those numbers are a bit out of context, but they got me to wondering about the data for Iowa. So I tracked down this page on AP Program Participation and Performance Data 2012 and clicked through to the state summary report for Iowa. The numbers are even more dismal than Massachusetts's.

Of the 16,413 AP exams taken by Iowa students in 2012, only sixty-nine were in computer science. The counts for groups generally underrepresented in computing were unsurprisingly small, given that Iowa is less diverse than many US states. Of the sixty-nine, fifty-four self-reported as "white", ten as "Asian", and one as "Mexican-American", with four not indicating a category.

The most depressing number of all: only nine female students took the AP Computer Science exam last year in Iowa.

Now, Massachusetts has roughly 2.2 times as many people as Iowa, but even so Iowa compares unfavorably. Iowans took about one-fifth as many AP exams as Massachusetts students, and for CS the figure drops to 7.5%. If AP exams indicate much about the general readiness of a state's students for advanced study in college, then Iowa is at a disadvantage.

I've never been a huge proponent of the AP culture that seems to dominate many high schools these days (see, for instance, this piece), but the low number of AP CS exams taken in Iowa is consistent with what I hear when I talk to HS students from around the state and their parents: Iowa schools are not teaching much computer science at all. The university is the first place most students have an opportunity to take a CS course, and by then the battle for most students' attention has already been lost. For a state with a declared goal of growing its promising IT sector, this is a monumental handicap.

Those of us interested in broadening participation in CS face an even tougher challenge. Iowa's demographics create some natural challenges for attracting minority students to computing. And if the AP data are any indication, we are doing a horrible job of reaching women in our high schools.

There is much work to do.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 03, 2013 10:22 AM

Programming for Everyone, Venture Capital Edition

Christina Cacioppo left Union Square Ventures to learn how to program:

Why did I want to do something different? In part, because I wanted something that felt more tangible. But mostly because the story of the internet continues to be the story of our time. I'm pretty sure that if you truly want to follow -- or, better still, bend -- that story's arc, you should know how to write code.

So, rather than settle for her lot as a non-programmer, beyond the accepted school age for learning these things -- technology is a young person's game, you know -- Cacioppo decided to learn how to build web apps. And build one.

When did we decide our time's most important form of creation is off-limits? How many people haven't learned to write software because they didn't attend schools that offered those classes, or the classes were too intimidating, and then they were "too late"? How much better would the world be if those people had been able to build their ideas?

Yes, indeed.

These days, she is enjoying the experience of making stuff: trying ideas out in code, discarding the ones that don't work, and learning new things every day. Sounds like a programmer to me.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 26, 2013 2:30 PM

An Opportunity to Learn, Born of Deprivation

Earlier this summer, my daughter was talking about something one of her friends had done with Instagram. As a smug computer weenie, I casually mentioned that she could do that, too.

She replied, "Don't taunt me, Dad."

You see, no one in our family has a cell phone, smart or otherwise, so none of us use Instagram. That's not a big deal for dear old dad, even though (or perhaps because) he's a computer scientist. But she is a teenager growing up in an entirely different world, filled with technology and social interaction, and not having a smart phone must surely seem like a form of child abuse. Occasionally, she reminds us so.

This gave me a chance to explain that Instagram filters are, at their core, relatively simple little programs, and that she could learn to write them. And if she did, she could run them on almost any computer, and make them do things that even Instagram doesn't do.

I had her attention.

So, this summer I am going to help her learn a little Python, using some of the ideas from media computation. At the end of our first pass, I hope that she will be able to manipulate images in a few basic ways: changing colors, replacing colors, copying pixels, and so on. Along the way, we can convert color images to grayscale or sepia tones, posterize images, embed images, and make simple collages.
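To give a flavor of that first pass, here is the kind of code I have in mind, with an image modeled as a plain grid of RGB tuples. (Media-computation courses provide richer image objects; this sketch shows only the core idea.)

```python
# An "image" as a grid (list of rows) of (r, g, b) tuples, and a
# grayscale filter over it. The weights are the standard luminance
# coefficients for red, green, and blue.

def to_grayscale(image):
    gray_image = []
    for row in image:
        gray_row = []
        for (r, g, b) in row:
            lum = int(0.299 * r + 0.587 * g + 0.114 * b)
            gray_row.append((lum, lum, lum))
        gray_image.append(gray_row)
    return gray_image

pic = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(to_grayscale(pic)[0][0])   # (76, 76, 76)
```

Sepia, posterizing, and color replacement all follow the same pattern: loop over the pixels and compute a new tuple from the old one.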

That will make her happy. Even if she never feels the urge to write code again, she will know that it's possible. And that can be empowering.

I have let my daughter know that we probably will not write code that does as good a job as what she can see in Instagram or Photoshop. Those programs are written by pros, and they have evolved over time. I hope, though, that she will appreciate how simple the core ideas are. As James Hague said in a recent post, the key ideas in most apps require relatively few lines of code, with lots and lots of lines wrapped around them to handle edge cases and plumbing. We probably won't write much code for plumbing... unless she wants to.

Desire and boredom often lead to creation. They also lead to the best kind of learning.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

June 10, 2013 2:41 PM

Unique in Exactly the Same Way

Ah, the idyllic setting of my youth:

When people refer to "higher education" in this country, they are talking about two systems. One is élite. It's made up of selective schools that people can apply to -- schools like Harvard, and also like U.C. Santa Cruz, Northeastern, Penn State, and Kenyon. All these institutions turn most applicants away, and all pursue a common, if vague, notion of what universities are meant to strive for. When colleges appear in movies, they are verdant, tree-draped quadrangles set amid Georgian or Gothic (or Georgian-Gothic) buildings. When brochures from these schools arrive in the mail, they often look the same. Chances are, you'll find a Byronic young man reading "Cartesian Meditations" on a bench beneath an elm tree, or perhaps his romantic cousin, the New England boy of fall, a tousle-haired chap with a knapsack slung back on one shoulder. He is walking with a lovely, earnest young woman who apparently likes scarves, and probably Shelley. They are smiling. Everyone is smiling. The professors, who are wearing friendly, Rick Moranis-style glasses, smile, though they're hard at work at a large table with an eager student, sharing a splayed book and gesturing as if weighing two big, wholesome orbs of fruit. Universities are special places, we believe: gardens where chosen people escape their normal lives to cultivate the Life of the Mind.

I went to a less selective school than the ones mentioned here, but the vague ideal of higher education was the same. I recognized myself, vaguely, in the passage about the tousle-haired chap with a knapsack, though on a Midwestern campus. I certainly pined after a few lovely, earnest young women with a fondness for scarves and the Romantic poets in my day. These days, I have become the friendly, glasses-wearing, always-smiling prof in the recruiting photo.

The descriptions of movie scenes and brochures, scarves and Shelley and approachable professors, reminded me most of something my daughter told me as she waded through recruiting literature from so many schools a few years ago, "Every school is unique, dad, in exactly the same way." When the high school juniors see through the marketing facade of your pitch, you are in trouble.

That unique-in-the-same-way character of colleges and university pitches is a symptom of what lies at the heart of the coming "disruption" of what we all think of as higher education. The traditional ways for a school to distinguish itself from its peers, and even from schools it thinks of as lesser rivals, are becoming less effective. I originally wrote "disappearing", but they are now ubiquitous, as every school paints the same picture, stresses the same positive attributes, and tries not to talk too much about the negatives they and their peers face. Too many schools chasing too few tuition-paying customers accelerates the process.

Trying to protect the ideal of higher education is a noble effort now being conducted in the face of a rapidly changing landscape. However, the next sentence of the recent New Yorker article Laptop U, from which the passage quoted above comes, reminds us:

But that is not the kind of higher education most Americans know. ...

It is the other sort of higher education that will likely be the more important battleground on which higher ed is disrupted by technology.

We are certainly beginning to have such conversations at my school, and we are starting to hear rumblings from outside. My college's dean and our new university president recently visited the Fortune 100 titan that dominates local industry. One of the executives there gave them several documents they've been reading there, including "Laptop U" and the IPPR report mentioned in it, "An Avalanche is Coming: Higher Education and the Revolution Ahead".

It's comforting to know your industry partners value you enough to want to help you survive a coming revolution. It's also hard to ignore the revolution when your partners begin to take for granted that it will happen.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 05, 2013 1:52 PM

I Fooled Around and Fell in Love

Cue the Elvin Bishop [ video ]...

I smile whenever I see this kind of statement on a website's About page:

Erika Carlson was studying clinical psychology in 2011, when she wrote her first line of Python code. She fell in love with programming, decided to change paths, and is now a software developer at Pillar Technology.

I fell in love upon writing my first line of code, too.

Not everyone will have the same reaction Erika and I had, but it's good that we give people at least an opportunity to learn how to program. Knowing that someone might react this way focuses my mind on giving novice programmers a good enough experience that they can fall in love, too, if they are so inclined.

My teaching should never get in the way of true love.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

June 03, 2013 2:52 PM

Reflection on Zimmer's Open Letter and Student Questions

Carl Zimmer recently wrote an open letter to science students and teachers to address a disturbing trend: students cold-mailing him to ask for information. He stresses that he and his fellow science writers like to interact with students, but only after students have done some work on their own and have started to develop their own questions. The offending messages can be boiled down to the anti-pattern:

I have homework. I need information from you.

The actual messages can be a lot more entertaining. Be sure to check out at least the opening of his piece, which reprises a multi-day exchange with a random high school student.

I'm not an accomplished popular science writer like Zimmer, but as a CS professor at a public university I receive occasional messages of the sort Zimmer describes from high school students across our region. I try to help out as best I can, and the volume is not so large that I get burnt out trying to guide the student to a more fruitful exchange than "Tell me something" followed by "Here is some information you could have read for yourself".

Fortunately, most of the messages of this sort that reach my inbox come from students in my own courses. Well, it's unfortunate that I receive these messages at all, because they are often a symptom of laziness and presumptuousness. Most often, though, they are simply a sign of bad habits learned in their previous years of schooling. The fortunate part is that I have a chance to help students learn new, better habits of intellectual discipline and discourse.

My approach is a lot like the one Zimmer relates in his exchange with young Davis, so much so that a former student of mine forwarded me a link to Zimmer's piece and said "The exchange at the beginning reminded me of you."

But as a classroom teacher, I have an opportunity that internet celebrities don't: I get to see the question-askers in the classroom two or three times a week and in office hours whenever students avail themselves of the resource. I can also ask students to come see me outside of class for longer conversations.

Talking face-to-face can help students know for certain that I really do care about their curiosity and learning, even as I choose consciously not to fulfill their request until they have something specific to ask. They have to invest something in the conversation, too, and demonstrate their investment by having more to say than simply, "I don't get it."

(Talking face-to-face also helps me have a better idea of when a student has done his or her work and yet still is struggling to the point of not knowing what to say other than "I don't get it." Sometimes, I need to go farther than halfway to help a student in real need of help.)

Most students are open to the idea that college is different from their past experience and that it's time for them to start taking more control of their learning. They learn new habits and are able to participate in meaningful exchanges about material with which they have already engaged.

A few continue to struggle, never moving far beyond "Give me information". I don't know whether their previous schooling has driven them into such a deep rut that they can't get out, or whether even with different schooling they would have been poorly suited for university study. Fortunately, these students are quite few among our student body. Most really do just need a gentle push in the right direction.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 31, 2013 1:44 PM

Quotes of the Week, in Four Dimensions


Michael Bernstein, in A Generation Ago, A Thoroughly Modern Sampling:

The AI Memos are an extremely fertile ground for modern research. While it's true that what this group of pioneers thought was impossible then may be possible now, it's even clearer that some things we think are impossible now have been possible all along.

When I was in grad school, we read a lot of new and recent research papers. But the most amazing, most educational, and most inspiring stuff I read was old. That's often true today as well.


Financial Agile tweets:

"If it disagrees with experiment, it's wrong". Classic.

... with a link to The Scientific Method with Feynman, which has a wonderful ten-minute video of the physicist explaining how science works. Among its important points is that guessing is huge part of science. It's just that scientists have a way of telling which guesses are right and which are wrong.


James Boyk, in Six Words:

Like others of superlative gifts, he seemed to think the less gifted could do as well as he, if only they knew a few powerful specifics that could readily be conveyed. Sometimes he was right!

"He" is Leonid Hambro, who played with Victor Borge and P. D. Q. Bach but was also well-known as a teacher and composer. Among my best teachers have been some extraordinarily gifted people. I'm thankful for the time they tried to convey their insights to the likes of me.


Amanda Palmer, in a conference talk:

We can only connect the dots that we collect.

Palmer uses this sentence to explain in part why all art is about the artist, but it means something more general, too. You can build, guess, and teach only with the raw materials that you assemble in your mind and your world. So collect lots of dots. In this more prosaic sense, Palmer's sentence applies not only to art but also to engineering, science, and teaching.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

May 26, 2013 9:45 AM

Programming Magic and Business Skeuomorphism

Designer Craig Mod offers Marco Arment's The Magazine as an exemplar of Subcompact Publishing in the digital age: "No cruft, all substance. A shadow on the wall."; a minimal disruptor that capitalizes on the digital medium without tying itself down with the strictures of twentieth-century hardcopy technology.

After detailing the advantages of Arment's approach, Mod points out the primary disadvantage: you have to be able to write an iOS application. Which leads to this gem:

The fact that Marco -- a programmer -- launched one of the most 'digitally indigenous' contemporary tablet publications is indicative of two things:
  1. Programmers are today's magicians. In many industries this is obvious, but it's now becoming more obvious in publishing. Marco was able to make The Magazine happen quickly because he saw that Newsstand was underutilized and understood its capabilities. He knew this because he's a programmer. Newsstand wasn't announced at a publishing conference. It was announced at the WWDC.
  2. The publishing ecosystem is now primed for complete disruption.

If you are a non-programmer with ideas, don't think I just need a programmer; instead think, I need a technical co-founder. A lot of people think of programming as Other, as a separate world from what they do. Entrepreneurs such as Arment, and armies of young kids writing video games and apps for their friends, know instead that it is a tool they can use to explore their interests.

Mod offers a nice analogy from the design world to explain why entrenched industry leaders and even prospective entrepreneurs tend to fall into the trap of mimicking old technology in their new technologies: business skeuomorphism.

For example, designers "bring the mechanical camera shutter sound to digital cameras because it feels good" to users. In a similar way, a business can transfer a decision made under the constraints of one medium or market into a new medium or market in which the constraints no longer apply. Under new constraints, and with new opportunities, the decision is no longer a good one, let alone necessary or optimal.

As usual, I am thinking about how these ideas relate to the disruption of university education. In universities, as in the publishing industry, business skeuomorphism is rampant. What is the equivalent of the Honda N360 in education? Is it Udacity or Coursera? Enstitute? Or something simpler?

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 25, 2013 12:12 PM

Getting the Right Relationship with a Class of Students

Thinking back over the semester, both about my course and about conversations I've had with colleagues about theirs, I'm reminded of a short passage from Keith Johnstone's "Impro: Improvisation and the Theatre":

There seems no doubt that a group can make or break its members, and that it's more powerful than the individuals in it. A great group can propel its members forward so that they achieve amazing things. Many teachers don't seem to think that manipulating a group is their responsibility at all. If they're working with a destructive, bored group, they just blame the students for being 'dull', or uninterested. It's essential for the teacher to blame himself if the group isn't in a good state.

It is hard to predict when a group of students will come together on its own and propel its members forward to achieve amazing things. Sometimes it happens almost spontaneously, as if by magic. I doubt it's magic, though, as much as students with the right attitudes and with skills for working with others. Sometimes, we don't know our students as well as we think. In any case, we teachers love to be around when the lightning strikes.

Bad teachers too often blame their students, or external circumstances, when a class dynamic turns destructive. Good teachers know it's their job to mold the group. They pay attention to attitudes and interactions, and they work to steer the group toward cohesiveness and accomplishment. Great teachers deliver consistently, within the constraints of their environments.

I'd love to say that I aspire to greatness, but that's too lofty a goal. These days, I still have to work hard just to get a peek at good every now and then. Being aware that the group dynamic is part of the teacher's responsibility is a good first step.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 23, 2013 3:57 PM

Bad Examples Are Everywhere

... even in the Java class libraries.

Earlier today, @fogus joked:

The java.awt.Point class was only created because someone needed a 1st example to show how to make an object for a book they were writing.

My response was only half-joking:

And I use it as an example of how not to make a class.

If you have ever seen the Point class, you might understand why. Two public instance variables, seven methods for reading and writing the instance variables, and only one method (translate) that could conceivably be considered a behavior. But it's not; it's just a relative writer.

When this is the first class we show our students and ask them to use, we immediately handicap them with an image of objects as buckets of data and programs as manipulation of values. We may as well teach them C or Pascal.

This has long been a challenge for teaching OOP in CS1. If a class has simple enough syntax for the novice programmer to understand, it is generally a bad example of an object. If a class has interesting behavior, it is generally too complex for the novice programmer to understand.

This is one of the primary motivations for authors to create frameworks for their OOP/CS1 textbooks. One of the earliest such frameworks I remember was the Graphics Package (GP) library in Object-Oriented Programming in Pascal, by Connor, Niguidula, and van Dam. Similar approaches have been used in more recent books, but the common thread is an existing set of classes that allow users to use and create meaningful objects right away, even as they learn syntax.

a hadrosaurus, which once roamed the land with legacy CS profs

Many of these frameworks include a point class as egregious as Java's, GP among them. But with such a framework, the misleading Point class need not be the first thing students see, and when students do encounter it, they use it in a context of rich objects interacting as objects should.

These frameworks create a new challenge for the legacy CS profs among us. We like to "begin with the fundamentals" and have students write programs "from scratch", so that they "understand the entire program" from the beginning. Because, you know, that's the way we learned to program.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 21, 2013 3:05 PM

Exercises in Exercises in Style

I just registered for Strange Loop 2013, which doesn't happen until this fall. This has become a popular conference, deservedly so, and it didn't seem like a good idea to wait to register and risk being shut out.

One of the talks I'm looking forward to is by Crista Lopes. I mentioned Crista in a blog entry from last year's Strange Loop, for a talk she gave at OOPSLA 2003 that made an analogy between programming language and natural language. This year, she will give a talk called Exercises in Style that draws inspiration from a literary exercise:

Back in the 1940s, a French writer called Raymond Queneau wrote an interesting book with the title Exercises in Style featuring 99 renditions of the exact same short story, each written in a different style. This talk will shamelessly do the same for a simple program. From monolithic to object-oriented to continuations to relational to publish/subscribe to monadic to aspect-oriented to map-reduce, and much more, you will get a tour through the richness of human computational thought by means of implementing one simple program in many different ways.

If you've been reading this blog for long, you can imagine how much I like this idea. I even checked Queneau's book out of the library and announced on Twitter my plan to read it before the conference. From the response I received, I gather a lot of conference attendees plan to do the same. You gotta love the audience Strange Loop cultivates.

I actually have a little experience with this idea of writing the same program in multiple styles, only on a much smaller scale. For most of the last twenty years, our students have learned traditional procedural programming in their first-year sequence and object-oriented programming in the third course. I taught the third course twice a year for many years. One of the things I often did early in the course was to look at the same program in two forms, one written in a procedural style and one written in OOP. I hoped that the contrast between the programs would help them see the contrast between how we think about programs in the two styles.

I've been teaching functional programming regularly for the last decade, after our students have seen procedural and OO styles in previous courses, but I've rarely done the "exercises in style" demo in this course. For one thing, it is a course on languages and interpreters, not a course on functional programming per se, so the focus is on getting to interpreters as soon as possible. We do talk about differences in the styles in terms of their concepts and the underlying differences (and similarities!) in their implementation. But I think about doing so every time I prep the next offering of the course.

Not doing "exercises in style" can be attractive, too. Small examples can mislead beginning students about what is important, or distract them with concepts they won't understand for a while. The wrong examples can damage their motivation to learn. In the procedural/object-oriented comparison, I have had reasonable success in our OOP course with a program for simple bank accounts and a small set of users. But I don't know how well this exercise would work for a larger and more diverse set of styles, at least not at a scale I could use in our courses.

I thought of this when @kaleidic tweeted, "I hope @cristalopes includes an array language among her variations." I do, too, but my next thought was, "Well, now Crista needs to use an example problem for which an array language is reasonably well-suited." If the problem is not well suited to array languages, the solution might look awkward, or verbose, or convoluted. A newcomer to array languages is left to wonder, "Is this a problem with array languages, or with the example?" Human nature being what it is, too many of us are prone to protect our own knowledge and assume that something is wrong with the new style.

An alternative approach is to get learners to suspend their disbelief for a while, learn some nuts and bolts, and then help them to solve bigger problems using the new style. My students usually struggle with this at first, but many of them eventually reach a point where they "get" the style. Solving a larger problem gives them a chance to learn the advantages and disadvantages of their new style, and retroactively learn more about the advantages and disadvantages of the styles they already know well. These trade-offs are the foundation of a really solid understanding of style.

I'm really intrigued by Queneau's idea. It seems that he uses a small example not to teach about each style in depth but rather to give us a taste. What does each style feel like in isolation? It is up to the aspiring writer to use this taste as a starting point, to figure out where each style might take you when used for a story of the writer's choosing.

That's a promising approach for programming styles, too, which is one of the reasons I am so looking forward to Crista's talk. As a teacher, I am a shameless thief of good ideas, so I am looking forward to seeing the example she uses, the way she solves it in the different styles, and the way she presents them to the crowd.

Another reason I'm looking forward to the talk is that I love programs, and this should be just plain fun.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

May 14, 2013 2:35 PM

An Unusual Scheme-ism This Semester

While grading final projects and final exams in my Programming Languages courses this week, I noticed my students using an unusual Scheme construction. Instead of writing:

(cons new-item result-of-recursive-call)

... a few troubled souls fell into the habit of writing:

(append (list new-item) result-of-recursive-call)

We have been using Scheme in this course for many years, and this may be the first semester that I've ever seen this. It is certainly the first time I saw it enough to notice it and wonder, "What's up with that?"

Of course, in previous semesters, a few students have always used a similar construction when putting new items at the back of the list:

(append result-of-recursive-call (list new-item))

This semester, though, a handful used (append ... (list ... all over the place. When I asked one of them about it after seeing it on an earlier homework assignment, he didn't have much to say, so we talked about the more direct solution a bit and moved on.

But after seeing it multiple times on the final exam, I have to wonder what is going on. Maybe they simply adopted this form as a mental shorthand, a one-size-fits-all tool that gave them one fewer thing to think about when writing code. Maybe, though, it masks a systematic error in thinking that I might address.

Out of curiosity, I ran a quick experiment to see what sort of time advantage (cons ... has over (append ... (list .... Surely, initializing the cdr field of a new list's cons cell to null, then asking append to walk there and set it to point to the other list wastes time. But how much?

In Racket, the penalty is actually quite small, in the ballpark of 40ms for every million executions. That's not surprising, given that append must examine only one cell before reaching the end of its first argument. This stands in contrast to the more general case of wrapping an O(n) append operation inside another O(n) operation, which my students encounter when learning how to implement more efficient algorithms using an accumulator variable.
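For the curious, the comparison can be sketched in Racket. This is a minimal illustration of the two list-building forms and a rough timing harness, not my actual experiment; the function names are mine, invented for the example:

```scheme
#lang racket

;; The two ways students built the same list. Both return (1 2 3),
;; but the second allocates an extra one-element list and makes
;; append walk it before splicing on the rest.
(define (add-with-cons item lst)     ; direct: one new cons cell
  (cons item lst))

(define (add-with-append item lst)   ; indirect: (append (list ...) ...)
  (append (list item) lst))

;; The two forms are observably equivalent:
(equal? (add-with-cons 1 '(2 3))
        (add-with-append 1 '(2 3)))  ; => #t

;; A rough timing of a million executions of each:
(time (for ([i (in-range 1000000)]) (add-with-cons 0 '(1 2 3))))
(time (for ([i (in-range 1000000)]) (add-with-append 0 '(1 2 3))))
```

On any given machine the absolute numbers will differ, but the relative gap should stay small for single-item prepends, for the reason given above: append examines only one cell before reaching the end of its first argument.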

More important than any speed penalty, though, is the misunderstanding that enables or encourages a student to think about reassembling a structure of this sort with anything other than a straightforward call to cons. Seeing an anti-pattern of this sort in my students' code makes me want to uncover the pathology at play and root it out.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 10, 2013 4:03 PM

Using Language to Understand a Data Set

Today was our twice-annual undergraduate research presentation day. Every B.S. student must do an undergraduate research project and present the results publicly. For the last few years, we have pooled the presentations on the morning of the Friday in finals week, after all the exams are given and everyone has a chunk of time free to present. It also means that more students and professors can attend, which makes for a more engaging audience and a nice end to everyone's semester.

I worked with one undergraduate research student this spring. As I mentioned while considering the role of parsing in a compilers course, this student was looking for patterns in several years of professional basketball play-by-play data. His ultimate goal was to explore ways of measuring the impact of individual defensive performance in the NBA -- fairly typical Moneyball stuff applied to a skill that is not well measured or understood.

This project fell into my hands serendipitously. The student had approached a couple of other professors, who upon hearing the word "basketball" immediately pointed him to me. Of course, the project is really a data analytics project that just happens to involve a dataset from basketball, but... Fortunately, I am interested in both the approach and the domain!

As research sometimes does, this problem led the student to a new problem first. In order to analyze data in the way he wanted, he needed data of a particular sort. There is plenty of play-by-play data available publicly on the web, but it's mostly prepared for presentation in HTML. So he first had to collect the data by scraping the web, and then organize it into a data format amenable to analysis.

This student had taken my compiler course the last time around, and his ability to parse several files of similar but just-different-enough data proved to be invaluable. As presented on such sites, the data is nowhere near ready to be studied.

As the semester wore on, he and I came to realize that his project this semester wouldn't be the data analysis he originally intended to do. It was a substantial project simply to make sense of the data he had found.

As he presented his work today, I realized something further. He was using language to understand a data set.

He started by defining a grammar to model the data he found, so that he could parse it into a database. This involved recognizing categories of expression that were on the surface of the data, such as made and missed field goals, timeouts, and turnovers. When he ran this first version of his parser, he found unhandled entries and extended his grammar.

Then he looked at the semantics of the data and noticed discrepancies deeper in the data. The number of possessions his program observed in a game differed from the expected values, sometimes wildly and with no apparent pattern.

As we looked deeper, we realized that the surface syntax of the data often obscured some events that would extend or terminate a possession. A simple example is a missed FT, which sometimes ends a possession and sometimes not. It depends in part on the next event in the timeline.

To handle these cases, the student created new syntactic categories that enabled his parser to resolve such issues by recognizing composite events in the data. As he did this, his grammar grew, and his parser became better at building a more accurate semantic model of the game.
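To make the idea concrete, here is a hypothetical sketch of what such composite categories might look like. This is not the student's actual grammar; every category name here is invented for illustration:

```
game           ::= possession*
possession     ::= event* possession-end
possession-end ::= made-fg
                 | turnover
                 | missed-shot def-rebound     ; composite: the rebound decides
                 | missed-last-ft def-rebound  ; a missed FT ends the trip only
                                               ;   if the defense rebounds it
event          ::= missed-shot off-rebound | foul | timeout | substitution
```

The composite productions pair a surface event with the next event in the timeline, which is exactly the kind of lookahead that the surface syntax of the play-by-play log hides.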

This turned out to be a semester-long project in its own right. He's still not done and intends to continue with this research after graduation. We were both a bit surprised at how much effort it took to corral the data, but in retrospect we should not have been too surprised. Data are collected and presented with many different purposes in mind. Having an accurate, deep model of the underlying phenomenon in question isn't always one of them.

I hope the student was pleased with his work and progress this semester. I was. In addition to its practical value toward solving a problem of mutual interest, it reminded me yet again of the value of language in understanding the world around us, and the remarkable value that the computational ideas we study in computer science have to offer. For some reason, it also reminded me, pleasantly, of the Racket Way. As I noted in that blog entry, this is really the essence of computer science.

Of course, if some NBA team were to give my student the data he needs in suitable form, he could dive into the open question of how better to measure individual defensive performance in basketball. He has some good ideas, and the CS and math skills needed to try them out.

Some NBA team should snatch this guy up.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 30, 2013 4:53 PM


A student stopped in for a chat late last week to discuss the code he was writing for a Programming Languages assignment. This was the sort of visit a professor enjoys most. The student had clearly put in plenty of time on his interpreter and had studied the code we had built in class. His code already worked. He wanted to talk about ways to make his code better.

Some students never reach this point before graduation. In Coders at Work, Bernie Cosell tells a story about leading teams of new hires at BBN:

I would get people -- bright, really good people, right out of college, tops of their classes -- on one of my projects. And they would know all about programming and I would give them some piece of the project to work on. And we would start crossing swords at our project-review meetings. They would say, "Why are you complaining about the fact that I have my global variables here, that I'm not doing this, that you don't like the way the subroutines are laid out? The program works."

They'd be stunned when I tell them, "I don't care that the program works. The fact that you're working here at all means that I expect you to be able to write programs that work. Writing programs that work is a skilled craft and you're good at it. Now, you have to learn how to program."

I always feel that we have done well by our students if we can get them to the point of caring about their craft before they leave us. Some students come to us already having this mindset, which makes for a very different undergraduate experience. Professors enjoy working with these students, too.

But what stood out to me most from this particular conversation was something the student said, something to this effect:

When we built the lexical addresser in class a few weeks ago, I didn't understand the idea and I couldn't write it. So I studied it over and over until I could write it myself and understand exactly why it worked. We haven't looked at lexical addressing since then, but the work I did has paid off every time we've written code to process programs in our little languages, including this assignment. And I write code more quickly on the exams now, too.

When he finished speaking, I could hardly contain myself. I wish I could bottle this attitude and give it to every student who ever thinks that easy material is an opportunity to take it easy in a course for a while. Or who thinks that the best response to difficult material is to wait for something easier to come along next chapter.

Both situations are opportunities to invest energy in the course. The returns on investment are deeper understanding of the material, sharper programming skills, and the ability to get stuff done.

This student is reaping now the benefits of an investment he made five weeks ago. It's a gift that will keep on giving long after this course is over.

I encourage students to approach their courses and jobs in this way, but the message doesn't always stick. As Clay Stone from City Slickers might say, I'm happy as a puppy with two peters whenever it does.

While walking this morning, I coined a word for this effect: exceleration. It's a portmanteau combining "excellence" and "acceleration", which fits this phenomenon well. As with compound interest and reinvested dividends, this sort of investment builds on itself over time. It accelerates learners on their path to mastering their craft.

Whatever you call it, that conversation made my week.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 25, 2013 4:03 PM

Toward a Course on Reading Code

Yesterday, I tweeted absent-mindedly:

It would be cool to teach a course called "Reading Code".

Reading code has been on my mind for a few months now, as I've watched my students read relatively small pieces of code in my Programming Languages course and as I've read a couple of small libraries while riding the exercise bike. Then I ran across John Regehr's short brainstorm on the topic, and something clicked. So I tweeted.

Reading code, or learning to do it, must be on the minds of a lot of people, because my tweet elicited quite a few questions and suggestions. It is an under-appreciated skill. Computer science programs rarely teach students how to read code, and when they do, it is usually only implicit: students hear a prof or other students talk about code they've read.

Several readers wanted to know what the course outline would be. I don't know. That's one of the things about Twitter or even a blog: it is easy to think out loud absent-mindedly without having much content in mind yet. It's also easier to express an interest in teaching a course than to design a good one.

Right now, I have only a few ideas about how I'd start. Several readers suggested Code Reading by Spinellis, which is the only textbook I know on the topic. It may be getting a little old these days, but many of the core techniques are still sound.

I was especially pleased that someone recommended Richard Gabriel's idea for an MFA in Software, in which reading plays a big role. I've used some of Dick's ideas in my courses before. Ironically, the last time I mentioned the MFA in Software idea in my blog was in the context of a "writing code" course, at the beginning of a previous iteration of Programming Languages!

That's particularly funny to me because someone replied to my tweet about teaching a course called "Reading Code" with:

... followed by a course "Writing Readable Code".

Anyone who has tried to grade thirty evolving language interpreters each week appreciates this under-appreciated skill.

Chris Demwell responded to my initial tweet with direct encouragement: Write the course, or at least an outline, and post it. I begged indulgence for lack of time as the school year ends and said that maybe I can take a stab this summer. Chris's next tweet attempted to pull me into the 2010s:

1. Write an outline. 2. Post on github. 3. Accept pull requests. Congrats, you're an editor!

The world has indeed changed. This I will do. Watch for more soon. In the meantime, feel free to e-mail me your suggestions. (That's an Old School pull request.)

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

April 21, 2013 10:25 AM

Catnip for Programmers

This morning, Maciej Ceglowski of Pinboard introduced me to the Matasano crypto challenges, a set of exercises created by Thomas Ptacek and his team as a tool for teaching programmers a little about cryptography, some of its challenges, and the need for more awareness of how easy it is to do it wrong. With the coming break from the grind of the academic year, I plan on giving them a try.

After having completed the exercises himself, Ceglowski observes:

Crypto is like catnip for programmers. It is hard to keep us away from it, because it's challenging and fun to play with. And programmers respond very badly to the insinuation that they're not clever enough to do something. We see the F-16 just sitting there, keys in the ignition, no one watching, lights blinking, ladder extended. And some infosec nerd is telling us we can't climb in there, even though we just want to taxi around a little and we've totally read the manual.

I've noticed this with a number of topics in computing. In addition to cryptography, data compression and sorting/searching are sirens to the best programmers among our students. "What do you mean we can't do better?"

For many undergrads, the idea of writing a compiler seems a mystery. Heck, I admit to my students that even after years of teaching the course I remain in awe of my language tools and the people who build them. This challenge keeps a steady if relatively small stream of programmers flowing into our "Translation of Programming Languages" project course.

One of the great things about all these challenges is that after we tackle them, we have not only the finished product in hand but also what we learn about the topic -- and ourselves -- along the way. Then we are ready for a bigger challenge, and another program to write.

For CS faculty, catnip topics are invaluable ways to draw more students into the spell of computing, and more deeply. We are always on the lookout.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 10, 2013 4:03 PM

Minor Events in the Revolution at Universities

This morning I ran across several articles that had me thinking yet again about the revolution I see happening in the universities (*).

First, there was this recent piece in the New York Times about software that grades essays. Such software is probably essential for MOOCs in many disciplines, but it would also be useful in large lecture sections of traditional courses at many universities. The software isn't perfect, and skeptics abound. But the creator of the EdX software discussed in the article says:

This is machine learning and there is a long way to go, but it's good enough and the upside is huge.

It's good enough, and the upside is huge. Entrenched players scoff. Classic disruption at work.

Then there was this piece from the Nieman Journalism Lab about an online Dutch news company that wants readers to subscribe to individual journalists. Is this really news in 2013? I read a lot of technical and non-technical material these days via RSS feeds from individual journalists and bloggers. Of course, that's not the model yet for traditional newspapers and magazines.

... but that's the news business. What about the revolution in universities? The Nieman Lab piece reminded me of an old article in Vanity Fair about Politico, a news site founded by a small group of well-known political journalists who left their traditional employers to start the company. They all had strong "personal brands" and journalistic credentials. Their readers followed them to their new medium. Which got me to thinking...

What would happen if the top 10% of the teachers at Stanford or Harvard or Williams College just walked out to start their own university?

Of course, in the time since that article was published, we have seen something akin to this, with the spin-off of companies like Coursera and Udacity. However, these new education companies are partnering with traditional universities and building off the brands of their partners. At this point in time, the brand of a great school still trumps the individual brands of most all its faculty. But one can imagine a bolder break from tradition.

What happens when technology gives a platform to a new kind of teacher who bypasses the academic mainstream to create and grow a personal brand? What happens when this new kind of teacher bands together with a few like-minded renegades to use the same technology to scale up to the size of a traditional university, or more?

That will never happen, or so many of us in the academy are saying. This sort of thinking is what makes the Dutch news company mentioned above seem like such a novelty in the world of journalism. Many journalists and media companies, though, now recognize the change that has happened around them.

Which leads to a final piece I read this morning, a short blog entry by Dave Winer about Ezra Klein's epiphany on how blogging and journalism are now part of a single fabric. Winer says:

It's tragic that it took a smart guy like Klein so long to understand such a basic structural truth about how news, his own profession, has been working for the last 15 years.

I hope we aren't saying the same thing about the majority of university professors fifteen or twenty years from now. As we see in computers that grade essays, sometimes a new idea is good enough, and the upside is huge. More and more people will experiment with good-enough ideas, and even ideas that aren't good enough yet, and as they do the chance of someone riding the upside of the wave to something really different increases. I don't think MOOCs are a long-term answer to any particular educational problem now or in the future, but they are one of the laboratories in which these experiments can be played out.

I also hope that fifteen or twenty years from now someone isn't saying about skeptical university professors what Winer says so colorfully about journalists skeptical of the revolution that has redefined their discipline while they worked in it:

The arrogance is impressive, but they're still wrong.


(*).   Nearly four years later, Revolution Out There -- and Maybe In Here remains one of my most visited blog entries, and one that elicits more reader comments than most. I think it struck a chord.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 07, 2013 2:23 PM

To Solve Or Not To Solve

This week I bumped into a familiar tension that arises whenever I assign a new homework problem to students.

Path 1. I assign a new problem before I solve it myself. The problem turns out to be too difficult, or includes a complication or two that I didn't anticipate. Students become frustrated, especially the weaker ones. Because of the distraction, most everyone misses the goal of the assignment.

Path 2. I solve a new problem before I assign it. I run into a couple of unexpected wrinkles, things that make the problem less straightforward than I had planned. "This will distract my students," I think, so I iron out a wrinkle here and eliminate a distraction there. Then I assign the problem to my students. The result feels antiseptic, unchallenging. Some students are bored with the problem, especially the stronger ones.

In this week's case, I followed the first path. I assigned a new problem in my Programming Languages course before solving it myself. When I sat down to whip up my solutions, I realized the problem held a couple of surprises for me. I had a lot of fun with those surprises and was happy with the code that resulted. But I also realized that my students will have to do more fancy thinking than I had planned on. Nothing too tricky, just a couple of non-obvious steps along the way to an obvious solution.

Will that defeat the point of assigning the problem? Will my stronger students be happy for the challenge? Will my weaker students be frustrated, or angry at their inability to solve a problem that looks so much like another they have already solved? Will the problem help students learn something new about the topic at hand?

I ultimately believe that students benefit strongly from the challenge of problems that have not been pre-digested for them by the instructor or textbook. When we take too much care in preparing assignments ahead of time, we rob students of the joy of solving a problem that has rough edges.

Joy, and the sense of accomplishment that comes from taking on a real challenge. Problems in the world typically have rough edges, or throw wrinkles at us when we aren't looking for them.

Affording students the joy of solving a real problem is perhaps especially important for the stronger students, who often have to live with a course aimed at the middle of the curve. But it's just as important for the rest of the class. Skill and confidence grow out of doing something worth doing, even if it takes a little help from the professor.

I continue to use both approaches when I create assignments, sometimes solving a new problem first and sometimes trusting my instinct. The blend keeps assignments from veering too far in one direction or the other, I think, which gives students some balance.

However, I am usually most happy when I let a new problem surprise us all. I try to keep these problems on track by paying closer attention to students as they begin to work on them. When we run into an unexpected rough edge, I try to intervene with just enough assistance to get them over the complexity, but not so much as to sterilize the problem.

Finding the right balance between too clean and too rough is a tough problem for a teacher to solve. It is a problem worth solving, and a source of both disappointment and joy.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

April 05, 2013 3:21 PM

The Role of Parsing in a Compilers Course

I teach compilers again this fall. I'm looking forward to summer, when I'll have a chance to write some code and play with some ideas for the course.

This morning I thought a bit about a topic that pops up every time I prep the course. The thoughts were prompted by a tweet from James Coglan, which said "Really wish this Compilers course weren't half about parsing. ... get on with semantics." The ellipsis is mine; James's tweet said something about using lex/yacc to get something up and running fast. Then, presumably, we could get on to the fun of semantics.

This is a challenge for my compilers course, too. I know I don't want to rush through scanning and parsing, yet I also wish I had more time for static analysis, optimization, and code generation. Even though I know the value of parsing, I wish I had equal time for a lot of other cool topics.

Geoff Wozniak's response expressed one of the reasons parsing still has such a large role in my compilers course, and so many others:

Parsing is like assembly language: it seems superfluous at the time, but provides deep understanding later. It's worth it.

That's part of what keeps me from de-emphasizing it in my course. Former students often report back to me that they have used their skill at writing parsers frequently in their careers, whether for parsing DSLs they whip up or for making sense of a mess of data they want to process.

A current student is doing an undergrad research project that involves finding patterns in several years of professional basketball play-by-play data, and his ability to parse several files of similar but just-different-enough data proved invaluable. Of course, he was a bit surprised that corralling the data took as much effort as it did. Kind of like how scanning and parsing are such a big part of a compiler project.

I see now that James has tweeted a retraction:

... ppl are RTing something I said about wishing the Compilers course would get done with parsing ASAP. Don't believe this any more.

I understand the change of opinion. After writing a compiler for a big language and learning the intricacies that are possible, it's easy to reach Geoff's position: a deep understanding comes from the experience.

That doesn't mean I don't wish my semester were twenty weeks instead of fifteen, so that I could go deeper on some other topics, too. I figure there will always be some tension in the design of the course for just that reason.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

March 30, 2013 8:43 AM

"It's a Good Course, But..."

Earlier this week I joined several other department heads to eat lunch with a bunch of high school teachers who were on campus for the Physics Olympics. The teachers were talking shop about the physics courses at their schools, and eventually the conversation turned to AP Physics. One of the teachers said, "It's a good course, but..."

A lot of these teachers would rather not offer AP Physics at all. One teacher described how in earlier days they were able to teach an advanced physics course of their own design. They had freedom to adapt to the interest of their students and to try out new ideas they encountered at conferences. Even though the advanced physics course had first-year physics as a prerequisite, they had plenty of students interested and able to take the second course.

The introduction of AP Physics created some problems. It's a good course, they all agreed, but it is yet another AP course for their students to take, and yet another AP exam for the students to prepare for. Most students can't or don't want to take all the AP courses, due to the heavier workload and often grueling pace. So in the end, they lose potential students who choose not to take the physics class.

Several of these teachers tried to make this case to heads of their divisions or to their principals, but to no avail.

This makes me sad. I'd like to see as many students taking science and math courses in high school as possible, and creating unnecessary bottlenecks hurts that effort.

There is a lot of cultural pressure these days to accelerate the work that HS students do. K-12 school districts and their administrators see the PR boon of offering more, and more advanced courses. State legislators are creating incentives for students to earn college credit while in high school, and funding for schools can reflect that. Parents love the idea of their children getting a head start on college, both because it might save money down the line and because they earn some vicarious pleasure in the achievement of their children.

On top of all this, the students themselves often face a lot of peer pressure from their friends and other fellow students to be doing and achieving more. I've seen that dynamic at work as my daughters have gone through high school.

Universities don't seem as keen about AP as they used to, but they send a mixed message to parents and students. On the one hand, many schools give weight in their admission decisions to the number of AP courses completed. This is especially true with more elite schools, which use this measure as a way to demonstrate their selectivity. Yet many of those same schools are reluctant to give full credit to students who pass the AP exam, at least as major credit, and require students to take their intro course anyway.

This reluctance is well-founded. We don't see any students who have taken AP Computer Science, so I can't comment on that exam, but I've talked with several Math faculty here about their experiences with calculus. They say that, while AP Calculus teaches a lot of good material, the rush to cover required calculus content often leaves students with weak algebra skills. They manage to succeed in the course despite these weaknesses, but when they reach more advanced university courses -- even Calc II -- these weaknesses come back to haunt them.

As a parent of current and recent high school students, I have observed the student experience. AP courses try to prepare students for the "college experience" and as a result cover a lot of material. The students see them as grueling experiences, even when they enjoy the course content.

That concerns me a bit. For students who know they want to be math or science majors, these courses are welcome challenges. For the rest of the students, who take the courses primarily to earn college credit or to explore the topic, these courses are so grueling that they dampen the fun of learning.

Call me old-fashioned, but I think of high school as a time to learn about a lot of different things, to sample broadly from all areas of study. Sure, students should build up the skills necessary to function in the workplace and go to college, but the emphasis should be on creating a broadly educated citizen, not training a miniature college student. I'd rather students get excited about learning physics, or math, or computer science, so that they will want to dive deeper when they get to college.

A more relaxed, more flexible calculus class or physics course might attract more students than a grueling AP course. This is particularly important at a time when everyone is trying to increase interest in STEM majors.

My daughters have had a lot of great teachers, both in and out of their AP courses. I wish some of those teachers had had more freedom to spark student interest in the topic, rather than student and teacher alike facing the added pressure of taking the AP exam, earning college credits, and affecting college admission decisions.

It's a good course, but feel the thrill first.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

March 22, 2013 9:17 AM

Honest Answers: Learning APL

After eighteen printed pages showing the wonders of APL in A Glimpse of Heaven, Bernard Legrand encourages programmers to give the language a serious look. But he cautions APL enthusiasts not to oversell the ease of learning the language:

Beyond knowledge of the basic elements, correct APL usage assumes knowledge of methods for organising data, and ways specific to APL, of solving problems. That cannot be learnt in a hurry, in APL or any other language.

Legrand is generous in saying that learning APL takes the same amount of time as learning any other language. In my experience, both as a learner of languages and as a teacher of programmers, languages and programming styles that are quite different from one's experience take longer to learn than more familiar topics. APL is one of those languages that requires us to develop entirely new ways of thinking about data and process, so it will take most people longer to learn than yet another C-style imperative language or OO knock-off.

But don't be impatient. Wanting to move too quickly is a barrier to learning and performing at all scales, and too often leads us to give up too soon. If you give up on APL too soon, or on functional programming, or OOP, you will never get to glimpse the heaven that experienced programmers see.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 08, 2013 3:50 PM

Honest Answers: Debugging

We have all been there:

Somehow, at some point in every serious programming project, it always comes down to the last option: stare at the code until you figure it out. I wish I had a better answer, but I don't. Anyway, it builds character.

This is, of course, the last resort. We need to teach students better ways to debug before they have to fall back on what looks a lot like wishful thinking. Fortunately, John Regehr lists this approach as the last resort in his lecture on How to Debug. Before he tells students to fall back to the place we all have to fall back to occasionally, he outlines an explicit, evidence-driven process for finding errors in a program.

I like that Regehr includes this advice for what to do after you find a bug: step back and figure out what error in thinking led to the bug.

An important part of learning from a mistake is diagnosing why you made it, and then taking steps wherever possible to make it difficult or impossible to make the same mistake again. This may involve changing your development process, creating a new tool, modifying an existing tool, learning a new technique, or some other act. But it requires an act. Learning rarely just happens.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 07, 2013 3:31 PM

A Programming Koan

Student: "I didn't have time to write 150 lines of code for the homework."

Master: "That's fine. It requires only 50."

Student: "Which 50?"

I have lived this story several times recently, as the homework in my Programming Languages course has become more challenging. A few students do not complete the assignment because they do not spend enough time on the course, either in practice or performance. But most students do spend enough time, both in practice and on the assignment. Indeed, they spend much more time on the assignment than I intend.

When I see their code, I know why. They have written long solutions: code with unnecessary cases, unnecessary special cases, and unnecessary helper functions. And duplication -- lots and lots of duplication. They run out of time to write the ten lines they need to solve the last problem on the set because they spent all their time writing thirty lines on each of the preceding problems, where ten would have done quite nicely.

Don't let anyone fool you. Students are creative. The trick is to help them harness their creativity for good. The opposite of good here is not evil, but bad code -- and too much code.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 28, 2013 2:52 PM

The Power of a Good Abstract

Someone tweeted a link to Philip Greenspun's M.S. thesis yesterday. This is how you grab your reader's attention:

A revolution in earthmoving, a $100 billion industry, can be achieved with three components: the GPS location system, sensors and computers in earthmoving vehicles, and SITE CONTROLLER, a central computer system that maintains design data and directs operations. The first two components are widely available; I built SITE CONTROLLER to complete the triangle and describe it here.

Now I have to read the rest of the thesis.

You could do worse than use Greenspun's first two sentences as a template for your next abstract:

A revolution in <major industry or research area> can be achieved with <n> components: <component-1>, <component-2>, ... and <component-n>. The first <n-1> components are widely available. I built <program name> to meet the final need and describe it here.

I am adding this template to my toolbox of writing patterns, alongside Kent Beck's four-sentence abstract (scroll down to Kent's name), which generalizes the idea of one startling sentence that arrests the reader. I also like good advice on how to write concise, incisive thesis statements, such as that in Matt Might's Advice for PhD Thesis Proposals and Olin Shivers's classic Dissertation Advice.

As with any template or pattern, overuse can turn a good idea into a cliché. If readers repeatedly see the same cookie-cutter format, it begins to look stale and will cause the reader to lose interest. So play with variations on the essential theme: I have solved an important problem. This is my solution.

If you don't have a great abstract, try again. Think hard about your own work. Why is this problem important? What is the big win from my solution? That's a key piece of Might's advice for graduate students: state clearly and unambiguously what you intend to achieve.

Indeed, approaching your research in a "test-driven" way makes a lot of sense. Before embarking on a project, try to write the startling abstract that will open the paper or dissertation you write when you have succeeded. If you can't identify the problem as truly important, then why start at all? Maybe you should pick something more valuable to work on, something that matters enough that you can write a startling abstract for the result. That's a key piece of advice shared by Richard Hamming in his You and Your Research.

And whatever you do, don't oversell a minor problem or a weak solution with an abstract that promises too much. Readers will be disappointed at best and angry at worst. If you oversell even a little bit too many times, you will become like the boy who cried wolf. No one will believe your startling claim even when it's on the mark.

Greenspun's startling abstract ends as strongly as it begins. Of course, it helps if you can close with a legitimate appeal to ameliorating poverty around the world:

This area is exciting because so much of the infrastructure is in place. A small effort by computer scientists could cut the cost of earthmoving in half, enabling poor countries to build roads and rich countries to clean up hazardous waste.

I'm not sure adding another automated refactoring to Eclipse or creating another database library can quite rise to the level of empowering the world's poor. But then, you may have a different audience.

Posted by Eugene Wallingford | Permalink | Categories: General, Patterns, Teaching and Learning

February 27, 2013 11:52 AM

Programming, Writing, and Clear Thinking

This Fortune Management article describes a technique Jeff Bezos uses in meetings of his executive team: everyone begins by "quietly absorbing ... six-page printed memos in total silence for as long as 30 minutes".

There is a good reason, Bezos knows, for an emphasis on reading and the written word:

There is no way to write a six-page, narratively structured memo and not have clear thinking.

This is certainly true for programming, that unique form of writing that drives the digital world. To write a well-structured, six-page computer program to perform a task, you have to be able to think clearly about your topic.

Alas, the converse is not true, at least not without learning some specific skills and practicing a lot. But then again, that makes it just like writing narratives.

My Programming Languages students this semester are learning that, for functional programming in Scheme, the size limit is somewhat south of six pages. More along the lines of six lines.

That's a good thing if your goal is clear thinking. Work hard, clarify your thoughts, and produce a small function that moves you closer to your goal. It's a bad thing if your goal is to get done quickly.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

February 20, 2013 3:32 PM

A Lesson about Learning from a Self-Aware Teacher

In a reminiscence on his experience as a student, John Cook writes:

I enjoyed learning about it as a student and I enjoyed teaching it later. (Or more accurately, I enjoyed being exposed to it as a student and really learning it later when I had to teach it.)

It is a commonplace for anyone who has taught that we learn a lot more about any topic when we teach it -- even a topic in which we are acknowledged experts. Between organizing material for instruction and interacting with people as they learn, we learn an awful lot ourselves.

There is, however, a hidden gem in John's comment that is not so commonly talked about: "I enjoyed being exposed to it as a student...".

As teachers, we do well to remember that our students aren't really learning something when they take our courses, especially when the course is their first encounter with the material. We are merely exposing them to the topic, giving them the lay of the land and a little vocabulary. The course is an opportunity to engage with the material, perhaps again. If we don't keep this in mind, we may deceive ourselves with unrealistic expectations about what students will know and be able to do at the end of the course.

A second advantage of remembering this truth is that we may be on guard to create opportunities to deepen their exposure, through homework and projects and readings that pull them in. It is through our students' own efforts that learning takes place, and that our courses succeed.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 18, 2013 12:59 PM

Code Duplication as a Hint to Think Differently

Last week, one of my Programming Languages students sent me a note saying that his homework solution worked correctly but that he was bothered by some duplicated code.

I was so happy.

Any student who has me for class for very long hears a lot about the dangers of duplication for maintaining code, and also that duplication is often a sign of poor design. Whenever I teach OOP or functional programming, we learn ways to design code that satisfies the DRY principle and ways to eliminate duplication via refactoring when it does sneak in.

I sent the student an answer, along with hearty congratulations for recognizing the duplication and wanting to eliminate it. My advice follows.

When I sat down to blog the solution, I had a sense of deja vu... Hadn't I written this up before? Indeed I had, a couple of years ago: Increasing Duplication to Eliminate Duplication. Even in the small world of my own teaching, it seems there is nothing new under the sun.

Still, there was a slightly different feel to the way I talked about this in class later that day. The question had come earlier in the semester this time, so the code involved was even simpler. Instead of processing a vector or a nested list of symbols, we were processing a flat list of symbols. And, instead of applying an arbitrary test to the list items, we were simply counting occurrences of a particular symbol, s.

The duplication occurred in the recursive case, where the procedure handles a pair:

    (if (eq? s (car los))
        (+ 1 (count s (cdr los)))      ; <---
        (count s (cdr los)))           ; <---

Then we make the two sub-cases more parallel:

    (if (eq? s (car los))
        (+ 1 (count s (cdr los)))      ; <---
        (+ 0 (count s (cdr los))))     ; <---

And then use distributivity to push the choice down a level:

    (+ (if (eq? s (car los)) 1 0)
       (count s (cdr los)))            ; <--- just once!

This time, I made a point of showing the students that not only does this solution eliminate the duplication, it more closely follows the guideline to follow the shape of the data:

When defining a program to process an inductively-defined data type, the structure of the program should follow the structure of the data.

This guideline helps many programmers begin to write recursive programs in a functional style, rather than an imperative style.

Note that in the first code snippet above, the if expression is choosing between two different solutions, depending on whether we see the symbol s in the first part of the pair or not. That's imperative thinking.

But look at the list-of-symbols data type:

    <list-of-symbols> ::= ()
                        | (<symbol> . <list-of-symbols>)

How many occurrences of s are in a pair? Obviously, the number of s's found in the car of the list plus the number of s's found in the cdr of the list. If we design our solution to match the code to the data type, then the addition operation should be at the top to begin:

    (+ ; number of s's found in the car
       ; number of s's found in the cdr )

If we define the answer for the problem in terms of the data type, we never create the duplication-by-if in the first place. We think about solving the subproblems for the car and the cdr, fill in the blanks, and arrive immediately at the refactored code snippet above.
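Putting the pieces together, the whole procedure is tiny. Here is a sketch of a complete definition, assuming the name count and the arguments s and los used in the snippets above:

```scheme
;; count : symbol x list-of-symbols -> number
;; Return the number of occurrences of s in the flat list los.
;; The structure of the code follows the structure of the data:
;; a <list-of-symbols> is either () or (<symbol> . <list-of-symbols>).
(define (count s los)
  (if (null? los)
      0                                  ; base case: the empty list
      (+ (if (eq? s (car los)) 1 0)      ; occurrences in the car
         (count s (cdr los)))))          ; occurrences in the cdr
```

For example, (count 'a '(a b a c a)) evaluates to 3. The addition sits at the top of the recursive case, just as the data type suggests, and the if never duplicates the recursive call.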

I have been trying to help my students begin to "think functionally" sooner this semester. There is a lot of room for improvement yet in my approach. I'm glad this student asked his question so early in the semester, as it gave me another chance to model "follow the data" thinking. In any case, his thinking was on the right track.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

February 17, 2013 12:16 PM

The Disruption of Education: B.F. Skinner, MOOCs, and SkillShare

Here are three articles, all different, but with a connection to the future of education.

•   Matthew Howell, Teaching Programming

Howell is a software developer who decided to start teaching programming on the side. He offers an on-line course through SkillShare that introduces non-programmers to the basic concepts of computer programming, illustrated using Javascript running in a browser. This article describes some of his reasons for teaching the course and shares a few things he has learned. One was:

What is the ideal class size? Over the year, I've taught classes that ranged in size from a single person to as many as ten. Through that experience, I've settled on five as my ideal.

Anyone who has taught intro programming in a high school or university is probably thinking, um, yeah, that would be great! I once taught an intermediate programming section with fifty or so people, though most of my programming courses have ranged from fifteen to thirty-five students. All other things being equal, smaller is better. Helping people learn to write and make things almost always benefits from one-on-one time and time for small groups to critique design together.

Class size is, of course, one of the key problems we face in education these days, both K-12 and university. For a lot of teaching, n = 5 is just about perfect. For upper-division project courses, I prefer four groups of four students, for a total of sixteen. But even at that size, the costs a university incurs in offering such sections are rising a lot faster than its revenues.

With MOOCs all the rage, Howell is teaching at the other end of spectrum. I expect the future of teaching to see a lot of activity at both scales. Those of us teaching in the middle face bleaker prospects.

•   Mike Caulfield, B. F. Skinner on Teaching Machines (1954)

Caulfield links to this video of B.F. Skinner describing a study on the optimal conditions for self-instruction using "teaching machines" in 1954. Caulfield points out that, while these days people like to look down on Skinner's behaviorist view of learning, he understood education better than many of his critics, and that others are unwittingly re-inventing many of his ideas.

For example:

[Skinner] understands that it is not the *machine* that teaches, but the person that writes the teaching program. And he is better informed than almost the entire current educational press pool in that he states clearly that a "teaching machine" is really just a new kind of textbook. It's what a textbook looks like in an age where we write programs instead of paragraphs.

That's a great crystallizing line by Caulfield: A "teaching machine" is what a textbook looks like in an age where we write programs instead of paragraphs.

Caulfield reminds us that Skinner said these things in 1954 and cautions us to stop asking "Why will this work?" about on-line education. That question presupposes that it will. Instead, he suggests we ask ourselves, "Why will this work this time around?" What has changed since 1954, or even 1994, that makes it possible this time?

This is a rightly skeptical stance. But it is wise to be asking the question, rather than presupposing -- as so many educators these days do -- that this is just another recursion of the "technology revolution" that never quite seems to revolutionize education after all.

•   Clayton Christensen in Why Apple, Tesla, VCs, academia may die

Christensen didn't write this piece, but reporter Cromwell Schubarth quotes him heavily throughout on how disruption may be coming to several companies and industries of interest to his Silicon Valley readership.

First, Christensen reminds young entrepreneurs that disruption usually comes from below, not from above:

If a newcomer thinks it can win by competing at the high end, "the incumbents will always kill you".

If they come in at the bottom of the market and offer something that at first is not as good, the legacy companies won't feel threatened until too late, after the newcomers have gained a foothold in the market.

We see this happening in higher education now. Yet most of my colleagues here on the faculty and in administration are taking the position that leaves legacy institutions most vulnerable to overthrow from below. "Coursera [or whoever] can't possibly do what we do", they say. "Let's keep doing what we do best, only better." That will work, until it doesn't.

Says Christensen:

But now online learning brings to higher education this technological core, and people who are very complacent are in deep trouble. The fact that everybody was trying to move upmarket and make their university better and better and better drove prices of education up to where they are today.

We all want to get better. It's a natural desire. My university understands that its so-called core competency lies in the niche between the research university and the liberal arts college, so we want to optimize in that space. As we seek to improve, we aspire to be, in our own way, like the best schools in their niches. As Christensen pointed out in The Innovator's Dilemma, this is precisely the trend that kills an institution when it meets a disruptive technology.

Later in the article, Christensen talks about how many schools are getting involved in online learning, sometimes investing significant resources, but almost always in service of the existing business model. Yet other business models are being born, models that newcomers are willing -- and sometimes forced -- to adopt.

One or more of these new models may be capable of toppling even the most successful institutions. Christensen describes one such candidate, a just-in-time education model in which students learn something, go off to use it, and then come back only when they need to learn what they need to know in order to take their next steps.

This sort of "learn and use", on-the-job learning, whether online or in person, is a very different way of doing things from school as we know it. It is not especially compatible with the way most universities are organized to educate people. It is, however, plenty compatible with on-line delivery and thus offers newcomers to the market the pebble they may use to bring down the university.

The massively open on-line course is one form the newcomers are taking. The smaller, more intimate offering enabled by the likes of SkillShare is another. It may well be impossible for legacy institutions caught in the middle to fend off challenges from both directions.

As Caulfield suggests, though, we should be skeptical. We have seen claims about technology upending schools before. But we should adopt the healthy skepticism of the scientist, not the reactionary skepticism of the complacent or the scared. The technological playing field has changed. What didn't work in 1954 or 1974 or 1994 may well work this time.

Will it? Christensen thinks so:

Fifteen years from now more than half of the universities will be in bankruptcy, including the state schools. In the end, I am excited to see that happen.

I fear that universities like mine are at the greatest risk of disruption, should the wave that Christensen predicts come. I don't know many university faculty who are excited to see it happen. I just hope they aren't too surprised if it does.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

February 12, 2013 2:53 PM

Student Wisdom on Monad Tutorials

After class today, a few of us were discussing the market for functional programmers. Talk turned to Clojure and Scala. A student who claims to understand monads said:

To understand monad tutorials, you really have to understand monads first.

Priceless. The topic of today's class was mutual recursion. I think we are missing a base case here.

I don't know whether this is a problem with monads, a problem with the writers of monad tutorials, or a problem with the rest of us. If it is true, then it seems a lot of people are unclear on the purpose of a tutorial.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

February 10, 2013 11:18 AM

Meaningful Interaction and On-Line Education

After tweeting a jewel from Clay Shirky's latest article, I read the counterpoint article by Maria Bustillos, Venture Capital's Massive, Terrible Idea For The Future Of College, and an earlier piece by Darryl Tippens in the Chronicle of Higher Education, Technology Has Its Place: Behind a Caring Teacher. I share these writers' love of education and am sympathetic to many of their concerns about the future of the traditional university. In the end, though, I think that economic and technological forces will unbundle university education whether we like it or not, and our best response is not simply to lament the loss. It is to figure out how best to preserve the values of education in the future, not to mention its value to future citizens.

While reading Bustillos and Tippens, I thought about how the lab sciences (such as physics, biology, and chemistry) are in many ways at a bigger disadvantage in an on-line world than disciplines that traffic primarily in ideas, such as the humanities. Lab exercises are essential to learning science, but they require equipment, consumable supplies, and dedicated space that are not typically available to us in our homes. Some experiments are dangerous enough that we don't want students trying them without the supervision of trained personnel.

Humanities courses have it relatively easier. Face-to-face conversation is, of course, a huge part of the educational experience there. But the sharing of ideas, and the negotiation of shared understanding, can be conducted in a number of ways that are amenable to on-line communication. Reading and writing have long played a central role in the growth of knowledge, alongside teaching in a classroom and conversation with like-minded individuals in close personal settings.

I soon realized something. Bustillos and Tippens, like so many others, seem to assume that collaboration and meaningful interaction cannot happen on-line.

Bustillos puts it harshly, and inaccurately:

MOOCs are an essentially authoritarian structure; a one-way process in which the student is a passive recipient required to do nothing except "learn."

Tippens expresses the sentiment in a more uplifting way, quoting Andrew Delbanco:

Learning is a collaborative rather than a solitary process.

On-line education does not have to be passive any more than a classroom has to be passive. Nor must it be solitary; being alone a lot of the time does not always mean working alone.

A few faculty in my department have begun to create on-line versions of their courses. In these initial efforts, interaction among students and teacher has been paramount. Chat rooms, e-mail, wikis, and discussion boards all provide avenues for students to interact with the instructor and among themselves. We are still working at a small scale and primarily with students on-campus, so we have had the safety valve of face-to-face office hours available. Yet students often prefer to interact off hours, after work or over the weekend, and so the on-line channels prove to be most popular.

Those in the software world have seen how collaboration can flourish on-line. A lot of the code that makes our world go is developed and maintained by large, distributed communities whose only opportunities to collaborate are on-line. These developers may be solitary in the sense that they work in a different room from their compatriots, but they are not solitary in the sense of being lonesome, desolate, or secluded. They interact as a matter of course. Dave Humphrey has been using this model and its supporting technology as part of his teaching at Seneca College for a few years now. It's exciting.

My own experience with on-line interaction goes back to the 1980s, when I went to graduate school and discovered Usenet. Over the next few years, I made many good friends, some of whom I see more often in person than I see most friends from my high school and college years. Some, I have never met in person, yet I consider them good friends. Usenet enabled me to interact with people on matters of purely personal interest, such as basketball and chess, but also on matters of academic value.

In particular, I was able to discuss AI, my area of study, with researchers from around the world. I learned a lot from them, and those forums gave me a chance to sharpen my ability to express ideas. The time scale was between the immediate conversation of the classroom and the glacial exchange of conference and journal papers. These on-line conversations gave me time to reflect before responding, while still receiving feedback in a timely fashion. They were invaluable.

Young people today grow up in a world of on-line interaction. Most of their interactions on-line are not deep, to be sure, but some are. And more could be, if someone could show them the way. That's the educator's job. The key is that these youth know that on-line technology allows them to be active, to create, and to learn. Telling them that on-line learning must be passive or solitary will fall on deaf ears.

Over twenty years of teaching university courses has taught me how important face-to-face interaction with students can be. How well experiments in on-line education address the need for interpersonal communication will go a long way to determining whether they succeed as education. But assuming that collaboration and meaningful interaction cannot happen on-line is surely a losing proposition.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 03, 2013 11:10 AM

Faulkner Teaches How to Study

novelist William Faulkner, dressed for work

From this Paris Review interview with novelist William Faulkner:


Some people say they can't understand your writing, even after they read it two or three times. What approach would you suggest for them?


Read it four times.

The first three times through the book are sunk cost. At this moment, you don't understand. What should you do? Read it again.

I'm not suggesting you keep doing the same failing things over and over. (You know what Einstein said about insanity.) If you read the full interview, you'll see that Faulkner isn't suggesting that, either. We're suggesting you get back to work.

Studying computer science is different from reading literature. We can approach our study perhaps more analytically than the novel reader. And we can write code. As an instructor, I try to have a stable of ideas that students can try when they are having trouble grasping a new concept or understanding a reading, such as:

  • Assemble a list of specific questions to ask your prof.
  • Talk to a buddy who seems to understand what you don't.
  • Type in the code from the paper character by character, thinking about it as you do.
  • Draw a picture.
  • Try to explain the parts you do understand to another student.
  • Focus on one paragraph, and work backward from there to the ideas it presumes you already know.
  • Write your own program.

One thing that doesn't work very well is being passive. Often, students come to my office and say, "I don't get it." They don't bring much to the session. But the best learning is not passive; it's active. Do something. Something new, or just more.

Faulkner is quite matter-of-fact about creating and reading literature. If it isn't right, work to make it better. Technique? Method? Sure, whatever you need. Just do the work.

This may seem like silly advice. Aren't we all working hard enough already? Not all of us, and not all the time. I sometimes find that when I'm struggling most, I've stopped working hard. I get used to understanding things quickly, and then suddenly I don't. Time to read it again.

I empathize with many of my students. College is a shock to them. Things came easily in high school, and suddenly they don't. These students mean well but seem genuinely confused about what they should do next. "Why don't I understand this already?"

Sometimes our impatience is born from such experience. But as Bill Evans reminds us, some problems are too big to conquer immediately. He suggests that we accept this up front and enjoy the whole trip. That's good advice.

Faulkner shrugs his shoulders and tells us to get back to work.


PHOTO. William Faulkner, dressed for work. Source: The Centered Librarian.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

January 24, 2013 4:32 PM

Real-Life Examples of Quotation in Scheme

The new semester is fully underway, and I'm already enjoying Programming Languages. My Tuesday session this week felt like a hodgepodge of topics, including Scheme definitions and conditionals, and didn't inspire my students much. Today's session on pairs and lists seemed to go much more smoothly, at least from my side of the classroom.

One difference in the first two weeks this time around has been the number of questions about the quote character in Scheme, which is shorthand for the special form quote.

The purpose of the quote is to tell the interpreter to take its argument literally. When the argument is a list, say, '(* 2 3), quotation prevents the interpreter from evaluating the list as a Scheme procedure call. When the argument is a symbol, say, 'a, the quote lets the interpreter know not to treat the a as an identifier, looking up the value bound to that name in the current environment. Instead, it is treated as the literal symbol a. Most of our students have not yet worked in languages where symbols are first-class data values, so this idea takes some getting used to.
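Before turning to real life, a rough analogue from a more familiar language may help. The sketch below uses Python names and strings; this is an analogy only (Python strings are not Scheme symbols), but it captures the key distinction between evaluating a name and taking it literally:

```python
# A loose analogue of Scheme's quote, using Python names and strings.
# In Scheme, the expression a evaluates to the value bound to a,
# while 'a denotes the literal symbol a itself.
a = 42

print(a)      # the name is evaluated in the current environment -> 42
print("a")    # the "quoted" form is the literal text itself -> a
```

The quoted form never consults the environment, which is exactly the behavior students need to internalize about 'a in Scheme.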

In the course of talking about quotation with them, I decided to relate this idea to an example of quotation from real life. The first thing that came to mind at that instant was the distinction between these two sentences:

Lincoln was disappointing.
"Lincoln" was disappointing.

In the former, Lincoln is a name to be evaluated. Depending on the context, it could refer to the 16th president of the United States, the capital of Nebraska, or some other object in the world. (The sentence doesn't have to be true, of course!)

In the latter, quoting Lincoln makes it a title. I intended for this "literal" reference to the word Lincoln to evoke the current feature film of that name.

Almost immediately I began to second-guess my example. The quoted Lincoln is still a name for something -- a film, or a book, or some such -- and so still needs to be "dereferenced" to retrieve the object signified. It's just that we treat titles differently than other names.

So it's close to what I wanted to convey, but it could mislead students in a dangerous way.

The canonical real-world example of quotation is to quote a word so that we treat the utterance as the word itself. Consider:

Creativity is overused.
"Creativity" is overused.

In the former, creativity is a name to be evaluated. It signifies an abstract concept, a bundle of ideas revolving around creation, originality, art, and ingenuity. We might say creativity is overused in a context where people should be following the rules but are instead blazing their own trails.

In the latter, the quoted creativity signifies the word itself, taken literally. We might say "creativity" is overused to suggest an author improve a piece of writing by choosing a near-synonym such as "cleverness" or "originality", or by rephrasing a sentence so that the abstract concept is recast as the verb in an active statement.

This example stays more faithful to the use of quote in Scheme, where an expression is taken literally, with no evaluation of any kind needed.

I like giving examples of how programming concepts exist in other parts of our lives and world. Even when they are not perfect matches, they can sometimes help a student's mind click on the idea as it works in a programming language or style.

I like it better when I use better examples!

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 18, 2013 2:42 PM

Alive with Infinite Possibilities

the PARC 5-key Chord Keyboard, courtesy the Buxton collection

Engelbart's Violin tells the interesting story of Douglas Engelbart's chorded keyboard, or "chorder", an input device intended as a supplement to the traditional keyboard. Engelbart was part of a generation that saw computing as a universe of unlimited possibilities, and more than many others he showed us glimpses of what it could be.

I grew up in an age when an unadorned BASIC interpreter was standard equipment on any computer, and with so little software available to us, we all wrote programs to make the machine do our bidding. In a narrower way, we felt the sense of unlimited possibilities that drove Engelbart, Sutherland, and the generations that came before us. If only we all had vision as deep.

Unfortunately, not many teenagers get to have that kind of experience anymore. BASIC became VB.Net, a corporate language for a corporate world. The good news is that languages like Python and even JavaScript make programming accessible to more people again, but the ethos of anyone learning to program on his or her own at home seems to have died off.

Engelbart's Violin uses strong language to judge the current state of computing, with some of its strongest lamenting the "cruel discrepancy" between the experience of a creative child learning to program and the world of professional programming:

When you are a teenager, alone with a (programmable) computer, the universe is alive with infinite possibilities. You are a god. Master of all you survey. Then you go to school, major in "Computer Science", graduate -- and off to the salt mines with you, where you will stitch silk purses out of sow's ears in some braindead language, building on the braindead systems created by your predecessors, for the rest of your working life. There will be little room for serious, deep creativity. You will be constrained by the will of your master (whether the proverbial "pointy-haired boss", or lemming-hordes of fickle startup customers) and by the limitations of the many poorly-designed systems you will use once you no longer have an unconstrained choice of task and medium.

Ouch. We who teach CS at the university find ourselves trapped between the needs of a world that employs most of our graduates and the beauty that computing offers. Alas, what Alan Kay said about Engelbart applies more broadly: "Engelbart, for better or for worse, was trying to make a violin.... [M]ost people don't want to learn the violin." I'm heartened to see so many people, including my own colleagues, working so hard to bring the ethos and joy of programming back to children, using Scratch, media computation, and web programming.

This week, I began a journey with thirty or so undergraduate CS students, who over the next four months will learn Scheme and -- I hope -- get a glimpse of the infinite possibilities that extend beyond their first jobs, or even their last. At the very least, I hope I don't shut any more doors on them.


PHOTO. The PARC 5-key Chord Keyboard, from the Buxton collection at Microsoft Research.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 04, 2013 5:54 PM

Impatience, A Barrier At All Scales

One of the barriers to learning that Dan Heisman sees in chess players is finding the patience to play slower. When playing any game, a person has to play slow enough to discern the consequences of moves. The chess world is complicated by the fact that games in tournament settings are timed, with a limit on the time a player can take for a certain number of moves. Over-the-board chess is played at a variety of time limits, historically ranging from five minutes for the entire game (lightning chess) to forty moves in two and a half hours (world championship matches). Different time controls lead to different kinds of game.

Improvement at "serious" chess -- slow chess -- requires playing slower, at longer time controls. You have to practice thinking about moves and plans at a deeper level. Just as we have to train our bodies to run long distances, we have to train our chess brains to work at longer time controls.

This requires both stamina and patience. Sometimes, our brains are capable of thinking for long periods about a position, but our psyche wants to push, move faster. The barrier here is impatience "in the small", at the scale of an individual game.

We see the same thing in novice programmers, who think they should be able to write a complicated program as quickly as they read a web comic or watch a YouTube video. Read the problem description; write the code. One of the important parts of most introductory programming courses is helping students learn patience at the level of writing a single program.

Another important kind of patience plays a role in the large, at learning scale. Some people want to get good fast: learn some syntax, write a few programs, and -- bam! -- be an expert. Peter Norvig has written the canonical treatment of long-term patience in learning to program.

jazz pianist Bill Evans, with Miles Davis

Of course, some talented people do get good fast, or at least they seem to become good faster than we do. That can be frustrating, especially when we are struggling. But the fact is, most of us have to take time to get good at anything.

Even the most accomplished artists know that. I'm reminded of this comment from Bill Evans, one of the greatest jazz pianists of the second half of the 20th century, in The Universal Mind of Bill Evans:

"Most people just don't realize the immensity of the problem," Evans says, "and either because they can't conquer immediately they think they haven't got the ability, or they're so impatient to conquer it that they never do see it through. But if you do understand the problem, then I think you can enjoy your whole trip through."

Learning to write software well is an immense task. The most successful programmers recognize this early and enjoy the trip. This kind of patience, over the long term, makes it a lot easier to take on the barriers that inevitably appear along the way.


PHOTO. Jazz pianist Bill Evans with Miles Davis, courtesy Photos of musicians at Tumblr.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

January 03, 2013 1:36 PM

Breaking Down Barriers to Learning

Or: What a CS Student -- or Teacher -- Can Learn from a Chess Coach

All teachers, whatever their discipline, encounter many of the same challenges. Chess coach Dan Heisman once wrote an instructive article called Breaking Down Barriers in which he discussed a split that he saw among his students:

  • players who are frozen by barriers to improvement, or even actively erect barriers to improvement
  • players who work to eliminate barriers, whatever that might require

I once wrote a short entry, Some People Get Stuff Done, about a similar phenomenon I have noticed over many years of teaching: some students find a way to get things done: to write the program, to solve the problem, to pass the course. These are usually the same students who find -- and make -- ways to get better at their craft. When they encounter barriers to their learning, they find a way to go over, around, or through them.

These are also the students who tend to succeed after they graduate and leave the shelter of a school system that often does more to help them succeed than they realize. The business world isn't always so forgiving.

The starting point for students who break down barriers to learning is a particular mindset: an attitude that they can learn. This mindset is what enables these students to persevere when they encounter a roadblock.

A positive attitude isn't sufficient on its own. Believing that you can defeat a better player over the chessboard doesn't mean you will. You still have to do it. But, as Heisman says, a defeatist attitude can be a self-fulfilling prophecy during a game.

The same is true for learning computer science, or any other academic discipline. Positive attitude isn't the endpoint. It is the starting line. It is what allows you to continue working when things get difficult. The hard work is what gets you over, around, or through the barriers.

The good news is that hard work is more important than innate ability. Dick Gabriel is fond of saying that talent doesn't determine how good you get; it determines how fast you get good. How hard you work determines how good you get.

So, the right mindset is useful, and working hard is essential. But even these are not enough. Doing the right kind of work is essential. Students who don't realize this can demotivate themselves quickly. They throw themselves into their work, spend a lot of time and energy doing counterproductive things... and don't get better. They figure they "don't have what it takes" and move on to something else.

As a teacher, few things make me sadder than to see a student who wants to learn and is willing to work burn him- or herself out by spinning wheels with unhelpful, unproductive, uninformed study. When these students are in my class, I wonder, "Why didn't they come get help?" Or, if they did, "Why didn't they follow my advice?"

Some practices support learning better than others. There is a skill to learning, a discipline and a way of working that succeeds better than others. Often, the students who do best in my courses are not the most gifted students but rather the ones who have learned how to attend class, how to read, and how to study. Many of these students figured out these skills early on in their school careers, or got lucky. It doesn't matter which. In any case, these skills are multipliers that enable these students to accelerate in every course they take. Knowledge and skill accumulate, and pretty soon these students look like something apart from their struggling colleagues.

But they aren't. They are doing things any student can do. "But I'll never catch up to those students, no matter how hard I work." Maybe, and maybe not. That doesn't matter, either. This is about your learning. You have to start from where you are, and go forward from there.

A good teacher or coach really can help students, by helping them get over incidental barriers and helping them learn how to get over the essential barriers in a productive way. One of the most important things a teacher can do is to provide feedback loops of both short and long duration, and then help students identify mistakes and minimize their repetition. This is obviously invaluable in playing chess, where pattern recognition and speed are fundamental capabilities. But pattern recognition and speed are fundamental capabilities in any kind of learning.

Some people these days like to downplay the importance of a human teacher or coach in learning to develop software. "We have the compiler, which gives us all the feedback we need any time of day." Yes, indeed. We have access to tools that our predecessors could only have dreamed of, and these tools empower us as learners. But are they always enough?

Chess players have something akin to our compiler: the chess computer. When I got my first chess computer in high school, I was ecstatic. I could play any time, anywhere, against a player that could be set to any level from beatable novice to untouchable master. I played a lot of games. A lot. And I learned a lot, too, as I did any time I played a lot of games and thought about them. But there were times when I didn't understand why something worked, or didn't. In those cases, I consulted the writings of a good teacher, or asked a human coach for some help. They helped me get over the obstacle faster than I could on my own.

That's another role that our teachers can play: they can help us to understand why.

Some Barriers

In his article, Heisman talks about some of the common barriers that chess students face and how to overcome them. Some seem specific to the game or competitive activities, but remarkably all of them apply equally well to students of computing or software development.

The major difference is that the barriers outlined are ones encountered by people who do most of their learning on their own, not in the classroom. Chess players don't usually go to school full time. If they have a teacher at all, it's more like piano lessons than a university program. The student meets with the teacher once a week (or less) and then studies, practices, and plays a lot in between sessions.

A lot of people these days believe that learning to create software is or should be more like this model than the university model, too. Movements like Software Apprenticeship and Codecademy are all about enabling individuals to take control of their own learning. But even in a university setting, learning works best when students study, practice, and program a lot in between class sessions or meetings with the professor.

Heisman discusses ten barriers. See if you can map them onto learning CS, or how to write software:

  1. Finding Good Practice and Feedback at a Club
  2. Worrying About Your Rating
  3. Competition Too Good at Tournaments
  4. Some Players Unfriendly on the Internet
  5. Players Only Want to Play Fast on the Internet
  6. Homework is Not Fun
  7. Finding Patience to Play Slower
  8. Don't Have the Time to Study or Play Enough
  9. Not That Talented
  10. Find Competitive Chess Daunting

It's not that hard, is it? ("Some players unfriendly on the internet"? We invented that one!)

A few of these stand out in my experience as a teacher and student. Consider #2. Chess players have ratings that reflect the level of their play. When you perform better than expected at a tournament, your rating goes up. When you perform worse than expected, your rating goes down.

In the university, this corresponds to grades and grade-point average. Some students are paralyzed by the idea that their GPA might go down. So they take easier courses, or seek out professors who don't grade as hard as others. The negative outcome of this approach is that the courses don't challenge the students enough, and it is the challenge that pushes them to learn. Protecting your GPA in this way can cause you to learn less.

Ironically, this barrier most often affects students who have historically done well in school. They are the ones used to having high GPAs, and too often they have invested way too much of their self-worth in their grades. Students who expect to do well and to have "ratings" that confirm their status face an extra psychological barrier when the material becomes more challenging. This is one of those situations in which another person -- even a teacher! -- can be of enormous value by providing emotional support.

"Homework isn't fun" is a universal barrier to learning. There are many strategies for trying to get through the drudgery of work, and a good teacher tries them all. But, in the end, it is work. You just have to get over it. The good news is that there is a self-reinforcing cycle between doing the work and enjoying the work. Hugh MacLeod captures it well:

I love what I do, because I'm good at what I do, because...

Hard work is also scary. We might fail. But if we keep working, eventually we can succeed. What can keep us from doing the work?

Time is a limiting factor. Students often tell me, "I don't have the time..." to study, or write more code, or come in to get help. This can be a real challenge for students who have to work thirty hours a week to pay the bills, or who have families to care for.

I ask my students to be honest with themselves. Do you not have the time, or do you not make the time? Sometimes it's a bad habit of frittering away time on video games or socializing. Sometimes it's a habit of using time inefficiently or ineffectively. These are habits you can change -- if you want to.

A student came to me just after we took our midterm exam last semester. He had done poorly and was worried about his grade. He asked if there was any chance left that he could succeed in the course. I said "yes", contingent on the answer to this question: What are you willing to change in your life to make success possible? Clearly, what he was doing wasn't enough. Either he was not devoting enough time to the course, or he was putting in enough time but not doing the right things. Was he willing to change what needed to be changed, if only for the eight weeks left in the course? He was honest with himself and dropped the course. A couple of other students made changes to how they worked on the course and recovered quite nicely.

Ultimately, if you want to succeed at something, you make time to do the work.


Twenty-five years ago, Rolling Stone panned the re-release of rocker John Mellencamp's debut album, Chestnut Street Incident. In the ten years since its original release, Mellencamp had become a star. His work in the intervening years made the quality of his debut look all the worse. It was cocky and klutzy, less than the sum of his musical influences. "Back then," the reviewer wrote, Mellencamp's "ambition definitely exceeded his grasp". The album wasn't very good, and the review said so, harshly.

Mellencamp wouldn't have disagreed. In interviews, he has often talked about how bad a songwriter he was when he first started in the business. On top of that, he faced obstacles similar to those facing many other young artists, including promoters and handlers more interested in music industry fashion and short-term profit than art. They even made him take a stage name, "Johnny Cougar", in the interest of marketability.

But he didn't settle for his initial condition. He continued to learn, honing his songwriting skills and learning to manage his own affairs. Along the way, he learned the sort of humility that many artists confident in their art have. Eventually, his grasp matched his ambition. He succeeded commercially and achieved a small measure of critical acceptance.

I must say that I have always liked Chestnut Street Incident, in part for its raw, desperate ambition. Here was a kid from small-town Indiana who wanted to grow beyond his roots and become something more. The lyrics are not profound, and the emotion is not complicated. I've always admired Mellencamp for his attitude about work and craft, and his willingness to try and to fail.

To learn is to break down barriers. Whether you are learning to play chess or to write software, you will encounter obstacles. Breaking them down is a matter of mindset, effort, and specific skills -- all of which are within reach.

Even if you aren't a chess player, you may well enjoy reading Heisman's article. CS students and teachers alike can benefit from its lessons.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 28, 2012 10:03 AM

Translating Code Gibberish to Human-Speak

Following an old link to Ridiculous Fish's Unix shell fish, I recently stumbled upon the delightful cdecl, a service that translates C declarations, however inscrutable, into plain English (and vice versa). As this introductory post says,

Every C declaration will be as an open book to you! Your coworkers' scruffy beards and suspenders will be nigh useless!

The site even provides permalinks so that you can share translations of your thorniest C casts with friends and family.

These pages are more than three years old, so I'm surely telling you something you already know. How did I just find this?

I don't program in C much these days, so cdecl itself is of use to me only as humorous diversion. But it occurs to me that simple tools like this could be useful in a pedagogical setting. Next semester, my students will be learning Scheme and functional programming style. The language doesn't have much syntax, but it does have all those parentheses. Whatever I say or do, they disorient many of my students for a while. Some of them will look at even simple code such as

     (let ((x (square 4))
           (y 7))
       (+ x y))

... and feel lost. We spend time in class learning how to read code, and talk about the semantics of such expressions, which helps. But in a pinch, wouldn't it be nice for a student to hit a button and have that code translated into something more immediately comprehensible? Perhaps:

Let x be the square of 4 and y be 7 in the sum of x and y.

This might be a nice learning tool for students as they struggle with a language that seems to them -- at least early on -- to be gibberish on a par with char (*(*(* const x[3])())[5])(int).

Some Scheme masters might well say, "But the syntax and semantics of a let are straightforward. You don't really need this tool." At one level, this is true. Unfortunately, it ignores the cognitive and psychological challenges that most people face when they learn something that is sufficiently unfamiliar to them.

Actually, I think we can use the straightforwardness of the translation as a vehicle to help students learn more than just how a let expression works. I have a deeper motive.

Learning Scheme and functional programming are only a part of the course. Its main purpose is to help students understand programming languages more generally, and how they are processed by interpreters and compilers.

When we look at the let expression above, we can see that translating it into the English expression is not only straightforward, it is 100% mechanical. If it's a mechanical process, then we can write a program to do it for us! Following a BNF description of the expression's syntax, we can write an interpreter that exposes the semantics of the expression.
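Here is a minimal sketch of such a translator, written in Python for the sake of a quick illustration. The nested lists stand in for an already-parsed expression, and the English phrasings chosen for +, *, and square are my own:

```python
def to_english(expr):
    """Translate a small subset of parsed Scheme -- numbers, symbols,
    (let ...) forms, and applications of a few primitives -- into English."""
    if isinstance(expr, (int, float)):
        return str(expr)
    if isinstance(expr, str):          # a symbol
        return expr
    op = expr[0]
    if op == "let":
        # Render each binding as "name be value", joined with "and".
        bindings = " and ".join(
            f"{name} be {to_english(value)}" for name, value in expr[1])
        return f"Let {bindings} in {to_english(expr[2])}."
    # A few primitive operators and their English phrases.
    names = {"+": "the sum of", "*": "the product of",
             "square": "the square of"}
    args = " and ".join(to_english(arg) for arg in expr[1:])
    return f"{names.get(op, op)} {args}"

let_expr = ["let", [["x", ["square", 4]], ["y", 7]], ["+", "x", "y"]]
print(to_english(let_expr))
# -> Let x be the square of 4 and y be 7 in the sum of x and y.
```

The translator follows the grammar of the expression case by case, which is just what an interpreter does -- it only produces English sentences instead of values.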

In many ways, that is the essence of this course.

At this point, this is only a brainstorm, perhaps fueled by holiday cooking and several days away from the office. I don't know yet how much I will do with this in class next term, but there is some promise here.

Of course, we can imagine using a cdecl-like tool to help beginners learn other languages, too. Perhaps there are elements of writing OO code in Java that confuse students enough to make a simple translator useful. Surely public static void main( String[] args) deserves some special treatment! Ruby is complex enough that it might require dozens of little translators to do it justice. Unfortunately, it might take Matz's inside knowledge to write them.
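For instance, a toy "jdecl" for that Java header might look something like this sketch. The name and the English phrasings are entirely my own invention:

```python
# A toy "jdecl" in the spirit of cdecl, glossing the pieces of Java's
# main-method header in English. The phrasings are illustrative only.

EXPLANATIONS = [
    ("public", "callable from anywhere"),
    ("static", "belonging to the class rather than to any instance"),
    ("void", "returning nothing"),
    ("main", "the entry point the JVM looks for"),
    ("String[] args", "taking the command-line arguments as an array of strings"),
]

def explain(header):
    """Return an English gloss of the parts found in the header."""
    return "; ".join(gloss for part, gloss in EXPLANATIONS if part in header)

print(explain("public static void main(String[] args)"))
```

A real tool would parse the declaration rather than match substrings, but even this much might demystify the incantation for a beginner.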

(The idea of translating inscrutable text into language understandable by ordinary humans is not limited to computer code, of course. There is a popular movement to write laws and other legal code in Plain English. This movement is occasionally championed by legislators -- especially in election years. The U.S. Securities and Exchange Commission has its own Plain English Initiative and Plain English Handbook. At seventy-seven pages, the SEC handbook is roughly the same size as the R6RS description of Scheme.)

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

December 12, 2012 4:18 PM

Be a Driver, Not a Passenger

Some people say that programming isn't for everyone, just as knowing how to tinker under the hood of one's car isn't for everyone. Some people design and build cars; other people fix them; and the rest of us use them as high-level tools.

Douglas Rushkoff explains why this analogy is wrong:

Programming a computer is not like being the mechanic of an automobile. We're not looking at the difference between a mechanic and a driver, but between a driver and a passenger. If you don't know how to drive the car, you are forever dependent on your driver to take you where you want to go. You're even dependent on that driver to tell you when a place exists.

This is CS Education week, "a highly distributed celebration of the impact of computing and the need for computer science education". As a part of the festivities, Rushkoff was scheduled to address members of Congress and their staffers today about "the value of digital literacy". The passage quoted above is one of ten points he planned to make in his address.

As good as the other nine points are -- and several are very good -- I think the distinction between driver and passenger is the key, the essential idea for folks to understand about computing. If you can't program, you are not a driver; you are a passenger on someone else's trip. They get to decide where you go. You may want to invent a new place entirely, but you don't have the tools of invention. Worse yet, you may not even have the tools you need to imagine the new place. The world is as it is presented to you.

Don't just go along for the ride. Drive.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

December 10, 2012 2:54 PM

Brief Flashes of Understanding, Fully Awake

The programmers and teachers among you surely know this feeling well:

As I drift back to sleep, I can't help thinking that it's a wonderful thing to be right about the world. To weigh the evidence, always incomplete, and correctly intuit the whole, to see the world in a grain of sand, to recognize its beauty, its simplicity, its truth. It's as close as we get to God in this life, and we reside in the glow of such brief flashes of understanding, fully awake, sometimes, for two or three seconds, at peace with our existence. And then we go back to sleep.

Or tackle the next requirement.

(The passage is from Richard Russo's Straight Man, an enjoyable send-up of modern man in an academic life.)

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 30, 2012 3:49 PM

Passing Out the Final Exam on Day One

I recently ran across an old blog posting called Students learn what they need, not what is assigned, in which Ted Dunning described a different sort of "flipped course" than is usually meant: He gave his students the final exam on Day 1, passed out the raw materials for the course, and told them to "get to work". They decided what they needed to learn, and when, and asked for instruction and guidance on their own schedule.

Dunning was happy with the results and concluded that...

... these students could learn vastly more than was expected of them if they just wanted to.

When students like what they are doing, they can surprise most everyone with what they will do to learn. Doing something cool like building a robot (as Dunning's students did) can be all the motivation some students need.

I'm sometimes surprised by just what catches my students' fancy. A few weeks ago, I asked my sophomore- and junior-level OOP class to build the infrastructure for a Twitter-like app. It engaged them like only graphical apps usually do. They've really dug into the specs to figure out what they mean. Many of them don't use Twitter, which has been good, because it frees them of too many preconceived limitations on where they can take their program.

They are asking good questions, too, about design: Should this object talk to that one? The way I divided up the task led to code that feels fragile; is there a better way? It's so nice not to still be answering Java questions. I suspect that some are still encountering problems at the language level, but they are solving them on their own and spending more time thinking about the program at a higher level.

I made this a multi-part project. They submitted Iteration 1 last weekend, will submit Iteration 2 tomorrow, and will work on Iteration 3 next week. That's a crucial element, I think, in getting students to begin taking their designs more seriously. It matters how easy it is to change the code, because they have to change it now -- and tomorrow!

The point of Dunning's blog is that students have to discover the need to know something before they are really interested in learning it. This is especially true if the learning process is difficult or tedious. You can apply this idea to a lot of software development, and even more broadly to CS.

I'm not sure when I'll try the give-the-final-exam-first strategy. My compiler course already sort of works that way, since we assign the term project upfront and then go about learning what we need to build the compiler. But I don't make my students request lectures; I still lay the course out in advance and take only occasional detours.

I think I will go at least that far next semester in my programming languages course, too: show the students a language on day one and explain that our goal is to build an interpreter for it by the end of the semester, along with a few variations that explore the range of possibilities that programming languages offer. That may create a different focus in my mind as I go through the semester. I'm curious to see.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 28, 2012 6:34 PM

Converting Lecture Notes into an Active Website

... in which the author seeks pointers to interactive Scheme materials on-line.

Last summer, I fiddled around a bit with Scribble, a program for writing documentation in (and for) Racket. I considered using it to write the lecture notes and website for my fall OOP course, but for a variety of reasons set it aside.

the icon for Slideshow

In the spring I'll be teaching Programming Languages again, and using Racket with my students. This seems like the perfect time to dive in and use Scribble and Slideshow to create all my course materials. This will create a synergy between what I do in class and how I prep, which will be good for me. Using Racket tools will also set a good example for my students.

After seeing The Racket Way, Matthew Flatt's talk at StrangeLoop, I am inspired to do more than simply use Racket tools to create text and slides and web pages. I'd like to re-immerse myself in a world where everything is a program, or nearly so. This would set an even more important example for my students, and perhaps help them to see more clearly that they don't ever have to settle for the programs, the tools, or the languages that people give them. That is the Computer Science way as well as the Racket way.

I've also been inspired recently by the idea of an interactive textbook à la Miller and Ranum. I have a pretty good set of lecture notes for Programming Languages, but the class website should be more than a 21st-century rendition of a 19th-century presentation. I think that using Scribble and Slideshow is a step in the right direction.

So, a request: I am looking for examples of people using the Racket presentation tools to create web pages that have embedded Scheme REPLs, perhaps even a code stepper of the sort Miller and Ranum use for Python. Any pointers you might have are welcome.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 26, 2012 3:24 PM

Quotes of the Day: Constraints on Programming

Obligation as constraint

Edward Yang has discovered the Three Bears pattern. He calls it "extremist programming".

When learning a new principle, try to apply it everywhere. That way, you'll learn more quickly where it does and doesn't work well, even if your initial intuitions about it are wrong.

Actually, you don't learn in spite of your initial intuitions being wrong. You learn because your initial intuitions were wrong. That's when learning happens best.

(I mention Three Bears every so often, such as Bright Lines in Learning and Doing, and whenever I discuss limiting usage of language features or primitive data values.)

Blindness as constraint

In an interview I linked to in my previous entry, Brian Eno and Ha-Joon Chang talk about the illusion of freedom. Whenever you talk about freedom, as in a "free market" or "free jazz",

... what you really mean is "constrained by rules that we've stopped thinking about".

Free jazz isn't entirely free, because you are constrained by what your muscles can do. Free markets aren't entirely free, because there are limits we simply choose not to talk about. Perhaps we once did talk about them and have chosen not to any more. Perhaps we never talked about them and don't even recognize that they are present.

I can't help but think of computer science faculty who claim we shouldn't be teaching OO programming in the first course, or any other "paradigm"; we should just teach basic programming first. They may be right about not teaching OOP first, but not because their approach is paradigm-free. It isn't.

(I mention constraints as a source of freedom every so often, including the ways in which patterns free students to create and the way creativity needs to be developed.)

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 17, 2012 12:23 PM

Why a CS Major Might Minor in Anthropology

Paul Klipp wrote a nice piece recently on emic and etic approaches to explaining team behavior. He explains what emic and etic approaches are and then shows how they apply to the consultant's job. For example:

Let's look at an example closer to home for us software folks. You're an agile coach, arriving in a new environment with a mission from management to "make this team more agile". If you, like so many consultants in most every field, favor an etic approach, you will begin by doing a gap analysis between the behaviors and artifacts that you see and those with which you are most familiar. That's useful, and practically inevitable. The next natural step, however, may be less helpful. That is to judge the gaps between what this team is doing and what you consider to be normal as wrong.... By deciding, as a consultant or coach, to now attempt to prepare an emic description of the team's behaviors, you force yourself to set aside your preconceptions and engage in meaningful conversations with the team in order to understand how they see themselves. Now you have two tools in your kit, where you might before have had one, and more tools prepares you for more situations.

When I speak to HS students and their parents, and when I advise freshmen, I suggest that they consider picking up a minor or a second major. I tell them that it almost doesn't matter which other discipline they choose. College is a good time to broaden oneself, to enjoy learning for its own sake. Some minors and second majors may seem more directly relevant to a CS grad's career interests, but you never know what domain or company you will end up working in. You never know when having studied a seemingly unrelated discipline will turn out to be useful.

Many students are surprised when I recommend social sciences such as psychology, sociology, and anthropology as great partners for CS. Their parents are, too. Understanding people, both individually and in groups, is important in any profession, but it is perhaps more important for CS grads than for many others. We build software -- for people. We teach new languages and techniques -- to people. We contract out our services to organizations -- of people. We introduce new practices and methodologies to organizations -- of people. Ethnography may be more important to a software consultant's success than any set of business classes.

I had my first experience with this when I was a graduate student working in the area of knowledge-based systems. We built systems that aimed to capture the knowledge of human experts, often teams of experts. We found that they relied a lot on tacit knowledge, both in their individual expertise and in the fabric of their teams. It wasn't until I read some papers from John McDermott's research group at Carnegie Mellon that I realized we were all engaged in ethnographic studies. It would have been so useful to have had some background in anthropology!

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 15, 2012 4:04 PM

Teaching Students to Read and Study in a New Way

Mark Guzdial's How students use an electronic book reports on the research paper "Performance and Use Evaluation of an Electronic Book for Introductory Python Programming" [ pdf ]. In this paper, Alvarado et al. evaluate how students used the interactive textbook How to Think Like a Computer Scientist by Ranum and Miller in an intro CS course. The textbook integrates traditional text with embedded video, "active" examples using an embedded Python interpreter, and empirical examples using a code stepper à la a debugger.

The researchers were surprised to find how little some students used the book's interactive features:

One possible explanation for the less-than-anticipated use of the unique features may be student study skills. The survey results tend to suggest that students "study" by "reading". Few students mention coding or tracing programs as a way of "studying" computer science.

I am not using an interactive textbook in my course this semester, but I have encountered the implicit connection in many students' minds between studying and reading. It caught me off-guard, too.

After lengthy searching and some thought, I decided to teach my sophomore-level OOP course without a required text. I gave students links to two on-line books they could use as Python references, but neither covers the programming principles and techniques that are at the heart of the course. In lieu of a traditional text, I have been giving my students notes for each session, written up carefully in a style that resembles a textbook, and source code -- lots and lots of source code.

Realizing that this would be an unusual way for students to study for a CS class, at least compared to their first-year courses, I have been pretty consistent in encouraging them to work this way. Daily I suggest that they unpack the code, read it, compile it, and tinker with it. The session notes often include little exercises they can do to test or extend their understanding of a topic we have covered in class. In later sessions, I often refer back to an example or use it as the basis for something new.

I figured that, without a textbook to bog them down, they would use my session notes as a map and spend most of their time in the code spelunking, learning to read and write code, and seeing the ideas we encounter in class alive in the code.

a snapshot of Pousse cells in two dimensions

Like the results reported in the Alvarado paper, my experiences have been mixed, and in many ways not what I expected. Some students read very little, and many of those who do read the lecture notes spend relatively little time playing with the code. They will spend plenty of time on our homework assignments, but little or no time on code for the purposes of studying. My data is anecdotal, based on conversations with the subset of students who visit office hours and e-mail exchanges with students who ask questions late at night. But performance on the midterm exam and some of the programming assignments is consistent with my inference.

OO programs are the literature of this course. Textbooks are like commentaries and (really long) Cliff Notes. If indeed the goal is to get students to read and write code, how should we proceed? I have been imagining an even more extreme approach:

  • no textbook, only a language reference
  • no detailed lecture notes, only cursory summaries of what we did in class
  • code as a reading assignment before each session
  • every day in class, students do tasks related to the assigned reading -- engaging, fun tasks, but tasks they can't or wouldn't want to do without having studied the assigned code

A decade or so ago, I taught a course that mixed topics in user interfaces and professional ethics using a similar approach. It didn't provide magic results, but I did notice that once students got used to the unusual rhythm of the course they generally bought in to the approach. The new element here is the emphasis on code as the primary literature to read and study.

Teaching a course in a way that subverts student expectations and experience creates a new pedagogical need: teaching new study skills and helping students develop new work habits. Alvarado et al. recognize that this applies to using a radically different sort of textbook, too:

Might students have learned more if we encouraged them to use codelens more? We may need to teach students new study skills to take advantage of new learning resources and opportunities.


Another interesting step would be to add some meta-instruction. Can we teach students new study skills, to take advantage of the unique resources of the book? New media may demand a change in how students use the media.

I think those of us who teach at the university level underestimate how important meta-level instruction of this sort is to most students. We tend to assume that students will figure it out on their own. That's a dangerous assumption to make, at least for a discipline that tends to lose too many good students on the way to graduation.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

November 03, 2012 11:17 AM

When "What" Questions Presuppose "How"

John Cook wrote about times in mathematics when maybe you don't need to do what you were asked to do. As one example, he used remainder from division. In many cases, you don't need to do division, because you can find the answer using a different, often simpler, method.
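One concrete instance of this idea (my example, not Cook's): when the modulus is a power of two, the remainder falls out of the low bits of the number, with no division performed at all.

```python
# Remainder without division: for a power-of-two modulus m,
# n % m equals n & (m - 1) for n >= 0, because the low bits
# of n are exactly the remainder.

def remainder_pow2(n, m):
    assert n >= 0 and m > 0 and m & (m - 1) == 0, "m must be a power of two"
    return n & (m - 1)

# The same answers as division-based remainder, division-free.
for n in range(100):
    assert remainder_pow2(n, 8) == n % 8

print(remainder_pow2(1234, 8))   # 2
```

The question asked for a remainder; nothing about the question required division to produce it.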

We see a variation of John's theme in programming, too. Sometimes, a client will ask for a result in a way that presupposes the method that will be used to produce it. For example, "Use a stack to evaluate these nested expressions." We professors do this to students a lot, because they want the students to learn the particular technique specified. But you see subtle versions of this kind of request more often than you might expect outside the classroom.

An important part of learning to design software is learning to tease apart the subtle conflation of interface and implementation in the code we write. Students who learn OO programming after a traditional data structures course usually "get" the idea of data abstraction, yet still approach large problems in ways that let implementations leak out of their abstractions in the form of method names and return values. Kent Beck talked about how this problem afflicts even experienced programmers in his blog entry Naming From the Outside In.

Primitive Obsession is another symptom of conflating what we need with how we produce it. For beginners, it's natural to use base types to implement almost any behavior. Hey, the extreme programming principle You Ain't Gonna Need It encourages even us more experienced developers not to create abstractions too soon, until we know we need them and in what form. The convenience offered by hashes, featured so prominently in the scripting languages that many of us use these days, makes it easy to program for a long time without having to code a collection of any sort.
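To make the symptom concrete, here is a small, hypothetical illustration: a temperature passed around as a bare float leaks its unit into every caller, while a tiny class keeps the "what" (a temperature) apart from the "how" (the unit it happens to be stored in).

```python
# Primitive Obsession vs. a small domain object (illustrative names only).
# Callers work with a Temperature; the stored unit is an implementation
# detail they never see.

class Temperature:
    def __init__(self, celsius):
        self._celsius = celsius            # the representation is hidden

    @classmethod
    def from_fahrenheit(cls, f):
        return cls((f - 32) * 5 / 9)

    def celsius(self):
        return self._celsius

    def fahrenheit(self):
        return self._celsius * 9 / 5 + 32

t = Temperature.from_fahrenheit(212)
print(t.celsius())     # 100.0
```

If we later store the value in kelvins, only the class changes; every caller asked for a temperature, not a float in a particular unit.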

But learning to model domain objects as objects -- interfaces that do not presuppose implementation -- is one of the powerful stepping stones on the way to writing supple code, extendible and adaptable in the face of reasonable changes in the spec.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 30, 2012 4:22 PM

Mathematical Formulas, The Great Gatsby, and Small Programs

... or: Why Less Code is Better

In Don't Kill Math, Evan Miller defends analytic methods in the sciences against Bret Victor's "visions for computer-assisted creativity and scientific understanding". (You can see some of my reactions to Victor's vision in a piece I wrote about his StrangeLoop talk.)

Miller writes:

For the practicing scientist, the chief virtue of analytic methods can be summed up in a single word: clarity. An equation describing a quantity of interest conveys what is important in determining that quantity and what is not at all important.

He goes on to look at examples such as the universal law of gravitation and shows that a single formula gives even a person with "minimal education in physics" an economical distillation of what matters. The clarity provided by a good analytic solution affords the reader two more crucial benefits: confident understanding and memorable insights.

Writer Peter Turchi describes a related phenomenon in fiction writing, in his essay You and I Know, Order is Everything. A story can pull us forward by omitting details and thus creating in the reader a desire to learn more. Referring to a particularly strategic paragraph, he writes:

That first sentence created a desire to know certain information: What is this significant event? ... We still don't have an answer, but the context for the question is becoming increasingly clear -- so while we're eager to have those initial questions answered, we're content to wait a little longer, because we're getting what seems to be important information. By the third paragraph, ... we think we've got a clue; but by that time the focus of the narrative is no longer the simple fact of what's going on, but the [larger setting of the story]. The story has shifted our attention from a minor mystery to a more significant one. On some level or another nearly every successful story works this way, leading us from one mystery to another, like stepping stones across a river.

In a good story, eventually...

... we recognize that the narrator was telling us more than we could understand, withholding information but also drawing our attention to the very thing we should be looking at.

In two very different contexts, we see the same forces at play. The quality of a description follows from the balance it strikes between what is in the description and what is left out.

To me, this is another example of how a liberal education can benefit students majoring in both the sciences and the humanities [ 1 | 2 ]. We can learn about many common themes and patterns of life from both traditions. Neither is primary. A student can encounter the idea first in the sciences, or first in the humanities, whichever interests the student more. But apprehending a beautiful pattern in multiple domains of discourse can reinforce the idea and make it more salient in the preferred domain. This also broadens our imaginations, allowing us to see more patterns and more subtlety in the patterns we already know.

So: a good description, a good story, depends in some part on the clarity attendant in how it conveys what is important and what is not important. What are the implications of this pattern for programming? A computer program is, after all, a description: an executable description of a domain or phenomenon.

I think this pattern gives us insight into why less code is usually better than more code. Given two functionally equivalent programs of different lengths, we generally prefer the shorter program because it contains only what matters. The excess code found in the longer program obscures from the reader what is essential. Furthermore, as with Miller's concise formulas, a short program offers its reader the gift of more confident understanding and the opportunity for memorable insights.

What is not in a program can tell us a lot, too. One of the hallmarks of expert programmers is their ability to see the negative space in a design and program and learn from it. My students, who are generally novice programmers, struggle with this. They are still learning what matters and how to write descriptions at all, let alone concise ones. They are still learning how to focus their programming tools, and on what.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

October 19, 2012 3:08 PM

Computer Programming, Education Reform, and Changing Our Schools

Seymour Papert

You almost can't go wrong by revisiting Seymour Papert's work every so often. This morning I read Why School Reform Is Impossible, which reminds us that reform and change are different things. When people try to "reform" education by injecting a new idea from outside, schools seem to assimilate the reform into their own structures, which from the perspective of the reformer blunts or rejects the intended reform. Yet schools and our education system do change over time, evolving as the students, culture, and other environmental factors change.

As people such as Papert and Alan Kay have long argued, a big part of the problem in school reform involving computers is that we misunderstand what a computer is:

If you ask, "Which is not like the other two?" in the list "educational movie, textbook, computer", it is pretty obvious from my perspective that the answer must be "computer."

... not "textbook", which is how most people answer, including many people who want to introduce more computers into the classroom. Textbooks and movies are devices for receiving content that someone else made. Computers are for creating content. It just so happens that we can use them to communicate ideas in new ways, too.

This misunderstanding leads people to push computers for the wrong reasons, or at least for reasons that miss their game-changing power. We sometimes hear that "programming is the new Latin". Papert reminds us that the reasons we used to teach Latin in schools changed over time:

In recent times, Latin was taught in schools because it was supposed to be good for the development of general cognitive skills. Further back, it was taught because it was the language in which all scholarly knowledge was expressed, and I have suggested that computational language could come to play a similar role in relation to quite extensive areas of knowledge.

If programming is the new Latin, it's not Latin class, circa 1960, in which Latin taught us to be rigorous students. It's Latin class, circa 1860 or 1760 or 1560, in which Latin was the language of scholarly activity. As we watch computing become a central part of the language of science, communication, and even the arts and humanities, we will realize that students need to learn to read and write code because -- without that skill -- they are left out of the future.

No child left behind, indeed.

In this essay, Papert gives a short version of his discussion in Mindstorms of why we teach the quadratic equation of the parabola to every school child. He argues that its inclusion in the curriculum has more to do with its suitability to the medium of the day -- pencil and paper -- than to its intrinsic importance. I'm not too sure that's true; knowing how parabolas and ellipses work is pretty important for understanding the physical world. But it is certainly true that how and when we introduce parabolas to students can change when we have a computer and a programming language at hand.

Even at the university we encounter this collision of old and new. Every student here must take a course in "quantitative reasoning" before graduating. For years, that was considered to be "a math course" by students and advisors alike. A few years ago, the CS department introduced a new course into the area, in which students can explore a lot of the same quantitative issues using computation rather than pencil and paper. With software tools for modeling and simulation, many students can approach and even begin to solve complex problems much more quickly than they could working by hand. And it's a lot more fun, too.

To make this work, of course, students have to learn a new programming language and practice using it in meaningful ways. Papert likens it to learning a natural language like French. You need to speak it and read it. He says we would need the programming analog of "a diverse collection of books written in French and access to French-speaking people".

the Scratch logo cat

The Scratch community is taking a shot at this. The Scratch website offers not only a way to download the Scratch environment and a way to view tutorials on creating with Scratch. It also offers -- front and center, the entire page, really -- links to shared projects and galleries. This gives students a chance first to be inspired by other kids and then to download and read the actual Scratch programs that enticed them. It's a great model.

The key is to help everyone see that computers are not like textbooks and televisions and movie projectors. As Mitch Resnick has said:

Computers for most people are black boxes. I believe kids should understand objects are "smart" not because they're just smart, but because someone programmed them to be smart.

What's most important ... is that young children start to develop a relationship with the computer where they feel they're in control. We don't want kids to see the computer as something where they just browse and click. We want them to see digital technologies as something they can use to express themselves.

Don't just play with other people's products. Make your own.

Changes in the world's use of computing may do more to cause schools to evolve in a new direction than anyone's educational reforms ever could. Teaching children that they can be creators and not simply consumers is a subversive first step.


IMAGE 1: Seymour Papert at the OLPC offices in Cambridge, Massachusetts, in 2006. Source: Wikimedia Commons License: Creative Commons Attribution-Share Alike 2.0.

IMAGE 2: The Scratch logo. Source: Wikimedia Commons License: Creative Commons Attribution-Share Alike 2.0.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 17, 2012 3:32 PM

What Our Students Think of Us

Following a recommendation on Twitter, I recently read Am I A Product Of The Institutions I Attended?, the text of a speech by Amitabha Bagchi. Bagchi is a novelist who studied computer science in school. The speech is a reflection on what we learn in and out of school, which isn't always what our schools intend. He closes with some thoughts on being a teacher.

... as all of us who have been teachers for even a short while know, all we can do is give people an opportunity to learn. And if they don't learn, we can give them another opportunity, and another.

Students learn on their schedules, not ours. All we can do is to keep providing opportunities, so that when they are ready, an opportunity awaits them.

This passage:

Like so many other teachers I spend a lot of time thinking about my students, and, also like many other teachers, I don't spend enough time thinking about what they think of me.

... launches a discussion that touched a chord in me. As a high school student, Bagchi realized that students see their teacher as a figure of authority and decorum no matter the reality on any given day. The teacher may be young, or inexperienced, or emotionally out of sorts. But to them, the teacher is The Teacher.

So there you are, you poor teacher, frozen in eternal adulthood, even on those days when you wish you could just curl into a fetal position and suck your thumb instead of having to stand up and talk for an hour to a room full of young people who are looking at you, or at least should be looking at you. Sometimes in the nitty-gritty of the syllabus, the announcements about exams and homework, the clearing of the last class's doubts, you forget about the current that emerges from your body and flows out into the class. You forget what you mean to them.

It's wise to step back occasionally and remember what your students mean to you, and you to them. Long after the details of any homework assignment or midterm exam have left our minds, these relationships remain.

(And, as important as these relationships are, they are not the most important relationship in the classroom.)

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

October 16, 2012 4:45 PM

The Parable of the OO Programming Student

As The Master was setting out on a journey, a young man ran up, knelt down before him, and asked him, "Good teacher, what must I do to inherit the eternal bliss of OO?"

The Master answered him, "Why do you call me good? No OO programmer is good but The Creator alone.

"You know the commandments: "'An object should have only a single responsibility.'

"'Software entities should be open for extension, but closed for modification.'

"'Objects should be replaceable with instances of their subtypes without altering the correctness of that program.'

"'Tell, don't ask.'

"'You shall not indulge in primitive obsession.'

"'All state is private.'"

The young man replied and said to Him, "Teacher, all of these I have observed from my youth when first I learned to program."

The Master, looking at him, loved him and said to him, "You are lacking in one thing. Go, surrender all primitive types, and renounce all control structures. Write all code as messages passed between encapsulated objects, with extreme late-binding of all things. Then will you have treasure in Heaven; then come, follow me."

At that statement the young man's face fell, and he went away sad, for he possessed many data structures and algorithms.

The Master looked around and said to his disciples, "How hard it is for those who have a wealth of procedural programming experience to enter the kingdom of OO."

... with apologies to The Gospel of Mark.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 05, 2012 4:02 PM

Ready for a Weekend

After a week in which I left the classroom one day feeling like a pretty good teacher and another day feeling like I'll never get this teaching thing right, I was looking for some inspiration.

While looking at a really simple blog this morning, I got to thinking about writing some code for fun this weekend, something I don't get to do very often these days.

Then came an e-mail exchange with a former student now thinking about computing in another field. He has interesting thoughts about how computing can reach people doing real work in other disciplines, and how computing itself needs to change in order to be relevant in a changing world. It was just what I needed. Some days, the student is the teacher. Other days, it's the former student.

This all brought to mind a passage from Mark Edmondson's Why Read?:

True teachers of literature become teachers because their lives have been changed by reading. They want to offer others the same chance to be transformed. "What we have loved, / Others will love," says Wordsworth in The Prelude, "and we will teach them how."

That's how I feel about programming. I hope that most days my students can sense that in the classroom. It's good to know that at least occasionally a student is transformed, and just as often I am transformed again.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 30, 2012 12:45 PM

StrangeLoop 8: Reactions to Bret Victor's Visible Programming

The last talk I attended at StrangeLoop 2012 was Bret Victor's Visible Programming. He has since posted an extended version of his presentation, as a multimedia essay titled Learnable Programming. You really should read his essay and play the video in which he demonstrates the implementation of his ideas. It is quite impressive, and worthy of the discussion his ideas have engendered over the last few months.

In this entry, I give only a high-level summary of the idea, react to only one of his claims, and discuss only one of his design principles in any detail. This entry grew much longer than I originally intended. If you would like to skip most of my reaction, jump to the mini-essay that is the heart of this entry, Programming By Reacting, in the REPL.


Programmers often discuss their productivity as at least a partial result of the programming environments they use. Victor thinks this is dangerously wrong. It implies, he says, that the difficulty with programming is that we aren't doing it fast enough.

But speed is not the problem. The problem is that our programming environments don't help us to think. We do all of our programming in our minds, then we dump our ideas into code via the editor.

Our environments should do more. They should be our external imagination. They should help us see how our programs work as we are writing them.

This is an attractive guiding principle for designing tools to help programmers. Victor elaborates this principle into a set of five design principles for an environment:

  • read the vocabulary -- what do these words mean?
  • follow the flow -- what happens when?
  • see the state -- what is the computer thinking?
  • create by reacting -- start somewhere, then sculpt
  • create by abstracting -- start concrete, then generalize

Victor's talk then discussed each design principle in detail and showed how one might implement the idea using JavaScript and Processing.js in a web browser. The demo was cool enough that the StrangeLoop crowd broke into applause at least twice during the talk. Read the essay.


As I watched the talk, I found myself reacting in a way I had not expected. So many people have spoken so highly of this work. The crowd was applauding! Why was I not as enamored? I was impressed, for sure, and I was thinking about ways to use these ideas to improve my teaching. But I wasn't falling head over heels in love.

A Strong Claim

First, I was taken aback by a particular claim that Victor made at the beginning of his talk as one of the justifications for this work:

If a programmer cannot see what a program is doing, she can't understand it.

Unless he means this metaphorically, seeing "in the mind's eye", then it is simply wrong. We do understand things we don't see in physical form. We learn many things without seeing them in physical form. During my doctoral study, I took several courses in philosophy, and only rarely did we have recourse to images of the ideas we were studying. We held ideas in our head, expressed in words, and manipulated them there.

We did externalize ideas, both as a way to learn them and think about them. But we tended to use stories, not pictures. By speaking an idea, or writing it down, and sharing it with others, we could work with them.

So, my discomfort with one of Victor's axioms accounted for some of my unexpected reaction. Professional programmers can and do manipulate ideas abstractly. Visualization can help, but when is it necessary, or even most helpful?

Learning, Versus Doing

This leads to a second element of my concern. I think I had a misconception about Victor's work. His talk and its title, "Visible Programming", led me to think his ideas are aimed primarily at working programmers, that we need to make programs visible for all programmers.

The title of his essay, "Learnable Programming", puts his claims into a different context. We need to make programs visible for people who are learning to program. This seems a much more reasonable position on its face. It also lets me see the axiom that bothered me so much in a more sympathetic light: If a novice programmer cannot see what a program is doing, then she may not be able to understand it.

Seeing how a program works is a big part of learning to program. A few years ago, I wrote about "biction" and the power of drawing a picture of what code does. I often find that if I require a student to draw a picture of what his code is doing before he can ask me for debugging help, he will answer his own question before getting to me.

The first time a student experiences this can be a powerful experience. Many students begin to think of programming in a different way when they realize the power of thinking about their programs using tools other than code. Visible programming environments can play a role in helping students think about their programs, outside their code and outside their heads.

I am left puzzling over two thoughts:

  • How much of the value my students see in pictures comes not from seeing the program work but from drawing the picture themselves -- the act of reflecting about the program? If our tools visualize the code for them, will we see the same learning effect that we see when they draw their own pictures?

  • Certainly Victor's visible programming tools can help learners. How much will they help programmers once they become experts? Ben Shneiderman's Designing the User Interface taught me that novices and experts have different needs, and that it's often difficult to know what works well for experts until we run experiments.

Mark Guzdial has written a more detailed analysis of Victor's essay from the perspective of a computer science educator. As always, Mark's ideas are worth reading.

Programming By Reacting, in the REPL

My favorite parts of this talk were the sections on creating by reacting and abstracting. Programmers, Victor says, don't work like other creators. Painters don't stare at a blank canvas, think hard, create a painting in their minds, and then start painting the picture they know they want to create. Sculptors don't stare at a block of stone, envision in their mind's eye the statue they intend to make, and then reproduce that vision in stone. They start creating, and react, both to the work of art they are creating and to the materials they are using.

Programmers, Victor says, should be able to do the same thing -- if only our programming environments helped us.

As a teacher, I think this is an area ripe for improvement in how we help students learn to program. Students open up their text editor or IDE, stare at that blank screen, and are terrified. What do I do now? A lot of my work over the last fifteen to twenty years has been in trying to find ways to help students get started, to help them to overcome the fear of the blank screen.

My approaches haven't been through visualization, but through other ways to think about programs and how we grow them. Elementary patterns can give students tools for thinking about problems and growing their code at a scale larger than characters or language keywords. An agile approach can help them start small, add one feature at a time, proceed in confidence with working tests, and refactor to make their code better as they go along. Adding Victor-style environment support for the code students write in CS1 and CS2 would surely help as well.

However, as I listened to Victor describe support for creating by reacting, and then abstracting variables and functions out of concrete examples, I realized something. Programmers don't typically write code in an environment with data visualizations of the sort Victor proposes, but we do program in the style that such visualizations enable.

We do it in the REPL!

A simple, interactive computer programming environment enables programmers to create by reacting.

  • They write short snippets of code that describe how a new feature will work.
  • They test the code immediately, seeing concrete results from concrete examples.
  • They react to the results, shaping their code in response to what the code and its output tell them.
  • They then abstract working behaviors into functions that can be used to implement another level of functionality.
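In Python, say, that cycle might look like the following sketch. The example and names are my own, purely illustrative, not taken from the talk:

```python
# Step 1: write a short, concrete snippet and look at the result immediately.
scores = [88, 92, 75]
total = sum(scores)
print(total)                 # react to the concrete output: 255

# Step 2: shape the code in response -- what we really wanted was an average.
print(total / len(scores))   # 85.0

# Step 3: abstract the working behavior into a function, ready to be used
# at the next level of functionality.
def average(values):
    return sum(values) / len(values)

print(average(scores))       # 85.0
```

Each line is tried against concrete data first; the abstraction comes last, only after the behavior has been seen to work.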

Programmers from the Lisp and Smalltalk communities, and from the rest of the dynamic programming world, will recognize this style of programming. It's what we do, a form of creating by reacting, from concrete examples in the interaction pane to code in the definitions pane.

In the agile software development world, test-first development encourages a similar style of programming, from concrete examples in the test case to minimal code in the application class. Test-driven design stimulates an even more consciously reactive style of programming, in which the programmer reacts both to the evolving program and to the programmer's evolving understanding of it.
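As a minimal sketch of that rhythm, again in Python with hypothetical names of my own choosing: the concrete example comes first, as a test, and the code is written in reaction to it.

```python
import re

# The test comes first: a concrete example states what the code should do.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Minimal code, written in reaction to the failing test.
def slugify(title):
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()   # passes silently once the behavior is right
```

The test plays the role of the REPL's interaction pane: a concrete probe that the growing program must answer.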

The result is something similar to Victor's goal for programmers as they create abstractions:

The learner always gets the experience of interactively controlling the lower-level details, understanding them, developing trust in them, before handing off that control to an abstraction and moving to a higher level of control.

It seems that Victor would like to provide even more support for novices than these tools do, down to visualizing what the program does as they type each line of code. An IDE with autocomplete is perhaps the closest analog in our current arsenal. Perhaps we can do more, not only for novices but also for professionals.


I love the idea that our environments could do more for us, to be our external imaginations.

Like many programmers, though, as I watched this talk, I occasionally wondered, "Sure, this works great if you're creating art in Processing. What about when I'm writing a compiler? What should my editor do then?"

Victor anticipated this question and pre-emptively answered it. Rather than asking, How does this scale to what I do?, we should turn the question inside out and ask, These are the design requirements for a good environment. How do we change programming to fit?

I doubt such a dogmatic turn will convince skeptics with serious doubts about this approach.

I do think, though, that we can reformulate the original question in a way that focuses on helping "real" programmers. What does a non-graphical programmer need in an external imagination? What kind of feedback -- frequent, even in-the-moment -- would be most helpful to, say, a compiler writer? How could our REPLs provide even more support for creating, reacting, and abstracting?

These questions are worth asking, whatever one thinks of Victor's particular proposal. Programmers should be grateful for his causing us to ask them.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 20, 2012 8:09 PM

Computer Science is a Liberal Art

Over the summer, I gave a talk as part of a one-day conference on the STEM disciplines for area K-12, community college, and university advisors. They were interested in, among other things, the kind of classes that CS students take at the university and the kind of jobs they get when they graduate.

In the course of talking about how some of the courses our students take (say, algorithms and the theory of computing) seem rather disconnected from many of the jobs they get (say, web programmer and business analyst), I claimed that the more abstract courses prepare students to understand the parts of the computing world that never change, and the ones that do. The specific programming languages or development stack they use after they graduate to build financial reporting software may change occasionally, but the foundation they get as a CS major prepares them to understand what comes next and to adapt quickly.

In this respect, I said, a university CS education is not job training. Computer Science is a liberal art.

This is certainly true when you compare university CS education with what students get at a community college. Students who come out of a community college networking program often possess specific marketable skills at a level we are hard-pressed to meet in a university program. We bank our program's value on how well it prepares students for a career in which networking infrastructure changes multiple times and our grads are asked to work at the intersection of networks and other areas of computing, some of which may not exist yet.

It is also true relative to the industries they enter after graduation. A CS education provides a set of basic skills and, more important, several ways to think about problems and formulate solutions. Again, students who come out of a targeted industry or 2-year college training program in, say, web dev, often have "shovel ready" skills that are valuable in industry and thus highly marketable. We bank our program's value on how well it prepares students for a career in which ASP turns to JSP turns to PHP turns to JavaScript. Our students should be prepared to ramp up quickly and have a shovel in their hands producing value soon.

And, yes, students in a CS program must learn to write code. That's a basic skill. I often hear people comment that computer science programs do not prepare students well for careers in software development. I'm not sure that's true, at least at schools like mine. We can't get away with teaching all theory and abstraction; our students have to get jobs. We don't try to teach them everything they need to know to be good software developers, or even many particular somethings. That should and will come on the job. I want my students to be prepared for whatever they encounter. If their company decides to go deep with Scala, I'd like my former students to be ready to go with them.

In a comment on John Cook's timely blog entry How long will there be computer science departments?, Daniel Lemire suggests that we emulate the model of medical education, in which doctors serve several years in residency, working closely with experienced doctors and learning the profession deeply. I agree. Remember, though, that aspiring doctors go to school for many years before they start residency. In school, they study biology, chemistry, anatomy, and physiology -- the basic science at the foundation of their profession. That study prepares them to understand medicine at a much deeper level than they otherwise might. That's the role CS should play for software developers.

(Lemire also smartly points out that programmers have the ability to do residency almost any time they like, by joining an open source project. I love to read about how Dave Humphrey and people like him bring open-source apprenticeship directly into the undergrad CS experience and wonder how we might do something similar here.)

So, my claim that Computer Science is a liberal arts program for software developers may be crazy, but it's not entirely crazy. I am willing to go even further. I think it's reasonable to consider Computer Science as part of the liberal arts for everyone.

I'm certainly not the first person to say this. In 2010, Doug Baldwin and Alyce Brady wrote a guest editors' introduction to a special issue of the ACM Transactions on Computing Education called Computer Science in the Liberal Arts. In it, they say:

In late Roman and early medieval times, seven fields of study, rooted in classical Greek learning, became canonized as the "artes liberales" [Wagner 1983], a phrase denoting the knowledge and intellectual skills appropriate for citizens free from the need to labor at the behest of others. Such citizens had ample leisure time in which to pursue their own interests, but were also (ideally) civic, economic, or moral leaders of society.


[Today] people ... are increasingly thinking in terms of the processes by which things happen and the information that describes those processes and their results -- as a computer scientist would put it, in terms of algorithms and data. This transformation is evident in the explosion of activity in computational branches of the natural and social sciences, in recent attention to "business processes," in emerging interest in "digital humanities," etc. As the transformation proceeds, an adequate education for any aspect of life demands some acquaintance with such fundamental computer science concepts as algorithms, information, and the capabilities and limitations of both.

The real value in a traditional Liberal Arts education is in helping us find better ways to live, to expose us to the best thoughts of men and women in hopes that we choose a way to live, rather than have history or accident choose a way to live for us. Computer science, like mathematics, can play a valuable role in helping students connect with their best aspirations. In this sense, I am comfortable at least entertaining the idea that CS is one of the modern liberal arts.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

September 16, 2012 9:17 AM

An Advantage To Needing to Ask For Help

A week or so ago I tweeted about a student who came by the office for some help with a programming assignment. When told that he'd be hearing this explanation again in class later in the day, he said that was okay: "that way I get to learn it better".

This episode came to mind again when I wrote this paragraph in my entry yesterday about learning to talk about their programs, the ideas they embody, and the reasons they wrote the code they did:

To learn such skills, students need practice. The professor needs to ask open-ended questions in class and out. Asking good questions outside of class is especially important because it's in the hallway, the lab, and office hours where students find themselves one on one with the professor and have to answer the questions themselves. They can't fall back on the relative anonymity of even a small class to avoid the hard work of forming a thought and trying to say it out loud.

This is another good reason for students to go to office hours and otherwise to engage with the professor outside of class: Not only do they get answers to their questions; they also get more and better practice talking about problems and solutions than students who don't.

This offers an unexpected advantage to the student who doesn't quite "get it" yet over the student who just barely gets it: The former might well come in to get help. The latter probably won't. The former gets more interaction and more individualized practice. The latter gets neither.

A lot of professors encourage all students to talk to them about their programs. Obviously, students doing poorly can benefit from help. Students at the top of the curve are ones who often can benefit from going beyond what they see in class, going farther or deeper. But perhaps we should work even harder to encourage an unlikely group of students to come see us: the ones who are doing just well enough. They don't stand out in any way that makes the prof seek them out, and that may place them at risk of losing ground to other students.

That's a tough sell, though. It's human nature not to seek help when we don't think we need it. If we seem to understand the material and are doing okay on the assignments, do we really need help? There are also pragmatic issues like time management. Students who are doing okay in my course are likely to focus their efforts on other courses, where they feel like they need more work and help.

So, it becomes even more important for the professor to engage all of the students in a course, both as a group and as individuals, in reflective thinking, speaking, and writing. Otherwise, some students who need practice with these skills might not seek it out on their own.

Most of us know that there was an advantage to being willing to ask questions. Rarely do we think about how there might be an advantage to needing to ask questions.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 15, 2012 10:04 AM

Express Yourself

One of my CS colleagues gives only multiple-choice and true/false tests. He swears by how well scores on them track with performance on programming assignments, which he views as a valuable form of consistency. And, boy, are they easy to grade.

I'm not a fan. I can't recall the last time I used them. If I ever did, it was surely for a narrow purpose, say, to test quickly the student's familiarity with vocabulary. "Choose one" questions are too limiting to be used wholesale, because they test only recognition, not recall. They certainly can't test the synthetic skills we need as programmers and solvers of problems, and those are the most important part of computer science.

My course this semester has brought this idea to the forefront of my mind again. I've helped students a lot on their first two programming assignments, as they learn a little Java and revisit the concept of modularity they learned in their Data Structures course. I've been struck by how often they cannot talk about their code.

When I ask "What is this statement doing?" or "Why are you <x>?", many struggle. They want to answer in terms of the code they have written, either reading it back to me in English or telling me the code that surrounds it. This is true for almost any value of <x>. This week, "assigning a value to this variable" and "using an if statement" were common topics.

Most of these students had a reason that they wrote the code they did, even if it was merely a reaction to a specific behavior they observed in the compiler or executable. But even in the best cases the idea is usually not clear in their minds. Unclear ideas are nebulous and rambling. This is why the students have so much trouble trying to express them at all, let alone concisely.

Perhaps it's not surprising that students at this level struggle with this problem. Most are coming directly out of their first-year courses, where they learned a single programming language and used it to build data structures and small programs. They are often able to solve the tasks we set before them in these courses without having to understand their own programs at a deep level. The tasks are relatively simple, and the programs are small enough that bits of code cobbled together from examples they've seen get the job done.

It takes practice to learn how to think at a higher level, to express ideas about solutions and code without speaking in terms of code, or even the solutions they have settled on.

To learn such skills, students need practice. The professor needs to ask open-ended questions in class and out. Asking good questions outside of class is especially important because it's in the hallway, the lab, and office hours where students find themselves one on one with the professor and have to answer the questions themselves. They can't fall back on the relative anonymity of even a small class to avoid the hard work of forming a thought and trying to say it out loud.

They also need practice in the form of homework questions, quizzes, and even tests. Each of these exacts an increasing level of accountability, which creates an increasing sense of urgency for their efforts. One of my favorite experiences as a teacher is when students visit after a course is over and tell me about the pride they felt while taking the final exam and finally -- finally! -- feeling like they could express the thoughts they had in their minds. That's a proud moment for the teacher, too.

The first four weeks of this course remind me that one of the essential goals for the intermediate computing course is to strengthen the skills needed to reason about ideas and express them clearly. When students make the move to a second programming language and a second programming style, they have to come to grips with the fact that programming is not just about Python or Ada anymore. In fact, it never was.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 11, 2012 3:47 PM

The Kind of Teacher We All Want To Have

In an old Geek of the Week interview, the interviewer asks Donald Knuth what gets him started writing. Knuth gives an answer that puts his writing in the context of teaching someone who is reading the book because she wants to, not because she must. His answer culminates in:

Instead of trying to impress the reader with what I know, I try to explain why the things I've learned impress me.

That's not only the kind of teacher we all want to have, it's the kind of teacher we all want to be.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 05, 2012 5:24 PM

Living with the Masters

I sometimes feel guilty that most of what I write here describes connections between teaching or software development and what I see in other parts of the world. These connections are valuable to me, though, and writing them down is valuable in another way.

I'm certainly not alone. In Why Read?, Mark Edmondson argues for the value of reading great literature and trying on the authors' view of the world. Doing so enables us to better understand our own view of the world. It also gives us the raw material out of which to change our worldview, or build a new one, when we encounter better ideas. In the chapter "Discipline", Edmondson writes:

The kind of reading that I have been describing here -- the individual quest for what truth a work reveals -- is fit for virtually all significant forms of creation. We can seek vital options in any number of places. They may be found for this or that individual in painting, in music, in sculpture, in the arts of furniture making or gardening. Thoreau felt he could derive a substantial wisdom by tending his bean field. He aspired to "know beans". He hoed for sustenance, as he tells us, but he also hoed in search of tropes, comparisons between what happened in the garden and what happened elsewhere in the world. In his bean field, Thoreau sought ways to turn language -- and life -- away from old stabilities.

I hope that some of my tropes are valuable to you.

The way Edmondson writes of literature and the liberal arts applies to the world of software in much more direct ways, too. First, there is the research literature of computing and software development. One can seek truth in the work of Alan Kay, David Ungar, Ward Cunningham, or Kent Beck. One can find vital options in the life's work of Robert Floyd, Peter Landin, or Alan Turing; Herbert Simon, Marvin Minsky, or John McCarthy. I spent much of my time in grad school immersed in the writings and work of B. Chandrasekaran, which affected my view of intelligence in both humans and machines.

Each of these people offers a particular view into a particular part of the computing world. Trying out their worldviews can help us articulate our own worldviews better, and in the process of living their truths we sometimes find important new truths for ourselves.

We in computing need not limit ourselves to the study of research papers and books. As Edmundson says, the individual quest for the truth revealed in a work "is fit for virtually all significant forms of creation". Software is a significant form of creation, one not available to our ancestors even sixty years ago. Live inside any non-trivial piece of software for a while, especially one that has withstood the buffets of human desire over a period of time, and you will encounter truth -- truths you find there, and truths you create for yourself. A few months trying on Smalltalk and its peculiar view of the world taught me OOP and a whole lot more.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Software Development, Teaching and Learning

September 04, 2012 4:31 PM

The Trust Between Student and Teacher

or, "Where's Your Github Repo?"

The last couple of class sessions have taught me a lot about what my students know and what they are ready to learn about programming. As a result, I'm adjusting my expectations and plans for the course. Teachers have to be agile.

A couple of my students are outliers. They have a lot of programming experience, sometimes with relatively exotic languages like Clojure. I hope that their time with me this semester is as useful to them as it is to less experienced students. Even the more advanced students usually have a lot to learn about building big systems, and especially about OOP.

After an interesting conversation with one of the more advanced students today after class, I was reminded of the role that trust plays in learning. A student has to trust that his or her professor knows enough of the right stuff to teach the class.

Most students trust their professors implicitly, based on their academic degrees and their employment at a university. (*) That makes teaching much easier. If I as a teacher start every new topic or even every course having to establish my credibility, or build trust from scratch, progress is slow.

Once the course gets going, every interaction between professor or student either reinforces that implicit trust or diminishes it. That's one of the most important features of every class day, and one we professors don't always think about explicitly as we prepare.

Working with more advanced students can be a challenge, especially in a course aimed at less experienced students. Much of what we talk about in class is at a more basic level than the one on which the advanced students are already working. "Why listen to a prof talk about the design of a method when I've already written thousands of lines of code in a complex language?"

That's a fair question. It must be tough for some students at that level to accord the professor the same level of implicit trust that a beginner does. This wasn't a problem for me when I was among the more advanced students in the class. I found it easy to trust my professors' expertise. That's how I was raised. But not all students have that initial disposition.

I've learned over the years to empathize more in this regard with the experienced students in my classes and to make efforts to earn and maintain their trust. Without that, I have little chance of helping them to learn anything in my course.

Of course, we also live increasingly in a world in which the tools of our trade give us a vehicle for establishing trust. I love it when students or prospective students ask me, "What are you working on in your spare time?" I'm waiting for the day when our upper-division students routinely ask their profs, "Where's your Github repo?"


(*) The department head in me is acutely aware that the department also puts its credibility on the line every time a professor walks into the classroom. That is a big challenge even for departments with strong faculties.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 01, 2012 10:18 AM

Making Assumptions

a dartboard

Patrick Honner has been writing a series of blog posts reviewing problems from the June 2012 New York State Math Regents exams. A recent entry considered a problem in which students were asked to compute the probability that a dart hits the bull's eye on a dartboard. This question requires the student to make a specific assumption: "that every point on the target is equally likely to be hit". Honner writes:

... It's not necessarily bad that we make such assumptions: refining and simplifying problems so they can be more easily analyzed is a crucial part of mathematical modeling and problem solving.

What's unfortunate is that, in practice, students are kept outside this decision-making process: how and why we make such assumptions isn't emphasized, which is a shame, because exploring such assumptions is a fundamental mathematical process.

The same kinds of assumptions are built into even the most realistic problems that we set before our students. But discussing assumptions is an essential part of doing math. Which assumptions are reasonable? Which are necessary? What is the effect of a particular assumption on the meaning of the problem, on the value of the answer we will obtain? This kind of reasoning is, in many ways, the real math in a problem. Once we have a formula or two, we are down to crunching numbers. That's arithmetic.
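In code, the assumption does all the work. Here is a minimal Java sketch of the dartboard calculation, using hypothetical radii (the exam's actual numbers are not reproduced here): under the "every point is equally likely" assumption, the πr² factors cancel and the probability is just a ratio of areas.

```java
public class Dartboard {
    // Under the assumption that every point on the board is equally
    // likely to be hit, P(bull's eye) = area(bull) / area(board).
    // The pi factors cancel, leaving (rBull / rBoard)^2.
    public static double bullseyeProbability(double rBull, double rBoard) {
        return (rBull * rBull) / (rBoard * rBoard);
    }

    public static void main(String[] args) {
        // Hypothetical radii: a 2-inch bull's eye on a 9-inch board.
        // Roughly a 5% chance -- but only because of the assumption.
        System.out.println(bullseyeProbability(2.0, 9.0));
    }
}
```

The interesting classroom discussion is not this arithmetic but the one-line comment above it: what happens to the model if players aim, so that points are *not* equally likely?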

Computer science teachers face the same risks when we pose problems to our students, including programming problems. Discovering the boundaries of a problem and dealing with the messy details that live on the fringe are an essential part of making software. When we create assignments that can be neatly solved in a week or two, we hide "a fundamental computing process" from our students. We also rob them of a lot of fun.

As Honner says, though, making assumptions is not necessarily bad. In the context of teaching a course, they are necessary. Sometimes, we need to focus our students' attention on a specific new skill to be learned or honed. Tidying up the boundaries of a problem brings that skill into greater relief and eliminates what are, at the moment, unnecessary distractions.

It is important, though, for a computing curriculum to offer students increasing opportunities to confront the assumptions we make and begin to make assumptions for themselves. That level of modeling is also a specific skill to be learned and honed. It also can make class more fun for the professor, if a lot messier when it comes time to evaluate student work and assign grades.

Even when we have to make assumptions prior to assigning a problem, discussing them explicitly with students can open their eyes to the rest of the complexity in making software. Besides, some students already sense or know that we are hiding details from them, and having the discussion is a way to honor their knowledge -- and earn their respect.

So, the next time you assign a problem, ask yourself: What assumptions have I made in simplifying this problem? Are they necessary? If not, can I loosen them? If yes, can my students benefit from discussing them?

And be prepared... If you leave a few messy assumptions lying around a problem for your students to confront and make on their own, some students will be unhappy with you. As Honner says, we teachers spend a lot of time training students to make implicit assumptions unthinkingly. In some ways, we are too successful for our own good.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 31, 2012 3:22 PM

Two Weeks Along the Road to OOP

The month has flown by, preparing for and now teaching our "intermediate computing" course. Add to that a strange and unusual set of administrative issues, and I've found no time to blog. I did, however manage to post what has become my most-retweeted tweet ever:

I wish I had enough money to run Oracle instead of Postgres. I'd still run Postgres, but I'd have a lot of cash.

That's an adaptation of a tweet originated by @petdance and retweeted my way by @logosity. I polished it up, sent it off, and -- it took off for the sky. It's been fun watching its ebb and flow, as it reaches new sub-networks of people. From this experience I must learn at least one lesson: a lot of people are tired of sending money to Oracle.

The first two weeks of my course have led the students a few small steps toward object-oriented programming. I am letting the course evolve, with a few guiding ideas but no hard-and-fast plan. I'll write about the course's structure after I have a better view of it. For now, I can summarize the first four class sessions:

  1. Run a simple "memo pad" app, trying to identify behavior (functions) and state (persistent data). Discuss how different groupings of the functions and data might help us to localize change.
  2. Look at the code for the app. Discuss the organization of the functions and data. See a couple of basic design patterns, in particular the separation of model and view.
  3. Study the code in greater detail, with a focus on the high-level structure of an OO program in Java.
  4. Study the code in greater detail, with a focus on the lower-level structure of classes and methods in Java.
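The separation of model and view that session 2 points at can be sketched in a few lines of Java. The classes below are hypothetical stand-ins, not the actual code of the course's memo pad app:

```java
import java.util.ArrayList;
import java.util.List;

// The model: the state and behavior of the memos, with no knowledge
// of how they are displayed.
class MemoPad {
    private final List<String> memos = new ArrayList<>();

    public void add(String memo) {
        memos.add(memo);
    }

    public List<String> allMemos() {
        return new ArrayList<>(memos);   // defensive copy
    }
}

// A view: knows how to render a MemoPad, but holds no memo state of
// its own. Swapping in a GUI view would not touch MemoPad at all.
class TextView {
    public String render(MemoPad pad) {
        StringBuilder sb = new StringBuilder();
        for (String memo : pad.allMemos()) {
            sb.append("* ").append(memo).append("\n");
        }
        return sb.toString();
    }
}
```

This is the "localize change" idea from session 1 in miniature: a change to how memos are displayed lives entirely in the view, and a change to how they are stored lives entirely in the model.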

The reason we can spend so much time talking about a simple program is that students come to the course without (necessarily) knowing any Java. Most come with knowledge of Python or Ada, and their experiences with such different languages create an interesting space in which to encounter Java. Our goal this semester is for students to learn their second language on their own as much as possible, rather than having me "teach" it to them. I'm trying to expose them to a little more of the language each day, as we learn about design in parallel. This approach works reasonably well with Scheme and functional programming in a programming languages course. I'll have to see how well it works for Java and OOP, and adjust accordingly.

Next week we will begin to create things: classes, then small systems of classes. Homework 1 has them implementing a simple array-based class to an interface. It will be our first experience with polymorphic objects, though I plan to save that jargon for later in the course.

Finally, this is the new world of education: my students are sending me links to on-line sites and videos that have helped them learn programming. They want me to check them out and share them with the other students. Today I received a link to The New Boston, which has among its 2500+ videos eighty-seven beginning Java and fifty-nine intermediate Java titles. Perhaps we'll come to a time when I can out-source all instruction on specific languages and focus class time on higher-level issues of design and programming...

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

August 27, 2012 12:53 PM

My Lack of Common Sense

In high school, I worked after school doing light custodial work for a local parochial school for a couple of years. One summer, a retired guy volunteered to lead me and a couple of other kids doing maintenance projects at the school and church.

One afternoon, he found me trying to loosen the lid on a paint can using one of my building keys. Yeah, that was stupid. He looked at me as if I were an alien, got a screwdriver, and opened the can.

Later that summer, I overheard him talking to the secretary. She asked how I was working out, and he said something to the effect of "nice kid, but he has no common sense".

That stung. He was right, of course, but no one likes to be thought of as not capable, or not very smart. Especially someone who likes to think of himself as someone who knows stuff.

I still remember that eavesdropped conversation after all these years. I knew just what he meant at the time, and I still do. For many years I wondered, what was wrong with me?

It's true that I didn't have much common sense as a handyman back then. To be honest, I probably still don't. I didn't have much experience doing such projects before I took that job. It's not something I learned from my dad. I'd never seen a bent key before, at least not a sturdy house key or car key, and I guess it didn't occur to me that one could bend.

The A student in me wondered why I hadn't deduced the error of my ways from first principles. As with the story of Zog, it was immediately obvious as soon as it was pointed out to me. Explanation-based learning is for real.

Over time, though, I have learned to cut myself some slack. Voltaire was right: Common sense is not so common. These days, people often say that to mean there are far too many people like me who don't have the sense to come in out of the rain. But, as the folks at Wikipedia recognize, that sentence can mean something else even more important. Common sense isn't shared whenever people have not had the same experiences, or have not learned it some other way.

Maybe there are still some things that most of us can count on as common, by virtue of living in a shared culture. But I think we generally overestimate how much of any given person's knowledge is like that. With an increasingly diverse culture, common experience and common cultural exposure are even harder to come by.

That gentleman and secretary probably forgot about their conversation within minutes, but the memory of his comment still stings a little. I don't think I'd erase the memory, though, even if I could. Every so often, it reminds me not to expect my students to have too much common sense about programs or proofs or programming languages or computers.

Maybe they just haven't had the right experiences yet. It's my job to help them learn.

Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

August 13, 2012 3:56 PM

Lessons from Unix for OO Design

Pike and Kernighan's Program Design in the UNIX Environment includes several ideas I would like for my students to learn in my Intermediate Computing course this fall. Among them:

... providing the function in a separate program makes convenient options ... easier to invent, because it isolates the problem as well as the solution.

In OO, objects are the packages that create possibilities for us. The beauty of this lesson is the justification: because a class isolates the problem as well as the solution.

This solution affects no other programs, but can be used with all of them.

This is one of the great advantages of polymorphic objects.

The key to problem-solving on the UNIX system is to identify the right primitive operations and to put them at the right place.

Methods should live in the objects whose data they manipulate. One of the hard lessons for novice OO programmers coming from a procedural background is putting methods with the thing, not a faux actor.

UNIX programs tend to solve general problems rather than special cases.

Objects that are too specific should be rare, at least for beginning programmers. Specificity in interface often indicates that implementation detail is leaking out.

Merely adding features does not make it easier for users to do things -- it just makes the manual thicker.

Keep objects small and focused. A big interface is often evidence of an object waiting to be born.
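These parallels can be made concrete with a small example. The interface below is hypothetical, not from Pike and Kernighan, but it shows a primitive operation "put at the right place": one small, focused interface, two interchangeable implementations, and client code written against the interface that neither implementation can break.

```java
// A small, focused interface: one primitive operation, isolated
// from any particular implementation.
interface Logger {
    void log(String message);
}

// Like a Unix program, each implementation "affects no other
// programs, but can be used with all of them."
class ConsoleLogger implements Logger {
    public void log(String message) {
        System.out.println(message);
    }
}

class MemoryLogger implements Logger {
    private final StringBuilder buffer = new StringBuilder();

    public void log(String message) {
        buffer.append(message).append("\n");
    }

    public String contents() {
        return buffer.toString();
    }
}
```

Adding a third implementation, say a file-backed logger, requires no change to any code that depends only on Logger. That is the polymorphic analogue of adding a new filter to a Unix pipeline.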


In many ways, The Unix Way is contrary to object-oriented programming. Or so many of my Linux friends tell me. But I'm quite comfortable with the parallels found in these quotes, because they are more about good design in general than about Unix or OOP themselves.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 09, 2012 1:36 PM

Sentences to Ponder

In Why Read?, Mark Edmundson writes:

A language, Wittgenstein thought, is a way of life. A new language, whether we learn it from a historian, a poet, a painter, or a composer of music, is potentially a new way to live.

Or from a programmer.

In computing, we sometimes speak of Perlis languages, after one of Alan Perlis's best-known epigrams: A language that doesn't affect the way you think about programming is not worth knowing. A programming language can change how we think about our craft. I hope to change how my students think about programming this fall, when I teach them an object-oriented language.

But for those of us who spend our days and nights turning ideas into programs, a way of thinking is akin to a way of life. That is why the wider scope of Wittgenstein's assertion strikes me as so appropriate for programmers.

Of course, I also think that programmers should follow Edmundson's advice and learn new languages from historians, writers, and artists. Learning new ways to think and live isn't just for humanities majors.

(By the way, I'm enjoying reading Why Read? so far. I read Edmundson's Teacher many years ago and recommend it highly.)

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

August 08, 2012 1:50 PM

Examples First, Names Last

Earlier this week, I reviewed a draft chapter from a book a friend is writing, which included a short section on aspect-oriented programming. The section used common AOP jargon: "cross cutting", "advice", and "point cut". I know enough about AOP to follow his text, but I figured that many of his readers -- young software developers from a variety of backgrounds -- would not. On his use of "cross cutting", I commented:

Your ... example helps to make this section concrete, but I bet you could come up with a way of explaining the idea behind AOP in a few sentences that would be (1) clear to readers and (2) not use "cross cutting". Then you could introduce the term as the name of something they already understand.

This may remind you of the famous passage from Richard Feynman about learning names and understanding things. (It is also available on a popular video clip.) Given that I was reviewing a chapter for a book of software patterns, it also brought back memories of advice that Ralph Johnson gave many years ago on the patterns discussion list. Most people, he said, learn best from concrete examples. As a result, we should write software patterns in such a way that we lead with a good example or two and only then talk about the general case. In pattern style, he called this idea "Concrete Before Abstract".

I try to follow this advice in my teaching, though I am not dogmatic about it. There is a lot of value in mixing up how we organize class sessions and lectures. First, different students connect better with some approaches than others, so variety increases the chances of connecting with everyone a few times each semester. Second, variety helps to keep students interested, and being interested is a key ingredient in learning.

Still, I have a preference for approaches that get students thinking about real code as early as possible. Starting off by talking about polymorphism and its theoretical forms is a lot less effective at getting the idea across to undergrads than showing students a well-chosen example or two of how plugging a new object into an application makes it easier to extend and modify programs.

So, right now, I have "Concrete Before Abstract" firmly in mind as I prepare to teach object-oriented programming to our sophomores this fall.

Classes start in twelve days. I figured I'd be blogging more by now about my preparations, but I have been rethinking nearly everything about the way I teach the course. That has left my mind more muddled than settled for long stretches. Still, my blog is my outboard brain, so I should be rethinking more in writing.

I did have one crazy idea last night. My wife learned Scratch at a workshop this summer and was talking about her plans to use it as a teaching tool in class this fall. It occurred to me that implementing Scratch would be a fun exercise for my class. We'll be learning Java and a little graphics programming as a part of the course, and conceptually Scratch is not too many steps from the pinball game construction kit in Budd's Understanding Object-Oriented Programming with Java, the textbook I have used many times in the course. I'm guessing that Budd's example was inspired by Bill Budge's game for Electronic Arts, Pinball Construction Set. (Unfortunately, Budd's text is now almost as out of date as that 1983 game.)

Here is an image of a game constructed using the pinball kit and Java's AWT graphics framework:

a pinball game constructed using a simple game kit

The graphical ideas needed to implement Scratch are a bit more complex, including at least:

  • The items on the canvas must be clickable and respond to messages.
  • Items must be able to "snap" together to create units of program. This could happen when a container item such as a choice or loop comes into contact with an item it is to contain.

The latter is an extension of collision-detecting behavior that students would be familiar with from earlier "ball world" examples. The former is something we occasionally do in class anyway; it's awfully handy to be able to reconfigure the playing field after seeing how the game behaves with the ball in play. The biggest change would be that the game items are little pieces of program that know how to "interpret" themselves.
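A minimal sketch of that "snap" behavior, assuming the ball-world style of rectangular bounding boxes. The class and method names here are mine, hypothetical, not from Budd's text:

```java
import java.awt.Rectangle;

// A program item on the canvas, in the style of the ball-world
// examples: a bounding box plus behavior.
class BlockItem {
    Rectangle bounds;

    BlockItem(int x, int y, int width, int height) {
        bounds = new Rectangle(x, y, width, height);
    }

    // Snapping is an extension of collision detection: when a dragged
    // item overlaps a container (a loop or a choice), align it flush
    // beneath the container's header.
    boolean snapTo(BlockItem container) {
        if (bounds.intersects(container.bounds)) {
            bounds.setLocation(container.bounds.x,
                               container.bounds.y + container.bounds.height);
            return true;
        }
        return false;
    }
}
```

The real work, of course, is what comes after the snap: recording the containment relationship so that each assembled structure can "interpret" itself as a little piece of program.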

As always, the utility of a possible teaching idea lies in the details of implementing it. I'll give it a quick go over the next week to see if it's something I think students would be able to handle, either as a programming assignment or as an example we build and discuss in class.

I'm pretty excited by the prospect, though. If this works out, it will give me a nice way to sneak basic language processing into the course in a fun way. CS students should see and think about languages and how programs are processed throughout their undergrad years, not only in theory courses and programming languages courses.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

August 03, 2012 3:23 PM

How Should We Teach Algebra in 2012?

Earlier this week, Dan Meyer took to task a New York Times opinion piece from the weekend, Is Algebra Necessary?:

The interesting question isn't, "Should every high school graduate in the US have to take Algebra?" Our world is increasingly automated and programmed and if you want any kind of active participation in that world, you're going to need to understand variable representation and manipulation. That's Algebra. Without it, you'll still be able to clothe and feed yourself, but that's a pretty low bar for an education. The more interesting question is, "How should we define Algebra in 2012 and how should we teach it?" Those questions don't even seem to be on Hacker's radar.

"Variable representation and manipulation" is a big part of programming, too. The connection between algebra and programming isn't accidental. Matthias Felleisen won the ACM's Outstanding Educator Award in 2010 for his long-term TeachScheme! project, which has now evolved into Program by Design. In his SIGCSE 2011 keynote address, Felleisen talked about the importance of a smooth progression of teaching languages. Another thing he said in that talk stuck with me. While talking about the programming that students learned, he argued that this material could be taught in high school right now, without displacing as much material as most people think. Why? Because "This is algebra."

Algebra in 2012 still rests fundamentally on variable representation and manipulation. How should we teach it? I agree with Felleisen. Programming.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

July 23, 2012 3:14 PM

Letting Go of Old Strengths

Ward Cunningham commented on what it's like to be "an old guy who's still a programmer" in his recent Dr. Dobb's interview:

A lot of people think that you can't be old and be good, and that's not true. You just have to be willing to let go of the strengths that you had a year ago and get some new strengths this year. Because it does change fast, and if you're not willing to do that, then you're not really able to be a programmer.

That made me think of the last comment I made in my posts on JRubyConf:

There is a lot of stuff I don't know. I won't run out of things to read and learn and do for a long, long, time.

This is an ongoing theme in the life of a programmer, in the life of a teacher, and the life of an academic: the choice we make each day between keeping up and settling down. Keeping up is a lot more fun, but it's work. If you aren't comfortable giving up what you were awesome at yesterday, it's even more painful. I've been lucky mostly to enjoy learning new stuff more than I've enjoyed knowing the old stuff. May you be so lucky.

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 16, 2012 3:02 PM

Refactoring Everywhere: In Code and In Text

Charlie Stross is a sci-fi writer. Some of my friends have recommended his fiction, but I've not read any. In Writing a novel in Scrivener: lessons learned, he, well, describes what he has learned writing novels using Scrivener, an app for writers well known in the Mac OS X world.

I've used it before on several novels, notably ones where the plot got so gnarly and tangled up that I badly needed a tool for refactoring plot strands, but the novel I've finished, "Neptune's Brood", is the first one that was written from start to finish in Scrivener...

... It doesn't completely replace the word processor in my workflow, but it relegates it to a markup and proofing tool rather than being a central element of the process of creating a book. And that's about as major a change as the author's job has undergone since WYSIWYG word processing came along in the late 80s....

My suspicion is that if this sort of tool spreads, the long-term result may be better structured novels with fewer dangling plot threads and internal inconsistencies. But time will tell.

Stross's lessons don't all revolve around refactoring, but being able to manage and manipulate the structure of the evolving novel seems central to his satisfaction.

I've read a lot of novels that seemed like they could have used a little refactoring. I always figured it was just me.

The experience of writing anything in long form can probably be improved by a good refactoring tool. I know I find myself doing some pretty large refactorings when I'm working on the set of lecture notes for a course.

Programmers and computer scientists have the advantage of being more comfortable writing text in code, using tools such as LaTeX and Scribble, or homegrown systems. My sense, though, is that fewer programmers use tools like this, at least at full power, than might benefit from doing so.

Like Stross, I have a predisposition against using tools with proprietary data formats. I've never lost data stored in plaintext to version creep or application obsolescence. I do use apps such as VoodooPad for specific tasks, though I am keenly aware of the exit strategy (export to text or RTFD) and the pain trade-off at exit (the more VoodooPad docs I create, the more docs I have to remember to export before losing access to the app). One of the things I like most about MacJournal is that it's nothing but a veneer over a set of Unix directories and RTF documents. The flip side is that it can't do for me nearly what Scrivener can do.

Thinking about a prose writing tool that supports refactoring raises an obvious question: what sort of refactoring operations might it provide automatically? Some of the standard code refactorings might have natural analogues in writing, such as Extract Chapter or Inline Digression.

Thinking about automated support for refactoring raises another obvious question, the importance of which is surely as clear to novelists as to software developers: Where are the unit tests? How will we know we haven't broken the story?

I'm not being facetious. The biggest fear I have when I refactor a module of a course I teach is that I will break something somewhere down the line in the course. Your advice is welcome!

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

July 14, 2012 11:01 AM

"Most Happiness Comes From Friction"

Last time, I mentioned again the value in having students learn broadly across the sciences and humanities, including computer science. This is a challenge going in both directions. Most students like to concentrate on one area, for a lot of different reasons. Computer science looks intimidating to students in other majors, perhaps especially to the humanities-inclined.

There is hope. Earlier this year, the Harvard Magazine ran The Frisson of Friction, an essay by Sarah Zhang, a non-CS student who decided to take CS 50, Harvard's intro to computer science. Zhang tells the story of finding a thorny, semicolon-induced bug in a program (an extension for Google's Chrome browser) on the eve of her 21st birthday. Eventually, she succeeded. In retrospect, she writes:

Plenty of people could have coded the same extension more elegantly and in less time. I will never be as good a programmer as -- to set the standard absurdly high -- Mark Zuckerberg. But accomplishments can be measured in terms relative to ourselves, rather than to others. Rather than sticking to what we're already good at as the surest path to résumé-worthy achievements, we should see the value in novel challenges. How else will we discover possibilities that lie just beyond the visible horizon?

... Even the best birthday cake is no substitute for the deep satisfaction of accomplishing what we had previously deemed impossible -- whether it's writing a program or writing a play.

The essay addresses some of the issues that keep students from seeking out novel challenges, such as fear of low grades and fear of looking foolish. At places like Harvard, students who are used to succeeding find themselves boxed in by their friends' expectations, and their own, but those feelings are familiar to students at any school. Then you have advisors who subtly discourage venturing too far from the comfortable, out of their own unfamiliarity and fear. This is a social issue as big as any pedagogical challenge we face in trying to make introductory computer science more accessible to more people.

With work, we can help students feel the deep satisfaction that Zhang experienced. Overcoming challenges often leads to that feeling. She quotes a passage about programmers in Silicon Valley, who thrive on such challenges: "Most happiness probably comes from friction." Much satisfaction and happiness come out of the friction inherent in making things. Writing prose and writing programs share this characteristic.

Sharing the deep satisfaction of computer science is a problem with many facets. Those of us who know the satisfaction know it's a problem worth solving.

Posted by Eugene Wallingford | Permalink | Categories: Computing, General, Teaching and Learning

July 05, 2012 2:48 PM

The Value of a Liberal Education, Advertising Edition

A few days ago, I mentioned James Webb Young's A Technique for Producing Ideas. It turns out that Young was in advertising. He writes:

The construction of an advertisement is the construction of a new pattern in this kaleidoscopic world in which we live. The more of the elements of that world which are stored away in that pattern-making machine, the mind, the more the chances are increased for the production of new and striking combinations, or ideas. Advertising students who get restless about the "practical" value of general college subjects might consider this.

Computer Science students, too.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

July 03, 2012 3:28 PM

A Little Zen, A Little Course Prep

I listened to about 3/4 of Zen and the Art of Motorcycle Maintenance on a long-ish drive recently. It's been a while since I've read the whole book, but I listen to it on tape once a year or so. It always gets my mind in the mood to think about learning to read, write, and debug programs.

This fall, I will be teaching our third course for the first time since I became head seven years ago. In that time, we changed the name of the course from "Object-Oriented Programming" to "Intermediate Computing". In many ways, the new name is an improvement. We want students in this course to learn a number of skills and tools in the service of writing larger programs. At a fundamental level, though, OOP remains the centerpiece of everything we do in the course.

As I listened to Pirsig make his way across the Great Plains, a few ideas stood out as I prepare to teach one of my favorite courses:

The importance of making your own thing, not just imitating others. This is always a challenge in programming courses, but for most people it is essential if we hope for students to maximize their learning. It underlies several other parts of Pirsig's zen and art, such as caring about our artifacts, and the desire to go beyond what something is to what it means.

The value of reading code, both good and bad. Even after only one year of programming, most students have begun to develop a nose for which is which, and nearly all have enough experience that they can figure out the difference with minimal interference from the instructor. If we can get them thinking about what features of a program make it good or bad, we can move on to the more important question: How can we write good programs? If we can get students to think about this, then they can see the "rules" we teach them for what they really are: guidelines, heuristics that point us in the direction of good code. They can learn the rules with an end in mind, and not as an end in themselves.

The value of grounding abstractions in everyday life. When we can ground our classwork in their own experiences, they are better prepared to learn from it. Note that this may well involve undermining their naive ideas about how something works, or turning a conventional wisdom from their first year on its head. The key is to make what they see and do matter to them.

One idea remains fuzzy in my head but won't let me go. While defining the analytic method, Pirsig talks briefly about the difference between analysis by composition and analysis by function. Given that this course teaches object-oriented programming in Java, there are so many ways in which this distinction could matter: composition and inheritance, instance variables and methods, state and behavior. I'm not sure whether there is anything particularly useful in Pirsig's philosophical discussion of this, so I'll think some more about it.

I'm also thinking a bit about a non-Zen idea for the course: Mark Guzdial's method of worked examples and self-explanation. My courses usually include a few worked examples, but Mark has taken the idea to another level. More important, he pairs it with an explicit step in which students explain examples to themselves and others. This draws on results from research in CS education showing that learning and retention are improved when students explain something in their own words. I think this could be especially valuable in a course that asks students to learn a new style of writing code.

One final problem is on my mind right now, a more practical matter: a textbook for the course. When I last taught this course, I used Tim Budd's Understanding Object-Oriented Programming with Java. I have written in the past that I don't like textbooks much, but I always liked this book. I liked the previous multi-language incarnation of the book even more. Unfortunately, one of the purposes of this course is to have students learn Java reasonably well.

Also unfortunate is that Budd's OOP/Java book is now twelve years old. A lot has happened in the Java world in the meantime. Besides, as I found while looking for a compiler textbook last fall, the current asking price of over $120 seems steep -- especially for a CS textbook published in 2000!

So I persist in my quest. I'd love to find something that looks like it is from this century, perhaps even reflecting the impending evolution of the textbook we've all been anticipating. Short of that, I'm looking for a modern treatment of both OO principles and Java.

Of course, I'm a guy who still listens to books on tape, so take my sense of what's modern with a grain of salt.

As always, any pointers and suggestions are appreciated.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

June 26, 2012 4:23 PM

Adventures in Advising

Student brings me a proposed schedule for next semester.

Me: "Are you happy with this schedule?"

Student: "If I weren't, why would I have made it?"

All I can think is, "Boy, are you gonna have fun as a programmer."

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

June 20, 2012 4:54 PM

Becoming Engrossed

Moneyball author Michael Lewis gave the graduation address at his alma mater, Princeton, this spring. Unlike so many others, his address is short and to the point. He wants us to remember the role that luck and accident play in our lives, and not to assume that every time life presents us with a gift we deserve it. It's worth a quick read.

The teacher in me was struck by a line about something else. It appears in the background story that describes Lewis's own good fortune. As an undergrad, Lewis wrote his senior thesis under the direction of archaeologist William Childs, about whom he says:

God knows what Professor Childs actually thought of [my thesis], but he helped me to become engrossed.

"He helped me to become engrossed." What a fine compliment to pay a teacher.

It's not easy to help students become engrossed in a project, a topic, or a discipline. It requires skill. I think I'm pretty good at working with students who are already engrossed, but then so are a lot of people, I imagine. These students make it easy for us.

I want to get better at helping students become engrossed, to help light the new fire. I try all the time, and every once in a while I succeed. I'd like to be more reliable at it.

Whatever else goes into this skill, I'm pretty sure that connecting with students and their interests is usually a good first step, and that being curious myself is a good next step. I also think it helps for students to see me engrossed with a problem. In fact, being engrossed is almost certainly more important than being engrossing.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

June 19, 2012 3:04 PM

Basic Arithmetic, APL-Style, and Confident Problem Solvers

After writing last week about a cool array manipulation idiom, motivated by APL, I ran across another reference to "APL style" computation yesterday while catching up with weekend traffic on the Fundamentals of New Computing mailing list. And it was cool, too.

Consider the sort of multi-digit addition problem that we all spend a lot of time practicing as children:

        365
     +  366

The technique requires converting two-digit sums, such as 5 + 6 = 11 in the rightmost column, into a units digit and carrying the tens digit into the next column to the left. The process is straightforward but creates problems for many students. That's not too surprising, because there is a lot going on in a small space.

David Leibs described a technique, which he says he learned from something Kenneth Iverson wrote, that approaches the task of carrying somewhat differently. It takes advantage of the fact that a multi-digit number is a vector of digits times a vector of powers of ten.

First, we "spread the digits out" and add them, with no concern for overflow:

        3   6   5
     +  3   6   6
        6  12  11

Then we normalize the result by shifting carries from right to left, "in fine APL style".

        6  12  11
        6  13   1
        7   3   1

According to Leibs, Iverson believed that this two-step approach was easier for people to get right. I don't know if he had any empirical evidence for the claim, but I can imagine why it might be true. The two-step approach separates into independent operations the tasks of addition and carrying, which are conflated in the conventional approach. Programmers call this separation of concerns, and it makes software easier to get right, too.
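The two-step approach translates almost directly into code. Here is a minimal Python sketch of my own (not from Leibs or Iverson), assuming for simplicity that both numbers have the same number of digits:

```python
def spread(n):
    """Spread a number into a vector of digits, e.g. 365 -> [3, 6, 5]."""
    return [int(d) for d in str(n)]

def normalize(digits):
    """Shift carries from right to left until every entry is a single digit."""
    digits = digits[:]
    for i in range(len(digits) - 1, 0, -1):
        carry, digits[i] = divmod(digits[i], 10)
        digits[i - 1] += carry
    # the leftmost entry may still exceed 9; peel extra digits off the front
    while digits[0] > 9:
        carry, digits[0] = divmod(digits[0], 10)
        digits.insert(0, carry)
    return digits

def add(a, b):
    # step 1: add digit-wise with no concern for overflow
    summed = [x + y for x, y in zip(spread(a), spread(b))]
    # step 2: normalize the carries
    return normalize(summed)

print(add(365, 366))   # -> [7, 3, 1]
```

The addition and the carrying live in separate functions, which is exactly the separation of concerns described above.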

Multiplication can be handled in a conceptually similar way. First, we compute an outer product by building a digit-by-digit times table for the digits:

     |   |  3  6  6|
     | 3 |  9 18 18|
     | 6 | 18 36 36|
     | 5 | 15 30 30|

This is straightforward, simply an application of the basic facts that students memorize when they first learn multiplication.

Then we sum the diagonals running southwest to northeast, again with no concern for carrying:

     (9) (18+18) (18+36+15) (36+30) (30)
      9      36         69      66   30

In the traditional column-based approach, we do this implicitly when we add staggered columns of digits, only we have to worry about the carries at the same time -- and now the carry digit may be something other than one!

Finally, we normalize the resulting vector right to left, just as we did for addition:

         9  36  69  66  30
         9  36  69  69   0
         9  36  75   9   0
         9  43   5   9   0
        13   3   5   9   0
     1   3   3   5   9   0

Again, the three components of the solution are separated into independent tasks, enabling the student to focus on one task at a time, using for each a single, relatively straightforward operator.
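The multiplication recipe can be sketched the same way. Again this is my own illustration, not Iverson's code; the outer product, the diagonal sums, and the normalization each get their own step:

```python
def spread(n):
    """Spread a number into a vector of digits, e.g. 365 -> [3, 6, 5]."""
    return [int(d) for d in str(n)]

def multiply(a, b):
    da, db = spread(a), spread(b)
    # step 1: the outer product, a digit-by-digit times table
    table = [[x * y for y in db] for x in da]
    # step 2: sum the southwest-to-northeast diagonals, no carrying yet
    sums = [0] * (len(da) + len(db) - 1)
    for i, row in enumerate(table):
        for j, value in enumerate(row):
            sums[i + j] += value
    # step 3: normalize the carries right to left, as in addition
    for k in range(len(sums) - 1, 0, -1):
        carry, sums[k] = divmod(sums[k], 10)
        sums[k - 1] += carry
    while sums[0] > 9:
        carry, sums[0] = divmod(sums[0], 10)
        sums.insert(0, carry)
    return sums

print(multiply(365, 366))  # -> [1, 3, 3, 5, 9, 0], i.e., 133590
```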

(Does this approach remind some of you of Cannon's algorithm for matrix multiplication in a two-dimensional mesh architecture?)

Of course, Iverson's APL was designed around vector operations such as these, so it includes operators that make implementing such algorithms as straightforward as the calculate-by-hand technique. Three or four Greek symbols and, voilà, you have a working program. If you are Dave Ungar, you are well on your way to a compiler!

the cover of High-Speed Math Self-Taught, by Lester Meyers

I have a great fondness for alternative ways to do arithmetic. One of the favorite things I ever got from my dad was a worn copy of Lester Meyers's High-Speed Math Self-Taught. I don't know how many hours I spent studying that book, practicing its techniques, and developing my own shortcuts. Many of these techniques have the same feel as the vector-based approaches to addition and multiplication: they seem to involve more steps, but the steps are simpler and easier to get right.

A good example of this I remember learning from High-Speed Math Self-Taught is a shortcut for multiplying a number by 12.5: first multiply by 100, then divide by 8. How can a multiplication and a division be faster than a single multiplication? Well, multiplying by 100 is trivial: just add two zeros to the number, or shift the decimal point two places to the right. The division that remains involves a single-digit divisor, which is much easier than multiplying by a three-digit number in the conventional way. The three-digit number even has its own decimal point, which complicates matters further!
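As a quick check of the arithmetic behind the trick (my own illustration, not from Meyers's book), note that multiplying by 100 and dividing by 8 gives the same answer as multiplying by 12.5:

```python
# N * 12.5 == N * 100 / 8: shift the decimal point, then divide by one digit.
n = 48
shifted = n * 100        # trivial: append two zeros
print(shifted // 8)      # -> 600, the same as 48 * 12.5
```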

To this day, I use shortcuts that Meyers taught me whenever I'm updating the balance in my checkbook register, calculating a tip in a restaurant, or doing any arithmetic that comes my way. Many people avoid such problems, but I seek them out, because I have fun playing with the numbers.

I am able to have fun in part because I don't have to worry too much about getting a wrong answer. The alternative technique allows me to work not only faster but also more accurately. Being able to work quickly and accurately is a great source of confidence. That's one reason I like the idea of teaching students alternative techniques that separate concerns and thus offer hope for making fewer mistakes. Confident students tend to learn and work faster, and they tend to enjoy learning more than students who are handcuffed by fear.

I don't know if anyone has tried teaching Iverson's APL-style basic arithmetic to children to see if it helps them learn faster or solve problems more accurately. Even if not, it is both a great demonstration of separation of concerns and a solid example of how thinking about a problem differently opens the door to a new kind of solution. That's a useful thing for programmers to learn.


Postscript. If anyone has a pointer to a paper or book in which Iverson talks about this approach to arithmetic, I would love to hear from you.

IMAGE: the cover of Meyers's High-Speed Math Self-Taught, 1960. Source:

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

May 30, 2012 4:11 PM

Learning About Classroom Teaching from Teaching On-Line

In his blog entry Recording a Class at Udacity, John Regehr offers some initial impressions on the process. For example, he liked the bird's eye view he had of the course as a whole over the compressed production schedule:

Recording for 8-12 hours a day was intense and left me fried, but on the other hand this has advantages. When spreading course prep across an entire semester, it's sometimes hard to see the big picture and there are often some unfortunate compromises due to work travel and paper deadlines.

But the following lesson stood out to me, due to my own experience learning how to teach:

... it became clear that designing good programming quizzes is one of the keys to turning lecture material into actual learning.

I think this is also true of traditional classroom teaching!

In the classroom, though, there are so many ways for us to fool ourselves. We tell a good story and feel good about it. Students seem to be paying attention, nodding their heads knowingly at what seem to be appropriate moments. That makes us feel good, too. Surely they are learning what we are teaching. Right?

In all honesty, we don't know. But we all feel good about the lecture, so we leave the room thinking learning has taken place.

On-line teaching has the advantage of not providing the same cues. Students may be sitting in their dorm rooms nodding their heads enthusiastically, or not. We may be connecting with everyone, or not. We can't see any of that, so it becomes necessary to punctuate our lecture -- now a sequence of mini-lectures -- and demos with quizzes. And the data speak truth.

The folks at Udacity have figured out that they can improve student learning by integrating listening and doing. Hurray!

Regehr suggests:

Tight integration between listening and hacking is one of the reasons that online learning will -- in some cases -- end up being superior to sitting in class.

I'll suggest something else: We should be doing that in our classrooms, too.

Rather than lecturing for fifty minutes and assuming (or hoping) that students are learning only by listening, we should spend time designing good programming activities and quizzes that lead students through the material and then integrate these tightly into a cycle of listening and doing.

This is an example of how teaching on-line may help some instructors become better classroom teachers. The constraints it imposes on teacher-student interaction cause us to pay closer attention to what is really happening with our students. That's a good thing.

As more courses move on-line, I think that we will all be re-learning and re-discovering many pedagogical patterns, instantiated in a new teaching environment. If that helps us to improve our classroom teaching, too, all the better.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

May 15, 2012 3:22 PM

What Teachers Make

Last night I attended my daughter's high school orchestra concert. (She plays violin.) Early on I found myself watching the conductor rather than the performers. He was really into the performance, as many conductors are. He's a good teacher and gets pretty good sound out of a few dozen teenagers. Surely he must be proud of their performance, and at least a little proud of his own.

Maybe it's just the end of another academic year, but my next thought was, "This concert will be over in an hour." My mind flashed to a movie from the 1990s, Mr. Holland's Opus. What does the conductor feel like when it's over? Is there a sense of emptiness? What does he think about, knowing that he'll be doing this all again next year, just as he did last year? The faces will change, and maybe the musical selections, but the rest will be eerily familiar.

Then it occurred to me: This is the plight of every teacher. It is mine.

Sometimes I envy people who make things for a living. They create something that people see and use. In the case of software, they may have the opportunity to grow their handiwork, to sustain it. It's tangible. It lasts, at least for a while.

Teachers live in a different world. I think about my own situation, teaching one class a semester, usually in our upper division. Every couple of years, I see a new group of students. I have each of them in class once or twice, maybe even a few times. Then May comes, and they graduate.

To the extent that I create anything, it resides in someone else. In this way, being a teacher is less like being a writer or a creator and more like being a gardener. We help prepare others to make and do.

Like gardeners, we plant seeds. Some fall on good soil and flourish. Some fall on rocks and die. Sometimes, you don't even know which is which; you find out only years later. I have been surprised in both ways, more often pleasantly than not.

Sure, we build things, too. We CS profs write software. We university profs build research programs. These are tangible products, and they last for a while.

(We administrators create documents and spreadsheets. Let's not go there.)

But these products are not our primary task, at least not at schools like mine. It is teaching. We help students exercise their minds and grow their skills. If we are lucky, we change them in ways that go beyond our particular disciplines.

There is a different sort of rhythm to being a teacher than to being a maker. You need to be able to delay gratification, while enjoying the connections you make with students and ideas. That's one reason it's not so easy for just anyone to be a teacher, at least not for an entire career. My daughter's orchestra teacher seems to have that rhythm. I have been finding this rhythm over the course of my career, without quite losing the desire also to make things I can touch and use and share.

Posted by Eugene Wallingford | Permalink | Categories: Personal, Teaching and Learning

May 14, 2012 3:26 PM

Lessons Learned from Another Iteration of the Compiler Course

I am putting the wrap on spring semester, so that I can get down to summer duties and prep for fall teaching. Here are a few lessons I learned this spring.

•  A while back, I wrote briefly about re-learning the value of a small source language for the course. If I want to add a construct or concept, then I need also to subtract a corresponding load from the language. In order to include imperative features, I need to eliminate recursive functions, or perhaps eliminate functions altogether. Eliminating recursion may be sufficient, as branching to a function is not much more complex than branching in loops. It is the stack of activation records that seems to slow down most students.

•  Using a free on-line textbook worked out okay. The main problem was that this particular book contained less implementation detail than books we have used in the past, such as Louden, and that hurt the students. We used Louden's TM assembly language and simulator, and the support I gave them for that stage of the compiler in particular was insufficient. The VM and assembly language themselves are simple enough, but students wanted more detailed examples of a code generator than I gave them.

•  I need to teach code generation better. I felt that way as the end of the course approached, and then several students suggested as much in our final session review. This is the most salient lesson I take from this iteration of the course.

I'm not sure at this moment if I need only to do a better job of explaining the process or if I need a different approach to the task more generally. That's something I'll need to think about between now and next time. I do think that I need to show them how to implement function calls in a bit more detail. Perhaps we could spend more time in class with statically-allocated activation records, and then let the students extend those ideas for a run-time stack and recursion.

•  For the first time ever, a few students suggested that I require something simpler than a table-driven parser. Of course, I can address several issues with parsing and code generation by using scaffolding: parser generators, code-generation frameworks and the like. But I still prefer that students write a compiler from scratch, even if only a modest one. There is something powerful in making something from scratch. A table-driven parser is a nice blend of simplicity (in algorithm) and complexity (in data) for learning how compilers really work.

I realize that I have to draw the abstraction line somewhere, and even after several offerings of the course I'm ready to draw it there. To make that work as well as possible, I may have to improve some parts of the course.

•  Another student suggestion that seems spot-on is that, as we learn each stage of the compiler, we take some time to focus on specific design decisions that the teams will have to make. This will allow them, as they said in their write-ups, "to make informed decisions". I do try to introduce key implementation decisions that they face and offer advice on how to proceed. Clearly I can do better. One way, I think, is to connect more directly with the programming styles they are working in.


As usual, the students recognized some of the same shortcomings of the course that I noticed and suggested a couple more that had not occurred to me. I'm always glad I ask for their feedback, both open and anonymous. They are an indispensable source of information about the course.

Writing your first compiler is a big challenge. I can't help but recall something writer Samuel Delany said when asked whether it was fun to write a set of novellas on the order of Eliot's The Waste Land, Pound's The Cantos, and Joyce's Ulysses:

No, not at all. ... But at least now, when somebody asks, "I wonder if Joyce could have done all the things he's supposed to have done in Ulysses," I can answer, "Yes, he could have. I know, because I tried it myself. It's possible."

Whatever other virtues there are in learning to write a compiler, it is valuable for computer science students to take on big challenges and know that it is possible to meet the challenge, because they have tried it themselves.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

May 08, 2012 3:22 PM

Quality and Quantity, Thoroughbred Edition

I'll Have Another was not highly sought after as a yearling, when he was purchased for the relatively small sum of $11,000.

On Saturday, I'll Have Another rallied down the stretch to win the 2012 Kentucky Derby, passing Bodemeister, one of the race favorites, which had led impressively from the gate. Afterward, a television commentator asked the horse's trainer, "What did you and the owner see in the horse way back that made you want to buy it?" The trainer's answer was unusually honest. He said something to the effect of:

We buy a lot of horses. Some work out, and some don't. There is a lot of luck involved. You do the right things and see what happens.

This is as good an example as I've heard in a while of the relationship between quantity and quality, which my memory often connects with stories from the book Art and Fear. People are way too fond of mythologizing successes and then romanticizing the processes that lead to them. In most vocations and most avocations, the best way to succeed is to do the right things, to work hard, be unlucky a lot, and occasionally get lucky.

This mindset does not diminish the value of hard work and good practices. No, it exalts their value. What it diminishes is our sense of control over outcomes in a complex world. Do your best and you will get better. Just keep in mind that we often have a lot less control over success and failure than our mythology tends to tell us.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

May 07, 2012 3:21 PM

The University as a Gym for the Mind

In recent years, it has become even more common for people to think of students as "paying customers" at the university. People inside universities, especially the teachers, have long tried to discourage this way of thinking, but it is becoming much harder to make the case. Students and parents are being required to shoulder an ever larger share of the bill for higher education, and with that comes a sense of ownership. Still, educators can't help but worry. The customer isn't always right.

Rob Miles relates a story that might help us make the case:

the gym: a university for the body

You can join a gym to get fit, but just joining doesn't make you fit. It simply gives you access to machinery and expertise that you can use to get fit. If you fail to listen to the trainer or make use of the equipment then you don't get a better body, you just get poorer.

You can buy all the running shoes you like, but if you never lace them up and hit the road, you won't become a runner.

I like this analogy. It also puts into perspective a relatively recent phenomenon, the assertion that software developers may not need a university education. Think about such an assertion in the context of physical fitness:

A lot of people manage to get in shape physically without joining a gym. To do so, all you need is the gumption (1) to learn what you need to do and (2) to develop and stick to a plan. For example, there is a lot of community support among runners, who are willing to help beginners get started. As runners become part of the community, they find opportunities to train in groups, share experiences, and run races together. The result is an informal education as good as most people could get by paying a trainer at a gym.

The internet and the web have provided the technology to support the same sort of informal education in software development. Blogs, user groups, codeathons, and GitHub all offer the novice opportunities to get started, "train" in groups, share experiences, and work together. With some gumption and hard work, a person can become a pretty good developer on his or her own.

But it takes a lot of initiative. Not all people who want to get in shape are ready or able to take control of their own training. A gym serves the useful purpose of getting them started. But each person has to do his or her own hard work.

Likewise, not all learners are ready to manage their own educations and professional development -- especially at age 18, when they come out of a K-12 system that can't always prepare them to be completely independent learners. Like a gym, a university serves the useful purpose of helping such people get started. And just as important, as at the gym, students have to do their own hard work to learn, and to prepare to learn on their own for the rest of their careers.

Of course, other benefits may get lost when students bypass the university. I am still idealistic enough to think that a liberal education, even a liberal arts education, has great value for all people. [ 1 | 2 | 3 ]. We are more than workers in an economic engine. We are human beings with a purpose larger than our earning potentials.

But the economic realities of education these days and the concurrent unbundling of education made possible by technology mean that we will have to deal with issues such as these more and more in the coming years. In any case, perhaps a new analogy might help us help people outside the university understand better the kind of "customer" our students need to be.

(Thanks to Alfred Thompson for the link to Miles's post.)

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

May 05, 2012 11:53 AM

On the Virtues of a Small Source Language in the Compiler Course

I have not finished grading my students' compilers yet. I haven't even looked at their public comments about the project. (Anonymous feedback comes later in the summer when course assessment data arrives.) Still, one lesson has risen to the surface:

Keep the source language small. No, really.

I long ago learned the importance of assigning a source language small enough to be scanned, parsed, and translated completely in a single semester. Over the years, I had pared the languages I assigned down to the bare essentials. That leaves a small language, one that creates some fun programming challenges. But it's a language that students can master in fifteen weeks.

My students this term were all pretty good programmers, and I am a weak man. So I gave in to the temptation to add just a few more features to the language, to make it a bit more interesting for my students: variables, an assignment statement, a sequence construct, and a single loop form. It was as if I had learned nothing from all my years teaching this course.

The effect of processing a larger language manifested itself in an expected way: the more students have to do, the more likely it is that they won't get it all done. This affected a couple of the teams more than the others, but it wasn't so bad. It meant that some teams didn't get as far along with function calls and with recursion as we had hoped. Getting a decent subset of such a language running is still an accomplishment for students.

But the effect of processing a larger language manifested itself in a way I did not expect, too, one more detrimental to student progress: a "hit or miss" quality to the correctness of their implementations. One team had function calls mostly working, but not recursion. Another team had tail recursion mostly working(!), but ordinary function calls didn't work well. One team had local vars working fine but not global variables, while most teams knocked out globals early and, if they struggled at all, it was with locals.

The extra syntactic complexity in the language created a different sort of problem for the teams.

A single new language feature doesn't seem like much in isolation, but it interacts with all the existing features and all the other new features to create a much more complex language for the students to understand and for the parser to recognize and get right. Sure, our language had regular tokens and a context-free grammar, which localizes the information the scanner and parser need to see in order to do their jobs. Like all of us, though, students make errors when writing their code. In the more complex space, it is harder to track down the root cause of an error, especially when there are multiple errors present to complicate the search. (Or should I say complect?)

This is an important lesson in language design more generally, especially for a language aimed at beginners. But it also stands out when a compiler for the language is being written by beginning compiler writers.

I am chastened and will return to the True Path of Small Language the next time I teach this course.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

April 24, 2012 4:55 PM

Recursive Discussions about Recursion

The SIGCSE listserv has erupted today with its seemingly annual discussion of teaching recursion. I wrote about one of the previous discussions a couple of years ago. This year's conversation has actually included a couple of nice ideas, so it was worth following along.

Along the way, one prof commented on an approach he has seen used to introduce students to recursion, often in a data structures course. First you cover factorial, gcd, and the Fibonacci sequence. Then you cover the Towers of Hanoi and binary search. Unfortunately, such an approach is all too common. The poster's wistful analysis:

Of the five problems, only one (binary search) is a problem students might actually want to solve. Only two (Fibonacci and Hanoi) are substantially clearer in recursive than iterative form, and both of them take exponential time. In other words, recursion is a confusing way to solve problems you don't care about, extremely slowly.

Which, frankly, I think is the lesson [some CS profs] want to convey.

And this on a day when I talked with my compiler students about how a compiler can transform many recursive programs into iterative ones, and even eliminate the cost of a non-recursive function call when it is in a tail position.
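For the curious, the transformation we discussed can be sketched by hand. This is a hypothetical illustration in modern Java, not code from the course: a tail-recursive function and the loop a compiler can turn it into, by replacing the tail call with an assignment to the parameters and a jump back to the top.

```java
public class TailCall {
    // Tail-recursive factorial with an accumulator: the recursive call
    // is the last action taken, so no work remains after it returns.
    static long factRec(long n, long acc) {
        if (n <= 1) return acc;
        return factRec(n - 1, acc * n);   // call in tail position
    }

    // The iterative form a compiler can produce from the tail call:
    // rebind the parameters and loop instead of calling.
    static long factIter(long n) {
        long acc = 1;
        while (n > 1) {
            acc *= n;
            n -= 1;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(factRec(10, 1));   // 3628800
        System.out.println(factIter(10));     // 3628800
    }
}
```

The two functions compute the same values; only the second allocates no stack frames for the "calls".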

The quoted passage contains my favorite line of the week thus far: In other words, recursion is a confusing way to solve problems you don't care about, extremely slowly. If that's not the message you want to convey to your students, then please don't introduce them to recursion in this way. If that is the message you want to send to your students, then I am sad for you, but mostly for your students.

I sometimes wonder about the experiences that some computer scientists bring to the classroom. It only takes a little language processing to grok the value of recursion. And a data structures course is a perfectly good place for students to see and do a little language processing. Syntax, abstract or not, is a great computer science example of trees. And students get to learn a little more CS to boot!
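To see why, here is a minimal sketch in modern Java (all names hypothetical) of the kind of exercise I have in mind: evaluating a tiny tree of arithmetic expressions. The recursive solution mirrors the recursive structure of the data, which is exactly where recursion earns its keep.

```java
import java.util.function.LongBinaryOperator;

public class ExprDemo {
    // A tiny expression language: numbers and binary operations.
    // The syntax tree is recursive, so evaluation is naturally recursive.
    interface Expr { long eval(); }

    record Num(long value) implements Expr {
        public long eval() { return value; }
    }

    record BinOp(Expr left, Expr right, LongBinaryOperator op) implements Expr {
        public long eval() { return op.applyAsLong(left.eval(), right.eval()); }
    }

    public static void main(String[] args) {
        // (2 + 3) * 4
        Expr e = new BinOp(new BinOp(new Num(2), new Num(3), Long::sum),
                           new Num(4), (a, b) -> a * b);
        System.out.println(e.eval());   // 20
    }
}
```

No factorial, no Towers of Hanoi: a problem students might actually want to solve, where the iterative version would be the confusing one.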

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 07, 2012 7:08 PM

Teaching the Perfect Class, Again

Some days, I walk out of the classroom thinking, "Well, that didn't go the way I wanted." I'm aware of everything I did wrong, and my mind begins swirling with ideas for next time -- both the next class session and the next time I teach the course. I won't make the same mistakes again, or so I think.

Other days are just the opposite. The stars aligned, and the session seemed to have gone perfectly. My questions provoked discussion. My examples caused every head to nod in appreciation. My jokes brought laughs, not eye rolls.

Ironically, those days are harder to follow. There is a temptation to try to repeat perfection, but that rarely works. Whenever I try, my timing seems off. When my questions, examples, and jokes don't elicit the same responses as the first time, I am confused. I end up trying too hard.

Teachers aren't the only people who face this problem. In this article about the science of music meeting the mind, Yo-Yo Ma describes why there are no immutable laws for expressive performance:

"Every day I'm a slightly different person," Mr. Ma said. "The instrument, which is sensitive to weather and humidity changes, will act differently. There's nothing worse than playing a really a great concert and the next day saying, 'I'm going to do exactly the same thing.' It always falls flat."

Most of the time, it is easier to fix broken things than it is to improve on good ones. Brokenness gives us cues about what to do next. Wholeness doesn't. Trying to repeat perfection in a world that always changes usually leaves us dissatisfied.

So: Treat each performance, each class session, as a chance to create, not maintain. Use ideas that have worked in the past, of course, but use them to create something new, not to try to re-create something that no longer exists.

Fortunately for me, I have far more imperfect days than perfect ones.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 25, 2012 11:18 AM

The Relationship at the Center of the Classroom

Sometimes, people think that the most important relationship in a classroom is the one between the student and the teacher. But it's not. The most important relationship in a classroom is the one between the student and the ideas that make up the course.

The teacher's job isn't to tell the student what to think. It is more the role of a matchmaker: to introduce the student to the ideas and to stir up interest. To stoke the fire and keep it burning when interest wanes. And to throw a log on the fire occasionally so that the young love can grow bigger and stronger.

The center of it all is the relationship between the student and the ideas, the discipline. The teacher's most important lasting effect is in the strength of that relationship.

Students who don't understand this never seem to realize that it is their work and their interest that make learning possible. Ultimately, students make a course successful, or not.

Teachers who don't understand this, or who forget over the course of a career, are easily disillusioned. Sometimes they think their most important job is to relay more knowledge to their students. It's usually more important to provoke the students to engage a few powerful ideas and let them seek out what they need when they need it.

Other times, teachers come almost to depend on their relationship with the students. They feel empty when the connection seems lacking. In most such cases, the best way to solve that problem isn't to work on the teacher-student connection, but to work on the connection between students and ideas. Ironically, the best way to improve that connection is often for teachers to reinvigorate their interest and passion in the ideas they think about and teach.

Occasionally something more happens. Teachers and students become fellow travelers on a journey, and that fellowship can outlast a single course or a four-year program of study. Sometimes, lifelong friendships develop. But that relationship is distinct from what happens between student, teacher, and ideas.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 18, 2012 5:20 PM

Thinking Out Loud about the Compiler in a Pure OO World

John Cook pokes fun at OO practice in his blog post today. The "Obviously a vast improvement." comment deftly ignores the potential source of OOP's benefits, but then that's the key to the joke.

A commenter points to a blog entry by Smalltalk veteran Travis Griggs. I agree with Griggs's recommendation to avoid using verbs-turned-into-nouns as objects, especially lame placeholder words such as "manager" and "loader". As he says, they usually denote objects that fiddle with the private parts of other objects. Those other objects should be providing the services we need.

Griggs allows reasonable linguistic exceptions to the advice. But he also acknowledges the pull of pop culture which, given my teaching assignment this semester, jumped out at me:

There are many 'er' words that despite their focus on what they do, have become so commonplace, that we're best to just stick with them, at least in part. Parser. Compiler. Browser.

I've thought about this break in my own OO discipline before, and now I'm thinking about it again. What would it be like to write compilers without creating parsers and code generators -- and compilers themselves -- as objects?

We could ask a program to compile itself:

     program.compileTo( targetMachine )
But is the program a program, or does it start life as a text file? If the program starts as a text file, perhaps we say

     ast = textfile.parse()

to create an abstract syntax tree, which we then could ask

     ast.compileTo( targetMachine )

(Instead of sending a parse() message, we might send an asAbstractSyntax() message. There may be no functional difference, but I think the two names indicate subtle differences in mindset.)

When my students write their compilers in Java or another OO language, we discuss in class whether abstract syntax trees ought to be able to generate target code for themselves. The danger lies in binding the AST class to the details of a specific target machine. We can separate the details of the target machine from the AST by passing an argument with the compileTo() message, but what should that argument be?

Given all the other things we have to learn in the course, my students usually end up following Griggs's advice and doing the traditional thing: pass the AST as an argument to a CodeGenerator object. If we had more time, or a more intensive OO design course prior to the compilers course, we could look at techniques that enable a more OO approach without making things worse in the process.
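As a sketch of the trade-off, here is one way the more OO approach might look in modern Java. The names are hypothetical, not my students' actual design: the AST asks a code generator for target instructions through an interface, so the tree never names a specific machine.

```java
public class CompileDemo {
    // The AST sends instructions to whatever generator it is given,
    // keeping machine details behind an interface.
    interface CodeGenerator { void emit(String instruction); }

    interface Ast { void compileTo(CodeGenerator gen); }

    record Literal(int value) implements Ast {
        public void compileTo(CodeGenerator gen) { gen.emit("push " + value); }
    }

    record Add(Ast left, Ast right) implements Ast {
        public void compileTo(CodeGenerator gen) {
            left.compileTo(gen);    // post-order traversal: operands first,
            right.compileTo(gen);   // then the operator, as on a stack machine
            gen.emit("add");
        }
    }

    public static void main(String[] args) {
        Ast tree = new Add(new Literal(1), new Literal(2));
        tree.compileTo(System.out::println);
    }
}
```

The tree still "compiles itself", but swapping target machines means swapping the CodeGenerator passed in, not editing the AST classes.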

Looking back farther to the parse behavior, would it ever make sense to send an argument with the parse() message? Perhaps a parse table for an LL(1) or LR(1) algorithm? Or the parsing algorithm itself, as a strategy object? We quickly run the risk of taking steps in the direction that Cook joshes about in his post.

Or perhaps parsing is a natural result of creating a Program object from a piece of text. In that approach, when we say

     Program source = new Program( textfile );

the internal state of source is an abstract syntax tree. This may sound strange at first, but a program isn't really a piece of text. It's just that we are conditioned to think that way by the languages and tools most of us learn first and use most heavily. Smalltalk taught us long ago that this viewpoint is not necessary. (Lisp, too, though in a different way.)

These are just some disconnected thoughts on a Sunday afternoon. There is plenty more to say, and plenty of prior art. I think I'll fire up a Squeak image sometime soon and spend some time reacquainting myself with its take on parsing and code generation, in particular the way it compiles its core out to C.

I like doing this kind of "what if?" thinking. It's fun to explore along the boundary between the land where OOP works naturally and the murky territory where it doesn't seem to fit so well. That's a great place to learn new stuff.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 13, 2012 8:00 PM

The Writer's Mindset for Programmers

Several people have pointed out that these tips on writing from John Steinbeck are useful for programmers; Chris Freeman even mapped them to writing code. I like to make such connections, most recently to the work of Roger Rosenblatt (in several entries, including The Summer Smalltalk Taught Me OOP) and John McPhee, a master of non-fiction (in an entry on writing, teaching, and programming). Lately I have been reading about and from David Foster Wallace, as I wrote a few weeks ago. Several quotes from interviews he gave in the 1990s and 2000s reminded me of programming, both doing it and learning to do it.

The first ties directly into the theme from the entry on my summer of Smalltalk. As Wallace became more adept at making the extensive cuts to his wide-ranging stories suggested by his editor, he adopted a familiar habit:

Eventually, he learned to erase passages that he liked from his hard drive, in order to keep himself from putting them back in.

It's one thing to kill your darlings. It's another altogether to keep them from sneaking back in. In writing as in programming, sometimes rm -r *.* is your friend.

A major theme in Wallace's work -- and life -- was the struggle not to fall into the comfortable patterns of thought engendered by the culture around us. The danger is, those comfortable ruts separate us from what is real:

Most 'familiarity' is mediated and delusive.

Programmers need to keep this in mind when they set out to learn a new programming language or a new style of programming. We tend to prefer the familiar, whether it is syntax or programming model. Yet familiarity is conditioned by so many things, most prominently recent experience. It deludes us into thinking some things are easier or better than others, often for no other reason than the accident of history that brought us to a particular language or style first. When we look past the experience that gets in the way, we enable ourselves to appreciate the new thing as it is, not as the lens of our experience distorts it.

Of course, that's easier said than done. This struggle consumed Wallace the writer his entire life.

Even so, we don't want to make the mistake of floating along the surface of language and style. Sometimes, we think that makes us free to explore all ideas unencumbered by commitment to any particular syntax, programming model, or development methodology. But it is by focusing our energies and thinking, by using specific tools, writing in a specific cadre of languages, and working in a particular style, that we enable ourselves to create, to do something useful:

If I wanted to matter -- even just to myself -- I would have to be less free, by deciding to choose in some kind of definite way.

This line is a climactic revelation of the protagonist in Wallace's posthumously published unfinished novel, The Pale King. It reminds us that freedom is not always so free.

It is much more important for a programmer to be a serial monogamist than a confirmed bachelor. Digging deep into language and style is what makes us stronger, whatever language or style we happen to work in at any point in time. Letting comfortable familiarity mediate our future experiences is simply a way of enslaving ourselves to the past.

In the end, reading Wallace's work and the interviews he gave shows us again that writers and programmers have a lot in common. Even after we throw away all the analogies between our practices, processes, and goals, we are left with an essential identity that we programmers share with our fellow writers:

Writing fiction takes me out of time. That's probably as close to immortal as we'll ever get.

Wallace said this in the first interview he gave after the publication of his first novel. It is a feeling I know well, and one I never want to live without.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

March 09, 2012 3:33 PM

This and That from Douglas Hofstadter's Visit

Update: In the original, I conflated two quotes in "Food and Hygiene". I have un-conflated them.

In addition to his lecture on Gödel's incompleteness theorem, Douglas Hofstadter spent a second day on campus, leading a seminar and giving another public talk. I'll blog on those soon. In the meantime, here are a few random stories I heard and impressions I formed over the two days.

The Value of Good Names.   Hofstadter told a story about his "favorite chapter on Galois theory" (don't we all have one?), from a classic book that all the mathematicians in the room recognized. The only thing Hofstadter didn't like about this chapter was that it referred to theorems by number, and he could never remember which theorem was which. That made an otherwise good text harder to follow than it needed to be.

In contrast, he said, was a book by Galyan that gave each theorem a name, a short phrase evocative of what the theorem meant. So much better for the reader! So he gave his students one semester an exercise to make his favorite chapter better: they were to give each of the numbered theorems in the chapter an evocative name.

This story made me think of my favorite AI textbook, Patrick Henry Winston's Artificial Intelligence. Winston's book stands out from the other AI books as quirky. He uses his own vocabulary and teaches topics very much in the MIT AI fashion. But he also gives evocative names to many of the big ideas he wants us to learn, among them the representation principle, the principle of least commitment, the diversity principle, and the eponymous "Winston's principle of parallel evolution". My favorite of all is the convergent intelligence principle:

The world manifests constraints and regularities. If a computer is to exhibit intelligence, it must exploit those constraints and regularities, no matter of what the computer happens to be made.

To me, that is AI.

Food and Hygiene.   The propensity of mathematicians to make their work harder for other people to understand, even other mathematicians, reminded Doug Shaw of two passages, from famed mathematicians Gian-Carlo Rota and André Weil. Rota said that we must guard ... against confusing the presentation of mathematics with the content of mathematics. More colorfully, Weil cautioned [If] logic is the hygiene of the mathematician, it is not his source of food. Theorems, proofs, and Greek symbols are mathematical hygiene. Pictures, problems, and understanding are food.

A Good Gig, If You Can Get It.   Hofstadter holds a university-level appointment at Indiana, and his research on human thought and the fluidity of concepts is wide enough to include everything under the sun. Last semester, he taught a course on The Catcher in the Rye. He and his students read the book aloud and discussed what makes it great. Very cool.

If You Need a Research Project...   At some time in the past, Hofstadter read, in a book or article about translating natural language into formal logic, that 'but' is simply a trivial alternative to 'and' and so can be represented as such. "Nonsense", he said! 'but' embodies all the complexity of human thought. "If we could write a program that could use 'but' correctly, we would have accomplished something impressive."

Dissatisfied.   Hofstadter uses that word a lot in conversation, or words like it, such as 'unsatisfying'. He does not express the sentiment in a whiny way. He says it in a curious way. His tone always indicates a desire to understand something better, to go deeper to the core of the question. That's a sign of a good researcher and a deep thinker.


Let's just say that this was a great treat. Thanks to Dr. Hofstadter for sharing so much time with us here.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Patterns, Teaching and Learning

February 19, 2012 12:17 PM

The Polymorphism Challenge

Back at SIGCSE 2005, Joe Bergin and I ran a workshop called The Polymorphism Challenge that I mentioned at the time but never elaborated on. It's been on my mind again for the last week. First I saw a link to an OOP challenge aimed at helping programmers move toward OO's ideal of small classes and short methods. Then Kent Beck tweeted about the Anti-IF Campaign, which, as its name suggests, wants to help programmers "avoid dangerous ifs and use objects to build a code that is flexible, changeable, and easily testable".

That was the goal of The Polymorphism Challenge. I decided it was time to write up the challenge and make our workshop materials available to everyone.


Since the mid-1990s, Joe and I have been part of a cabal of CS educators trying to teach object-oriented programming style better. We saw dynamic polymorphism as one of the key advantages to be found in OOP. Getting procedural programmers to embrace it, including many university instructors, was a big challenge.

At ChiliPLoP 2003, our group was riffing on the idea of extreme refactoring, during which Joe and I created a couple of contrived examples eliminating if statements from a specific part of Karel the Robot that seemed to require them.

This led Joe to propose a programming exercise he called an étude, similar to what these days are called katas, which I summarized in Practice for Practice's Sake:

Write a particular program with a budget of n if-statements or fewer, for some small value of n. Forcing oneself to not use an if statement wherever it feels comfortable forces the programmer to confront how choices can be made at run-time, and how polymorphism in the program can do the job. The goal isn't necessarily to create an application to keep and use. Indeed, if n is small enough and the task challenging enough, the resulting program may well be stilted beyond all maintainability. But in writing it the programmer may learn something about polymorphism and when it should be used.

Motivated by the Three Bears pattern, Joe and I went a step further. Perhaps the best way to know that you don't need if-statements everywhere is not to use them anywhere. Turn the dials to 11 and make 'em all go away! Thus was born the challenge, as a workshop for CS educators at SIGCSE 2005. We think it is useful for all programmers. Below are the materials we used to run the workshop, with only light editing.


Working in pairs, you will write (or re-write) simple but complete programs that normally use if statements, to completely remove all selection structures in favor of polymorphism.


The purpose of this exercise is not to demonstrate that if statements are bad, but that they aren't necessary. Once you can program effectively this way, you have a better perspective from which to choose the right tool. It is directed at skilled procedural programmers, not at novices.


You should attempt to build the solutions to one of the challenge problems without using if statements or the equivalent.

You may use the libraries arbitrarily, even when you are pretty sure that they are implemented with if statements.

You may use exceptions only when really needed and not as a substitute for if statements. Similarly, while loops are not to be used to simulate if statements. Your problems should be solved with polymorphism: dynamic and parameterized.

Note that if you use (for example) a hash map and the program cannot control what is used as a key in a get (user input, for example), then it might return null. You are allowed to use an exception to handle this situation, or even an if. If you can't get all the if statements out of your code, then see how few you really need to live with.
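As an aside, modern Java can absorb the missing-key case with a default object rather than a null check. This sketch (hypothetical names, and using library methods that postdate the original workshop) pairs a hash map of behavior objects with getOrDefault:

```java
import java.util.Map;

public class LookupDemo {
    // Behavior objects stored in a map; a "null object" stands in
    // for any key the user might type that we don't know about.
    interface Command { String run(); }

    public static void main(String[] args) {
        Command unknown = () -> "unknown command";   // the default behavior
        Map<String, Command> commands = Map.of(
            "hello", () -> "hello, world",
            "bye",   () -> "goodbye");

        // getOrDefault absorbs the missing key: no null check, no if.
        System.out.println(commands.getOrDefault("hello", unknown).run());
        System.out.println(commands.getOrDefault("quit",  unknown).run());
    }
}
```

The decision about what to do with an unrecognized key is made once, when the default object is chosen, not at every lookup site.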


Participants worked in pairs. They had a choice of programming scenarios, some of which were adapted from work by others:

This pdf file contains the full descriptions given to participants, including some we did not try with workshop participants. If you come up with a good scenario for this challenge, or variations on ours, please let me know.


When participants hit a block and asked for pointers, we offered hints of various kinds, such as:

•  When you have two behaviors, put them into different objects. The objects can be created from the same class or from related classes. If they are from the same class, you may want to use parameterized polymorphism. When the classes are different, you can use dynamic polymorphism. This is the easy step. Java interfaces are your friend here.

•  When you have the behaviors in different objects, find a way to bring the right object into play at the right time. That way, you don't need to use ad hoc methods to distinguish among them. This is the hard part. Sometimes you can develop a state change diagram that makes it easier. Then you can replace one object with another at the well-defined state change points.

•  Note that you can eliminate a lot of if statements by capturing early decisions -- perhaps made using if statements -- in different objects. These objects can then act as "flags with behavior" when they are passed around the program. The flag object then encapsulates the earlier decision. Now try to capture those decisions without if statements.

(Note that this technique alone is a big win in improving the maintainability of code. You replace many if statements spread throughout a program with one, giving you a single point of change in future.)

•  Delegation from one object to another is a real help in this exercise. This leads you to the Strategy design pattern. An object M can carry with it another, S, that encapsulates the strategy M has for solving a problem. To perform the associated behavior, M delegates to S. By changing the strategy object, you can change the behavior of the object that carries it. M seems to behave polymorphically, but it is really S that does the work.

•  You can modify or enhance strategies using the Decorator design pattern. A decorator D implements the same interface as the thing it decorates, M. When sent a message, the decorator performs some action and also sends the same message to the object it decorates. Thus the behavior of M is executed, but also more. Note that D can provide additional functionality both before and after sending the message to M. A functional method can return quite a different result when sent through a decorated object.

•  You can often choose strategies or other objects that encapsulate decisions using the Factory design pattern. A hash map can be a simple factory.

•  You can sometimes use max and min to map a range of values onto a smaller set and then use an index into a collection to choose an object. max and min are library functions so we don't care here how they might be implemented.
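The last hint can be made concrete with a small sketch in modern Java (hypothetical names, one possible solution among many): strategy objects held in an array, selected by arithmetic and min/max rather than by a chain of ifs.

```java
public class NoIfDemo {
    // One strategy object per grade; min/max plus indexing selects
    // the right one, so no if/else chain is needed.
    interface Grade { String letter(); }

    static final Grade[] GRADES = {
        () -> "F", () -> "D", () -> "C", () -> "B", () -> "A"
    };

    static String gradeFor(int score) {
        // Map 0..100 onto an index 0..4, clamping with min and max:
        // scores below 50 land on "F", 90 and above on "A".
        int index = Math.max(0, Math.min(GRADES.length - 1, score / 10 - 5));
        return GRADES[index].letter();
    }

    public static void main(String[] args) {
        System.out.println(gradeFor(87));   // B
        System.out.println(gradeFor(42));   // F
    }
}
```

The grading example is contrived on purpose, in the spirit of the étude: the point is to see that the selection can be done with objects and arithmetic at all.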

At the end of the workshop, we gave one piece of general advice: Doing this exercise once is not enough. Like an étude, it can be practiced often, in order to develop and internalize the necessary skills and thought patterns.


I'll not give any answers or examples here, so that readers can take the challenge themselves and try their hand at writing code. In future posts, I'll write up examples of the techniques described in the hints, and perhaps a few others.

Joe wrote up his étude in Variations on a Polymorphic Theme. In it, he gives some advice and a short example.

Serge Demeyer, Stéphane Ducasse, and Oscar Nierstrasz wrote a wonderful paper that places if statements in the larger context of an OO system, Transform Conditionals: a Reengineering Pattern Language.

If you like the Polymorphism Challenge, you might want to try some other challenges that ask you to do without features of your preferred language or programming style that you consider essential. Check out these Programming Challenges.

Remember, constraints help us grow. Constraints are where creative solutions happen.

I'll close with the same advice we gave at the end of the workshop: Doing this exercise once is not enough, especially for OO novices. Practice it often, like an étude or kata. Repetition can help you develop the thought patterns of an OO programmer, internalize them, and build the skills you need to write good OO programs.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

February 13, 2012 4:13 PM

The Patterns, They Are A-Changin'

Seth Godin's recent blog entry at the Domino Project talks about how changes in the book are changing the publishing industry. He doesn't use the word 'pattern' in his discussion, in the sense of an Alexandrian pattern, but that's how I see his discussion. The forces at play in our world are changing, which leads to changes in the forms that find equilibrium. In particular, Godin mentions:

  • Patterns of length. There are tradeoffs involving the cost of binding and the minimum viable selling price for publishers versus the technology of binding and a maximum viable purchase price for consumers. These have favored certain sizes in print.
  • Patterns of self-sufficiency. "Electronic forms link." Print forms must stand on their own.
  • Patterns of consumption. These are driven even more by economic forces than the other two types are, and less by technical forces. Consuming e-books is, he says, "more like browsing than buying."

Godin looks mostly at the forward implications of changes in the patterns of self-sufficiency, but I've been thinking about the backward implications of print publications having to stand on their own. As noted in a recent entry, I have begun to adapt a couple of my blog entries into articles for print media, such as newspaper and magazine opinion pieces. My blog entries link generously and regularly to my earlier writings, because much of what I write is part of an ongoing process of thinking out loud. I also link wherever I can to other peoples' works, whether blogs, journal articles, code, or other forms. That works reasonably well in a blog, because readers can see and follow the context in which the current piece is written. It also means that I don't have to re-explain every idea that a given entry deals with; if it's been handled well in a previous entry, I link to it.

As I try to adapt individual blog entries, I find that they are missing so much context when we strip the links out. In some places, I can replace the link with a few sentences of summary. But how much should I explain? It's easy to find myself turning a one- or two-page blog entry into four pages, or ten. The result is that the process of "converting an entry into an article" may become more like writing a new piece than like modifying an existing piece. That's okay, of course, but it's a different task and requires a different mindset.

For someone steeped in Alexander's patterns and the software patterns community, this sentence by Godin signals a shift in the writing and publishing patterns we are all used to:

As soon as paper goes away, so do the chokepoints that created scarcity.

Now, the force of abundance begins to dominate scarcity, and the forces of bits and links begin to dominate paper and bindings and the bookshelves of the neighborhood store.

It turns out that the world has for the last hundreds of years been operating within a small portion of the pattern language of writing and publishing books. As technology and people change, the equilibrium points in the publishing space have changed... and so we need to adopt a new set of patterns elsewhere in the pattern language. At first blush, this seems like unexplored territory, but in fact it is more. This part of the pattern language is, for the most part, unwritten. We have to discover these patterns for ourselves.

Thus, we have the Domino Project and others like it.

The same shifting of patterns is happening in my line of work, too. A lot of folks are beginning to explore the undiscovered country of university education in the age of the internet. This is an even greater challenge, I think, because people and how they learn are even more dominant factors in the education pattern language than in the publishing game.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Teaching and Learning

February 03, 2012 4:46 PM

Tactical Decisions That Affect Students

My recent post about teaching with authenticity and authority is about attitude. Some teaching behaviors happen in the moment. They are tactical decisions in response to specific events, but they, too, can have long-term effects on how our students see us and on how well a course proceeds.

A conversation with a colleague this week reminded me of a situation I've encountered several times over the years with faculty from across my university. Many have a lot of experience teaching yet seem to make a particularly bad tactical decision.

Our university has a policy -- approved by the faculty -- that students are not to be penalized for missing class to participate in university-sanctioned events. These often involve representing the school in athletic and other extracurricular activities such as debate, but sometimes they relate to important on-campus functions involving scholarships and the like.

Many faculty have attendance policies in their courses. For instance, they may take roll each day, and if a student misses more than four class meetings, then the student's grade in the course is reduced. These faculty genuinely believe that class participation is essential and want to encourage students to take attendance seriously. Because they have a large allowance for absences, many do not distinguish between "unexcused" absences and "excused" ones. That simplifies bookkeeping, and in the end it almost never matters to the student's final grade.

Still some students worry that they'll end up with unexpected reasons to miss the allotment of "free" days. When they have a university-sanctioned activity, they want the professor not to hold it against them. Don't worry, the prof tells them, it's unlikely to affect your grade, and if it does, I'll reconsider. Still the students worry, and point to the policy that they should not be penalized for the absence.

When I have talked to faculty in these situations, often in my role as a faculty rep on my university's athletics advisory council but sometimes as department head, I am surprised by the faculty's responses when they learn of the faculty-approved policy.

"My attendance sheet is a record of fact. I cannot mark students present if they are absent."

Couldn't you treat the sheet as a record of unexcused absences?

"No. It also indicates that students have participated in class that day, and at what level. If they are absent, then they have not participated."

I am careful in these conversations to respect their autonomy and their control over the classroom, and the conversations go quite well. They understand the student's concern. They readily acknowledge that the absence is unlikely to have an effect on the student's grade and assure me that they have assured the student as much. They just don't want to mess with their systems. And they are baffled by the student's continued unhappiness with the result.

This baffles me. If attendance or absence on that one day is unlikely to have an effect on the student's grade, what is the advantage of falling on one's sword over such a small issue? And, metaphorically, falling on one's sword is what happens. The student is disenchanted at best, and unhappy enough to grieve the action at worst.

I have come to see such situations in this way: "unlikely to have an effect on the student's grade" creates an element of uncertainty. As teachers, we can bear the uncertainty, or we can leave our students to bear it. There may well be good reasons to leave uncertainty in our students' minds. We should ask ourselves, "Is this one of them?" If not, then we should bear it. Tell the student clearly, "This will not affect your grade", and make whatever change in our system of recording or grading needed to make it so.

The result of this approach is, most likely, a student with a positive outlook about us and about our course. The lack of fear or uncertainty frees them to learn better. The student may even think that the prof cares more about the student's learning than about his or her system.

Of course, all bets are off for repeated offenders, or for students who exhibit patterns of disengagement from the course. Students have to be invested in their own learning, too.

This scenario is but one of many in which instructors are required to make tactical decisions. The effects of bad decisions can accumulate, in a single student and in the class more generally. They then have negative effects on student attitude, on the atmosphere in the classroom, and on teaching and learning. The good professors I know seem to reliably make good tactical decisions in the moment, and when they err they are willing and able to make amends.

The decisions one makes in such situations are a direct result of one's general attitude, but I think that we can all learn to make better decisions. It is a skill that teachers can learn in the process of becoming better teachers.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

February 01, 2012 5:00 PM

"You Cannot Trust Your Creativity Yet"

You've got to learn your instrument.
Then, you practice, practice, practice.
And then, when you finally get up there on the bandstand,
forget all that and just wail. -- Charlie Parker

I signed up for an opportunity to read early releases of a book in progress, Bootstrapping Design. Chapter 4 contains a short passage that applies to beginning programmers, too:

Getting good at design means cultivating your taste. Right now, you don't have it. Eventually you will, but until then you cannot trust your creativity. Instead, focus on simplicity, clarity, and the cold, hard science of what works.

M.C. Escher, 'Hands'

This is hard advice for people to follow. We like to use our brains, to create, to try out what we know. I see this desire in many beginning programming students. The danger grows as our skills grow. One of my greatest frustrations comes in my Programming Languages course. Many students in the course are reluctant to use straightforward design patterns such as mutual recursion.
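Mutual recursion, in this context, is the pattern of writing one function for each syntactic category in a grammar, with the functions calling one another just as the grammar's rules refer to one another. A minimal sketch in Python (the course uses Scheme; the nested-list "grammar" here is my own stand-in, not an example from the course):

```python
# Grammar: an expression is either a number or a list of expressions.
# One function handles expressions, a second handles expression lists,
# and each calls the other -- the shape of the code follows the grammar.

def count_numbers(expr):
    """Count the numeric atoms in a nested expression."""
    if isinstance(expr, (int, float)):
        return 1
    return count_numbers_in_list(expr)

def count_numbers_in_list(exprs):
    """Count the numeric atoms in a list of expressions."""
    if not exprs:
        return 0
    return count_numbers(exprs[0]) + count_numbers_in_list(exprs[1:])

print(count_numbers([1, [2, 3], [], [[4]]]))  # 4
```

The point of the pattern is that a reader who knows the grammar can predict the structure of the code, which is exactly what makes such programs easy to read and grade.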

At one level, I understand their mindset. They have started to become accomplished programmers in other languages, and they want to think, design, and program for themselves. Oftentimes, their ill-formed approaches work okay in the small, even if the code makes the prof cringe. As our programs grow, though, the disorder snowballs. Pretty soon, the code is out of the student's control. The prof's, too.

A good example of this phenomenon, in both its positive and negative forms, happened toward the end of last semester's course. A series of homework assignments had the students growing an interpreter for a small, Scheme-like language. It eventually became the largest functional program they had ever written. In the end, there was a big difference between code written by students who relied on "the cold, hard science" we covered in class and code written by students who had wandered off into the wilderness of their own creativity. Filled with common patterns, the former was relatively easy to read, search, and grade. The latter... not so much. Even some very strong students began to struggle with their own code. They had relied too much on their own approaches for decomposing the problem and organizing their programs, but those ideas weren't scaling well.

I think what happens is that, over time, small errors, missteps, and complexities accumulate. It's almost like the effect of rounding error when working with floating point numbers. I vividly remember experiencing that in my undergrad Numerical Analysis courses. Sadly, few of our CS students these days take Numerical Analysis, so their understanding of the danger is mostly theoretical.
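The rounding-error analogy is easy to demonstrate. A tiny Python illustration (my own, not from the original post): each addition of 0.1 introduces an error far too small to notice, yet the accumulated sum still misses the exact answer.

```python
# 0.1 has no exact binary floating point representation, so every
# addition contributes a tiny rounding error. Over many additions,
# the errors accumulate into a visible discrepancy.

total = 0.0
for _ in range(1000):
    total += 0.1

print(total == 100.0)              # False: the errors have accumulated
print(abs(total - 100.0) < 1e-9)   # True: but each individual error is tiny
```

Each step looks harmless in isolation; only the aggregate reveals the drift. That is the shape of the problem with ad hoc design decisions, too.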

Perhaps the most interesting embodiment of trusting one's own creativity too much occurred on the final assignment of the term. After several weeks and several assignments, we had a decent sized program. Before assigning the last set of requirements, I gave everyone in the class a working solution that I had written, for reference. One student was having so much trouble getting his own program to work correctly, even with reference to my code, that he decided to use my code as the basis for his assignment.

Imagine my surprise when I saw his submission. He used my code, but he did not follow the example. The code he added to handle the new requirements didn't look anything like mine, or like what we had studied in class. It repeated many of the choices that had gotten him into hot water over the course of the earlier assignments. I could not help but chuckle. At least he is persistent.

It can be hard to trust new ideas, especially when we don't understand them fully yet. I know that. I do the same thing sometimes. We feel constrained by someone else's programming patterns and want to find our own way. But those patterns aren't just constraints; they are also a source of freedom. I try to let my students grow in freedom as they progress through the curriculum, but sometimes we encounter something new like functional programming and have to step back into the role of uncultivated programmer and grow again.

There is great value in learning the rules first and letting our tastes and design skill evolve slowly. Seniors taking project courses are ready, so we turn them loose to apply their own taste and creativity on Big Problems. Freshmen usually are not yet able to trust their own creativity. They need to take it slow.

To "think outside the box", you have to start with a box. That is true of taste and creativity as much as it is of knowledge and skill.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

January 31, 2012 4:01 PM

Teaching with Authenticity and Authority

What is this?
A new teaching with authority.
-- Mark 1:27

Over the last few weeks, we've been processing student assessments from fall semester. Reading student comments about my course and other profs' courses has me thinking about the different ways in which students "see" their instructors. Two profs can be equally knowledgable in an area yet give off very different vibes to their class. The vibe has a lot to do with how students interpret the instructor's behavior. It also affects student motivation and, ultimately, student learning.

Daniel Lemire recently offered two rules for teaching in the 21st century, one of which was to be an authentic role model. If students know that "someone ordinary" like a professor was able to master the course material, then they will have reason to believe that they can do the same. Authenticity is invaluable if we hope to model the mindset of a learner for our students.

Authenticity is also a huge factor in the classroom in another way. Students are sensitive to whether we are authentic users of knowledge. If I am teaching agile approaches to software development but students perceive that I am not an agile developer when writing my own code outside the course, then they are less likely to take the agile approaches seriously. If I am teaching the use of some theoretical technique for solving a problem, say, nondeterministic finite state machines, but my students perceive that I do something else when I'm not teaching the course, then their motivation to master the technique wanes.

I think that how students "see" their instructors is affected by something as important as being authentic: being authoritative. I don't mean authority in the sense of power, in particular power granted from on high. In my experience, that sort of authority is surprisingly ineffective as a tool for getting students to follow me into a difficult forest of knowledge. They respect the authority, perhaps, but they tend to balk at orders to march into the dark.

I also don't mean authority born out of perfection. If that were the case, I could never step into a classroom. Just today, I made an error in my compilers course while demonstrating a technique for converting nondeterministic finite automata into deterministic automata. Such errors occur frequently enough to disabuse both me and my students of the thought that mastery means always getting the right answers. (But not so often, I hope, as to undermine students' confidence in my mastery of the material.)
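The conversion mentioned above is the classic subset construction, in which each state of the deterministic machine is a set of states of the nondeterministic one. A minimal sketch in Python (the state numbers and example NFA are my own, and epsilon-moves are omitted for brevity):

```python
def nfa_to_dfa(nfa_start, nfa_accept, delta, alphabet):
    """Subset construction: each DFA state is a frozenset of NFA states.

    delta maps (nfa_state, symbol) -> set of successor NFA states.
    """
    start = frozenset([nfa_start])
    states, worklist, trans = {start}, [start], {}
    while worklist:
        current = worklist.pop()
        for sym in alphabet:
            # The DFA successor is the union of all NFA successors.
            target = frozenset(s for q in current
                                 for s in delta.get((q, sym), ()))
            trans[(current, sym)] = target
            if target not in states:
                states.add(target)
                worklist.append(target)
    accepting = {s for s in states if s & nfa_accept}
    return start, accepting, trans

# Example: an NFA over {a, b} accepting strings that end in "ab"
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
dstart, daccept, dtrans = nfa_to_dfa(0, {2}, delta, 'ab')

state = dstart
for ch in "aab":
    state = dtrans[(state, ch)]
print(state in daccept)  # True
```

It is an easy algorithm to fumble at the board precisely because the bookkeeping -- which sets have been seen, which transitions remain -- is mechanical, the kind of detail a program tracks better than a person.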

Instead, I am thinking of the persuasive or influential sense of authority that comes out of mastery itself. This sense of the word is more faithful to the origin of the word in the Latin auctor, also the source of "author". An author is someone who originates or gives existence to something. Among the earliest uses of "auctor" or "author" was to refer to renowned scholars in medieval universities. Such scholars created new knowledge, or at least discovered it and incorporated it into the knowledge of the community. They did not simply mimic back what was already understood.

I suspect that this is what the crowds thought in the Bible passage from Mark given above. The scribes of the time could read scripture and recite it back to people. They could even teach by elaborating on it or relating it to the lives of the people. But the essence of their teaching was already present in the scripture. Along came Jesus, who taught the scripture, but more. His elaborations and parables brought new depth to the old laws and stories. They taught something new. He was speaking with authority.

The sense of authorship or origination is key. We see this sense of authority when we speak of authors. In modern times, the word "auctor" is sometimes used to denote the person who donates the genetic material that serves as the basis for a clone. This usage draws even more heavily on the idea of creation residing in the root word.

That's all fine as philosophy and etymology, but I think this matters in the classroom, too. The authoritative teacher does not simply "read the book" and then lecture, which at a certain level is simply reading the book to the students. Good teachers understand the material at a deeper level. They use it. They apply the techniques and make connections to other topics, even other courses and disciplines. They create new ideas with the course material. They don't simply relay information to students; they also create knowledge.

Such teachers are able to teach with authority.

One of the big challenges for instructors, especially new ones, is to learn how to teach with this kind of authority. Some assume that their status as professors grants them the sort of authority founded in power, and that is what they project to their students. They know their stuff, yet they don't speak with the sort of authority that resonates with their students. When the students respond with less than capitulation, the professor reacts with power or defensiveness. These only make the problem worse.

It takes experience in the classroom for most of us to figure this out. I know it took me a while. It also takes an openness to change and a willingness to learn. Fortunately, these are traits possessed by many people who want to teach.

When things are going wrong in my classes these days, I try to step back from the situation. I ask myself, Am I being authentic? Am I speaking with an authority born out of scholarship, rather than an authority born out of status? Am I letting my students see these things?

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

January 23, 2012 4:06 PM

Learning To Do Things, and the Liberal Arts

Last summer, I wrote an entry, Failure and the Liberal Arts, which suggested (among other things) that the value of a liberal education lies in the intersections between disciplines and ideas. This contrasts with the view seemingly held by many people that a liberal education means to study the "liberal arts", mostly the humanities and arts.

The traditional liberal arts education faces a lot of challenges in the current cultural and economic climate. As colleges and universities cope with rising costs, increasing competition for students, and an unbundling of the university's products engendered by technology, undergraduate degrees filled only with history, literature, and upper-class culture -- and no obvious preparation for economic life after graduation -- create another barrier to making a sale to prospective parents and students.

The goals of a liberal education are laudable. How can we craft undergraduate experiences that help students to develop broad skills in reading, writing, and thinking? Isn't that what the traditional liberal arts do best?

Timothy Burke nails the answer to these questions in this comment on his own blog:

... a person becomes most skilled at creating, making, innovating, thinking, in ways that have value in existing professions *and* in life through indirect means. Studying communication doesn't make you a better communicator, studying entrepreneurship doesn't make you a better entrepreneur, and so on.

But this has huge, huge implications as a perspective for the content and practice of "liberal arts" as an educational object. ... Maybe learning to do things (creating, innovating, expressing, etc.) isn't advanced by anything resembling intensive study of a fixed body of knowledge, but by doing.

I couldn't agree more. This idea lies at the heart of my blog, in both theme and name: knowing and doing. Learning to do things is best accomplished by doing things, not (just) by studying a fixed body of knowledge, however intensive the studying or valuable the body of knowledge. Sure, at some point, you gotta know stuff, and intensive study of a domain is a useful and valuable enterprise. But we learn best by doing.

My compilers students will -- if all goes as we plan and hope -- learn much more this semester than how to write a compiler. They will learn things that I can't teach them directly with a lecture. And even when I could give them the lecture, they would not learn it in the same way as when they learn it on the ground, in the trenches, one shovel of dirt at a time.

Liberal arts colleges -- and universities with liberal arts cores and general education programs -- would do well to take this lesson to heart sooner rather than later.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

January 19, 2012 4:23 PM

An Adventure in Knowing Too Much ... Or Thinking Too Little

To be honest, it is a little of both.

I gave the students in my compilers course a small homework assignment. It's a relatively simple translation problem whose primary goals are to help students refresh their programming skills in their language of choice and to think about the issues we will be studying in depth in coming weeks: scanning, parsing, validating, and generating.

I sat down the other day to write a solution ... and promptly made a mess.

In retrospect, my problem was that I was somewhere in between "do it 'simple'" and "do it 'right'". Unlike most of the students in the class, I already know a lot about building compilers. I could use all that knowledge and build a multi-stage processor that converts a string in the source language (a simple template language) into a string in the target language (ASCII text). But writing a scanner, a parser, a static analyzer, and a text generator seems like overkill for such a simple problem. Besides, my students aren't likely to write such a solution, which would make my experience less valuable in helping them to solve the problem, and my program less valuable as an example of a reasonable solution.

So I decided to keep things simple. Unfortunately, though, I didn't follow my own agile advice and do the simplest thing that could possibly work. Much as with the full-compiler option, I didn't really want the simplest program that could possibly work. This problem is simple enough to solve with a single-pass algorithm, processing the input stream at the level of individual characters. That approach would work but would obscure the issues we are exploring in the course in a lot of low-level code managing states and conditions. Our goal for the assignment is understanding, not efficiency or language hackery.
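The middle ground might look something like the following Python sketch. Everything here is hypothetical -- the post never shows the actual template language, so the "{{name}}" placeholder syntax is my invention -- but it illustrates the shape of a solution between the two extremes: a regex tokenizer splits the input into literal text and placeholders, and a small loop translates token by token, with no character-level state machine and no full compiler pipeline.

```python
import re

# Hypothetical placeholder syntax: "{{name}}" marks a slot to fill.
TOKEN = re.compile(r"\{\{(\w+)\}\}")

def expand(template, bindings):
    """Translate a template string by interleaving literal text
    with looked-up placeholder values, one token at a time."""
    out, pos = [], 0
    for match in TOKEN.finditer(template):
        out.append(template[pos:match.start()])    # literal text so far
        out.append(str(bindings[match.group(1)]))  # placeholder value
        pos = match.end()
    out.append(template[pos:])                     # trailing literal text
    return "".join(out)

print(expand("Hello, {{name}}!", {"name": "world"}))  # Hello, world!
```

The tokenizer makes the lexical structure explicit without drowning it in state management, which is the kind of clarity the assignment is after.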

I was frustrated with myself, so I walked away.

Later in the day, I was diddling around the house and occasionally mulling over my situation. Suddenly, a solution came to mind. It embodied a simple understanding of the problem, in the middle ground between too simple and too complex that was just right.

I had written my original code in a test-first way, but that didn't help me avoid my mess. I know that pair programming would have. My partner would surely have seen through the complexity I was spewing to the fact that I was off track and said, "Huh? Cut that out." Pair programming is an unsung hero in cases like this.

I wonder if this pitfall is a particular risk for CS academics. We teach courses that are full of details, with the goal of helping students understand the full depth of a domain. We often write quick and dirty code for our own purposes. These are at opposite ends of the software development spectrum. In the end, we have to help students learn to think somewhere in the middle. So, we try to show students well-designed solutions that are simple enough, but no simpler. That's a much more challenging task than writing a program at either extreme. Not being full-time developers, perhaps our instincts for finding the happy medium aren't as sharp as they might be.

As always, though, I had fun writing code.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

January 10, 2012 4:05 PM

Looking Forward: Preparing for Compilers

Spring semester is underway. My compilers course met for the first time today. After all these years, I still get excited at the prospect of writing a compiler. On top of that, we get to talk about programming languages and programming all semester.

I've been preparing for the course since last semester, during the programming languages course I debriefed recently. I've written blog entries as I planned previous offerings of the compiler course, on topics such as short iterations and teaching by example, fifteen compilers in fifteen weeks and teaching the course backwards. I haven't written anything yet this time for one of the same reasons I haven't been writing about my knee rehab: I haven't had much to say. Actually, I have two small things.

First, on textbooks. I found that the textbook I've used for the last few offerings of the course now costs students over $140, even at Amazon. That's no $274.70, but sheesh. I looked at several other popular undergrad compiler texts and found them all to be well over $100. The books my students might want to keep for their professional careers are not suitable for an undergrad course, and the ones that are suitable are expensive. I understand the reasons why yet can't stomach hitting my students with such a large bill. The Dragon book is the standard, of course, but I'm not convinced it's a good book for my audience -- too few examples, and so much material. (At least it's relatively inexpensive, at closer to $105.)

I found a few compiler textbooks available free on-line, including that $275 book I like. Ultimately I settled on Torben Mogensen's Basics of Compiler Design. It covers the basic material without too much fluff, though it lacks a running example with full implementation. I'll augment it with my own material and web readings. The price is certainly attractive. I'll let you know how it works out.

Second, as I was filing a pile of papers over break, I ran across the student assessments from the last offering of the course. Perfect timing! I re-read them and am able to take into account student feedback. The last group was pretty pleased with the course and offered two broad suggestions for improvement: more low-level details and more code examples. I concur in both. It's easy when covering so many new ideas to stay at an abstract level, and the compiler course is no exception. Code examples help students connect the ideas we discuss with the reality of their own projects.

These are time consuming improvements to make, and time will be at a premium with a new textbook for the course. This new text makes them even more important, though, because it has few code examples. My goal is to add one new code example to each week of the course. I'll be happy if I manage one really good example every other week.

And we are off.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

January 04, 2012 3:26 PM

Looking Backward: Closing the Book on Fall Semester

At Week 2 of my fall programming languages course, I planned to write an entry called More Concrete Ideas for the course, following up on an entry about extravagant ideas. I even had a geeky Scheme joke to open, asking whether that was ((More Concrete) Ideas) or (More (Concrete Ideas)).

Suddenly, or so it seemed, we were two-thirds of the way through the course, and I still hadn't written it. Now we are three weeks past the end of the course. Time to debrief. (At least I'm not waiting to debrief the course until the next time we offer it, in Spring 2013!)

Due to a recent change in my department's prerequisite structure, I had a more diverse student population this semester than I have ever had in Programming Languages. Their diversity covered multiple dimensions, including the languages they knew well and the level of experience they had writing larger programs. Many did not know OOP or Java yet, which have long been staples of my reference set.

This led to an interesting phenomenon. One particular student was at the low end of background, experience, and skill set. In most ways, he was more of a rank beginner than the usual student in the course. Over the course of the semester, he asked me questions unlike any I had been asked in this course for a long while, if ever. His questions caused me to step back and focus on basic elements of the craft of programming that I have never discussed in the context of this course and functional programming. At times it was frustrating, but most of the time it was a good challenge.

More importantly, his questions exposed for me the sort of difficulties that I often miss when students are doing better. Even students who are doing well in the course develop subtle misconceptions about the material, or start with misconceptions that my teaching does not address. However, their decent scores on quizzes and their general understanding of the material allow me to gloss over misconceptions that come back to trouble us later.

This is the frame of mind I was in when I wrote an entry on open dialogue a few weeks back. Even with my beginner-like student, I did not ask enough questions soon enough. For all I know, students in previous semesters have had similar problems that I never addressed in class. After this round, I am thinking more concretely about where I have gone wrong and how I might do better in the future.

As mentioned earlier, I offered students a chance to document some of their epic failures during the semester. Of course, students always have that opportunity; this time, I offered extra credit as an inducement to take on the challenge.

In the end, roughly a third of the students submitted essays, though all came in very late in the semester, enough so that anything I might fold back into the course will have to wait until the next time around.

I enjoyed several of the essays. One student learned one valuable lesson this semester: "There are only so many hours in a day." Another student wrote,

My usual strategy with all but the most basic of the problems is to stare at them for a while and then walk away.

(A former student wondered, "But does he ever come back?")

Like many programmers, this student lets his brain work on a problem over time and eventually arrives at a solution without the stress of staring at the problem until he gets it right the first time. However, he found this semester that even this approach cannot alleviate all stress, because sometimes no answer seems to work. What then?

I'll try the Epic Fail idea again in coming semesters. I'll probably give students a bit more guidance about the kind of failures I mean (not run-of-the-mill stuff, really) and the kind of write-up that I'm looking for. As always, students appreciate examples!

In that vein, I had thought this semester that I would provide students with occasional example programs (1) to illustrate functional programming and Scheme outside the realm of the study of programming languages and (2) to illustrate programming language topics with non-FP code. The idea was to broaden the reach of the course, to help students struggling with FP and to open doors for those students who were really interested and ready to see more.

Unfortunately, I did not do nearly enough of this. Next time, I will find a way, somehow. Among the topics that can benefit most from this idea are continuation-based web servers and MapReduce.

As I graded the final exam, I was most disappointed with student understanding of curried functions and of the equivalence between local variables and function application. Currying is such a beautiful idea at the heart of functional style, which we saw repeatedly throughout the semester. The local variable/function app idea is one we discussed in some detail at least three times during the semester, in different contexts, and which I highlighted as "something that will be on the final".
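Both ideas are easy to state concretely. A small sketch in Python (the course language is Scheme; Python closures stand in here, and the Scheme forms appear only in the comments):

```python
# Currying: a two-argument function becomes a chain of
# one-argument functions, each returning the next.
def add(x):
    return lambda y: x + y

add5 = add(5)       # partial application falls out for free
print(add5(3))      # 8

# Local variables as function application:
#   (let ((x 5)) (+ x 1))  is equivalent to  ((lambda (x) (+ x 1)) 5)
# A "let" just binds a name to a value in a body, which is exactly
# what applying a one-parameter function does.
let_version = (lambda x: x + 1)(5)
print(let_version)  # 6
```

The second equivalence is worth dwelling on: it means local variables are not a primitive feature at all, but syntactic sugar over function application.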

I'll think long and hard before I teach the course again about ways to help students understand these ideas better. If you have any suggestions, please let me know!

One student tweeted after the course:

I think the larger examples are what helped ideas stick in my mind. This class had the most information I've learned in a class before. It was a challenge which was really refreshing. Gliding through classes put me into a lull for awhile.

It's important for me and other teachers to remember just how heavy some courses can be. I'm fully aware that this course is different than any students have taken before and that it will challenge them in new ways. I tell them so on Day 1. Still, it's good for me to remember just what this means for students on a day-to-day basis, and to find ways to help them face the challenge while they are in the trenches.

All in all, this was a good group of students. They were game for a challenging, deep course and gave me a lot of their time and energy. I can't ask for much more than that. The good news is that they made decent progress and should be better prepared for some of the challenges they'll face in their careers. And a good number of them are going on to the compilers course. Great fun will be had by all.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 19, 2011 4:49 PM

"I Love The Stuff You Never See"

I occasionally read and hear people give advice about how to find a career, vocation, or avocation that someone will enjoy and succeed in. There is a lot of talk about passion, which is understandable. Surely, we will enjoy things we are passionate about, and perhaps then we want to put in the hours required to succeed. Still, "finding your passion" seems a little abstract, especially for someone who is struggling to find one.

This weekend, I read A Man, A Ball, A Hoop, A Bench (and an Alleged Thread)... Teller!. It's a story about the magician Teller, one half of the wonderful team Penn & Teller, and his years-long pursuit of a particular illusion. While discussing his work habits, Teller said something deceptively simple:

I love the stuff you never see.

I knew immediately just what he meant.

I can say this about teaching. I love the hours spent creating examples, writing sample code, improving it, writing and rewriting lecture notes, and creating and solving homework assignments. When a course doesn't go as I had planned, I like figuring out why and trying to fix it. Students see the finished product, not the hours spent creating it. I enjoy both.

I don't necessarily enjoy all of the behind-the-scenes work. I don't really enjoy grading. But my enjoyment of the preparation and my enjoyment of the class itself -- the teaching equivalent of "the performance" -- carries me through.

I can also say the same thing about programming. I love to fiddle with source code, organizing and rewriting it until it's all just so. I love to factor out repetition and discover abstractions. I enjoy tweaking interfaces, both the interfaces inside my code and the interfaces my code's users see. I love that sudden moment of pleasure when a program runs for the first time. Users see the finished product, not the hours spent creating it. I enjoy both.

Again, I don't necessarily enjoy everything that I have to do behind the scenes. I don't enjoy twiddling with configuration files, especially at the interface to the OS. Unlike many of my friends, I don't always enjoy installing and uninstalling all the libraries I need to make everything work in the current version of the OS and interpreter. But that time seems small compared to the time I spend living inside the code, and that carries me through.

In many ways, I think that Teller's simple declaration is a much better predictor of what you will enjoy in a career or avocation than other, fancier advice you'll receive. If you love the stuff other folks never see, you are probably doing the right thing for you.

Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

December 15, 2011 4:08 PM

Learning More Than What Is Helpful Right Now

Stanley Fish wrote this week about the end of a course he taught this semester, on "law, liberalism and religion". In this course, his students read a number of essays and articles outside the usual legal literature, including works by Locke, Rawls, Hobbes, Kant, and Rorty. Fish uses this essay to respond to recent criticisms that law schools teach too many courses like this, which are not helpful to most students, who will, by and large, graduate to practice the law.

Most anyone who teaches in a university hears criticisms of this sort now and then. When you teach computer science, you hear them frequently. Most of our students graduate and enter the practice of software development. How useful are the theory of computation and the principles of programming languages? Teach 'em Java Enterprise Edition and Eclipse and XSLT and Rails.

My recent entry Impractical Programming, With Benefits starts from the same basic premise that Fish starts from: There is more to know about the tools and methodologies we use in practice than meets the eye. Understanding why something is as it is, and knowing that something could be better, are valuable parts of a professional's preparation for the world.

Fish talks about these values in terms of the "purposive" nature of the enterprise in which we practice. You want to be able to think about the bigger picture, because that determines where you are going and why you are going there. I like his connection to Searle's speech acts and how they help us to see how the story we tell gives rise to the meaning of the details in the story. He uses football as his example, but he could have used computer science.

He sums up his argument in this way:

That understanding is what law schools offer (among other things). Law schools ask and answer the question, "What's the game here?"; the ins and outs of the game you learn later, as in any profession. The complaint ... is that law firms must teach their new hires tricks of the trade they never learned in their contracts, torts and (God forbid) jurisprudence classes. But learning the tricks would not amount to much and might well be impossible for someone who did not know -- in a deep sense of know -- what the trade is and why it is important to practice it.

Such a deep understanding is even more important in a discipline like computing, because our practices evolve at a much faster rate than legal practices. Our tools change even more frequently. When I taught functional programming ten or fifteen years ago, many of my students simply humored me. This wasn't going to help them with Windows programming, but, hey, they'd learn it for my sake. Now they live in a world where Scala, Clojure, and F# are in the vanguard. I hope what they learned in my Programming Languages course has helped them cope with the change. Some of them are even leading the charge.

The practical test of whether my Programming Languages students learned anything useful this semester will come not next year, but ten or fifteen years down the road. And, as I said in the Impractical Programming piece, a little whimsy can be fun in its own right, even as it stretches your brain.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

December 12, 2011 4:43 PM

Getting Serious About Open, Honest Dialogue

A couple of weeks ago, I read this article about the syllabi that late author David Foster Wallace wrote for his intro lit courses. This weekend, I finally got to read the syllabi themselves. I've generally found Wallace's long fiction ponderous, but I enjoyed reading him at the scale of a syllabus.

He sounds like a good teacher. This passage, from pages 3-4 of the syllabus, is an awesome encouragement and defense of asking questions in class:

Anybody gets to ask any question about any fiction-related issues she wants. No question about literature is stupid. You are forbidden to keep yourself from asking a question or making a comment because you fear it will sound obvious or unsophisticated or lame or stupid. Because critical reading and prose fiction are such hard, weird things to try to study, a stupid-seeming comment or question can end up being valuable or even profound. I am deadly serious about creating a classroom environment where everyone feels free to ask or speak about anything she wishes. So any student who groans, smirks, mimes machine-gunning or onanism, eye-rolls, or in any way ridicules some other student's in-class question/comment will be warned once in private and on the second offense will be kicked out of class and flunked, no matter what week it is. If the offender is male, I am also apt to find him off-campus and beat him up.


Perhaps this stands out in greater relief to me as this semester's Programming Languages course winds down. We did not have a problem with students shutting down other students' desire to comment and inquire, at least not that I noticed. My problem was getting students to ask questions at all. Some groups of students take care of themselves in this regard; others need encouragement.

I didn't react quickly enough this semester to recognize this as a too-quiet group and to do more to get them to open up. The real problem only became apparent to me at about the 75% mark of the course. It has been driven home further over the last couple of weeks, as a few students have begun to ask questions in preparation for the final. Some of their misconceptions run deep, and we would have all been better off to uncover and address them long ago. I'll be more careful next time.

The above paragraph sets a high standard, one I'm not sure I have the energy or acumen to deliver. Encouragement and policies like this create a huge burden on the instructor, who must walk a very tough walk. Promises made and unkept are usually worse than promises never made at all. This is especially true when the trust they seek to develop involves the fears and personal integrity of students. With promises like these, the professor's personal integrity is on the line, too.

Still, I can aspire to do more. Even if I don't reach Wallace's level, perhaps I can make my course enough better that students will achieve more.

(And I love reading a syllabus that makes me look up the definition of a word. Those literature professors...)

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

December 02, 2011 4:28 PM

Impractical Programming, with Benefits

I linked to Jacob Harris's recent In Praise of Impractical Programming in my previous entry, in the context of programming's integral role in the modern newsroom. But as the title of his article indicates, it's not really about the gritty details of publishing a modern web site. Harris rhapsodizes about wizards and the magic of programming, and about a language that is for many of my colleagues the poster child for impractical programming languages, Scheme.

If you clicked through to the article but stopped reading when you ran into MIT 6.001, you might want to go back and finish reading. It is a story of how one programmer looks back on college courses that seemed impractical at the time but that, in hindsight, made him a better programmer.

There is a tension in any undergraduate CS program between the skills and languages of today and big ideas that will last. Students naturally tend to prefer the former, as they are relevant now. Many professors -- though not all -- prefer academic concepts and historical languages. I encounter this tension every time I teach Programming Languages and think, should I continue to use Scheme as the course's primary language?

As recently as the 1990s, this question didn't amount to much. There weren't any functional programming languages at the forefront of industry, and languages such as C++, Java, and Ada didn't offer the course much.

But now there are Scala and Clojure and F#, all languages in play in industry, not to mention several "pretty good Lisps". Wouldn't my students benefit from the extensive libraries of these languages? Their web-readiness? The communities connecting the languages to Hadoop and databases and data analytics?

I seriously consider these questions each time I prep the course, but I keep returning to Scheme. Ironically, one reason is precisely that it doesn't have all those things. As Harris learned,

Because Scheme's core syntax is remarkably impoverished, the student is constantly pulling herself up by her bootstraps, building more advanced structures off simpler constructs.

In a course on the principles of programming languages, small is a virtue. We have to build most of what we want to talk about. And there is nothing quite so stark as looking at half a page of code and realizing, "OMG, that's what object-oriented programming is!", or "You mean that's all a variable reference is?" Strange as it may sound, the best way to learn deeply the big concepts of language may be to look at the smallest atoms you can find -- or build them yourself.
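That moment of recognition is easy to recreate in any language with closures. Here is a sketch of the idea in Python rather than Scheme (the names `make_counter` and `dispatch` are my own illustration, not anything from Harris or the course): an "object" is nothing more than a closure over some state, plus a function that dispatches messages to methods.

```python
# A minimal sketch of "objects from scratch": each call to make_counter
# returns a closure that carries its own private state and answers messages.
def make_counter(start):
    state = {"count": start}          # instance variable, captured by the closure

    def increment():
        state["count"] += 1
        return state["count"]

    def value():
        return state["count"]

    # the "object" is a dispatch function mapping message names to methods
    def dispatch(message, *args):
        methods = {"increment": increment, "value": value}
        return methods[message](*args)

    return dispatch

counter = make_counter(10)
counter("increment")
counter("increment")
print(counter("value"))   # prints 12 -- each closure carries its own state
```

Half a page of code like this is the sort of thing that can trigger the "that's what object-oriented programming is!" realization.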

Harris won't argue that journalism schools should "squander ... dearly-won computer-science credits on whimsical introductions to programming" such as this. I won't even argue that we in CS spend too many of our limited credits on whimsy. But we shouldn't renounce our magic altogether, either, for perfectly practical reasons of learning.

And let's not downplay too much the joy of whimsy itself. Students have their entire careers to muck around in a swamp of XML and REST and Linux device drivers, if that's what they desire. There's something pretty cool about watching DrRacket spew, in a matter of a second or two, twenty-five lines of digits as the value of a trivial computation.

As Harris says,

... if you want to advance as a programmer, you need to take some impractical detours.

He closes with a few suggestions, none of which lapse into the sort of navel-gazing and academic irrelevance that articles like this one sometimes do. They all come down to having fun and growing along the way. I second his advice.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 30, 2011 12:07 PM

Being Less Helpful

I've noticed myself being less helpful in the last couple of weeks. Students have come to me with questions -- about homework questions, and quizzes earlier in the semester, and topics we've covered in lecture. Advisees have come to me with questions about degree requirements and spring registration. In most cases, they expect to receive answers: direct, specific, concrete solutions to their problems.

But lately, that's not what I've been giving most of them.

I've been asking a lot of questions myself. I've pointed them to specific resources: lecture notes, department web pages, and the university catalog. All of these resources are available on-line. In the case of my course, detailed lecture notes, all assignments, and all source code are posted on the course web site. Answers to many questions are in there, sometimes lurking, other times screaming in the headlines.

But mostly, I've been asking a lot of questions.

I am not trying to be unhelpful. I'm trying to be more helpful in the longer run. Often, students have forgotten that we had covered a topic in detail in class. They will be better off re-reading the material, engaging it again, than being given a rote answer now. If they've read the material and still have questions, the questions are likely to be more focused. We will be able to discuss more specific misunderstandings. I will usually be able to diagnose a misconception more accurately and address it specifically.

In general, being less helpful is essential to helping students learn to be self-sufficient, to learn better study skills, and to develop better study habits. It's just as important a part of their education as any lecture on higher-order procedures.

(Dan Meyer has been agitating around this idea for a long time in how we teach K-12 mathematics. I often find neat ways to think about being less helpful on his blog, especially in the design of the problems I give my students.)

However, I have to be careful not to be cavalier about being less helpful. First of all, it's easy for being less helpful to morph into being unhelpful. Good habits can become bad habits when left untended.

Second, and more important, trends in the questions that students ask can indicate larger issues. Sometimes, they can do something better to improve their learning, and sometimes I can do something better. For example, from my recent run of being less helpful, I've learned that...

  • Students are not receiving the same kind of information they used to receive about scheduling and programs of study. To address this, I'll be communicating more information to my advisees earlier in the process.
  • Students in Programming Languages are struggling with a certain class of homework problems. To fix this, I plan to revamp several class sessions in the middle of the semester to help them learn some of these ideas better and sooner. I think I'll also provide more concrete pointers on a couple of homework assignments.

Being less helpful now is a good strategy only if both the students and I have done the right kind of work earlier.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

November 18, 2011 2:10 PM

Teachers Working Themselves Out Of a Job

Dave Rooney wrote a recent entry on what he does when coaching in the realm of agile software development. He summarizes his five main tasks as:

  • Listen
  • Ask boatloads of questions
  • Challenge assumptions
  • Teach/coach Agile practices
  • Work myself out of a job

All but the last are a part of my job every day. Listening, asking questions, and challenging assumptions are essential parts of helping people to learn, whatever one's discipline or level of instruction. As a CS prof, I teach a lot of courses that instruct or require programming, and I look for opportunities to inject pragmatic programming skills and agile development practices.

What of working myself out of a job? For consultants like Rooney, this is indeed the goal: help an organization get on a track where they don't need his advice, where they can provide coaching from inside, and where they become sufficient in their own practices.

In a literal sense, this is not part of my job. If I do my job well, I will remain employed by the same organization, or at least have that option available to me.

But in another sense, my goals with respect to working myself out of a job are the same as a consultant's, only at the level of individual students. I want to help students reach a point where they can learn effectively on their own. As much as possible, I hope for them to become more self-sufficient, able to learn as an equal member of the larger community of programmers and computer scientists.

A teacher's goal is, in part, to prepare students to move on to a new kind of learning, where they don't need us to structure the learning environment or organize the stream of ideas and information they learn from. Many students come to us pretty well able to do this already; they need only to realize that they don't need us!

With most universities structured more around courses than one-on-one tutorials, I don't get to see the process through with every student I teach. One of the great joys is to have the chance to work with the same student many times over the years, through multiple courses and one-on-one through projects and research.

In any case, I think it's healthy for teachers to approach their jobs from the perspective of working themselves out of a job. Not to worry; there is always another class of students coming along.

Of course, universities as we know them may be disappearing. But the teachers among us will always find people who want to learn.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

November 16, 2011 2:38 PM

Memories of a Programming Assignment

Last Friday afternoon, the CS faculty met with the department's advisory board. The most recent addition to the board is an alumnus who graduated a decade ago or so. At one point in the meeting, he said that a particular programming project from his undergraduate days stands out in his mind after all these years. It was a series of assignments from my Object-Oriented Programming course called Cat and Mouse.

I can't take credit for the basic assignment. I got the idea from Mike Clancy in the mid 1990s. (He later presented his version of the assignment on the original Nifty Assignments panel at SIGCSE 1999.) The problem includes so many cool features, from coordinate systems to stopping conditions. Whatever one's programming style or language, it is a fun challenge. When done OO in Java, with great support for graphics, it is even more fun.

But those properties aren't what he remembers best about the project. He recalls that the project took place over several weeks and that each week, I changed the requirements of the assignment. Sometimes, I added a new feature. Other times, I generalized an existing feature.

What stands out in his mind after all these years is getting the new assignment each week, going home, reading it, and thinking,

@#$%#^. I have to rewrite my entire program.

You see, he had hard-coded assumptions throughout his code. Concerns were commingled, not separated. Objects were buried inside larger pieces of code. Model was tangled up with view.

So, he started from scratch. Over the course of several weeks, he built an object-oriented system. He came to understand dynamic polymorphism and patterns such as MVC and decorator, and found ways to use them effectively in his code.
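The patterns he discovered are small enough to sketch. Here is a hypothetical illustration in Python (the `Mouse` and `TracingMouse` classes are my own, not code from the actual assignment) of the decorator idea: a new feature wraps an existing object instead of being hard-coded into it, so a changed requirement doesn't force a rewrite.

```python
# A sketch of the decorator pattern: behavior is added by wrapping,
# not by editing the original class.
class Mouse:
    def move(self):
        return "mouse moves one step"

class TracingMouse:
    """Decorator: wraps any mouse-like object and adds a movement log."""
    def __init__(self, wrapped):
        self.wrapped = wrapped
        self.log = []

    def move(self):
        result = self.wrapped.move()   # delegate to the wrapped object
        self.log.append(result)        # the added behavior
        return result

m = TracingMouse(Mouse())
m.move()
m.move()
print(len(m.log))   # prints 2 -- Mouse itself never changed
```

When next week's assignment demands a new behavior, it becomes a new wrapper rather than a rewrite of the whole program.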

He remembers the dread, but also that this experience helped him learn how to write software.

I never know exactly what part of what I'm doing in class will stick most with students. From semester to semester and student to student, it probably varies quite a bit. But the experience of growing a piece of software over time in the face of growing and changing requirements is almost always a powerful teacher.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

November 12, 2011 10:40 AM

Tools, Software Development, and Teaching

Last week, Bret Victor published a provocative essay on the future of interaction design that reminds us we should be more ambitious in our vision of human-computer interaction. I think it also reminds us that we can and should be more ambitious in our vision of most of our pursuits.

I couldn't help but think of how Victor's particular argument applies to software development. First he defines "tool":

Before we think about how we should interact with our Tools Of The Future, let's consider what a tool is in the first place.

I like this definition: A tool addresses human needs by amplifying human capabilities.

That is, a tool converts what we can do into what we want to do. A great tool is designed to fit both sides.

The key point of the essay is that our hands have much more consequential capabilities than our current interfaces use. They feel. They participate with our brains in numerous tactile assessments of the objects we hold and manipulate: "texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them". Indeed, this tactile sense is more powerful than the touch-and-slide interfaces we have now and, in many ways, is more powerful than even sight. These tactile senses are real, not metaphorical.

As I read the essay, I thought of the software tools we use, from language to text editors to development processes. When I am working on a program, especially a big one, I feel much more than I see. At various times, I experience discomfort, dread, relief, and joy.

Some of my colleagues tell me that these "feelings" are metaphorical, but I don't think so. A big part of my affinity for so-called agile approaches is how these sensations come into play. When I am afraid to change the code, it often means that I need to write more or better unit tests. When I am reluctant to add a new feature, it often means that I need to refactor the code to be more hospitable. When I come across a "code smell", I need to clean up, even if I only have time for a small fix. YAGNI and doing the simplest thing that can possibly work are ways that I feel my way along the path to a more complete program, staying in tune with the code as I go. Pair programming is a social practice that engages more of my mind than programming alone.
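That connection between fear and tests is concrete. Here is a minimal sketch, using Python's unittest on a hypothetical `price_after_discount` function (both the function and the tests are my own illustration), of the kind of safety net that makes changing code less frightening:

```python
import unittest

# A hypothetical function we might be afraid to change...
def price_after_discount(price, rate):
    return round(price * (1 - rate), 2)

# ...until tests pin down its current behavior.
class TestDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(price_after_discount(100.0, 0.25), 75.0)

    def test_zero_rate_is_identity(self):
        self.assertEqual(price_after_discount(19.99, 0.0), 19.99)

# run the suite programmatically so the safety net is explicit
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # prints True
```

With tests like these in place, refactoring stops feeling like walking a tightrope without a net.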

Victor closes with some inspiration for inspiration:

In 1968 -- three years before the invention of the microprocessor -- Alan Kay stumbled across Don Bitzer's early flat-panel display. Its resolution was 16 pixels by 16 pixels -- an impressive improvement over their earlier 4 pixel by 4 pixel display.

Alan saw those 256 glowing orange squares, and he went home, and he picked up a pen, and he drew a picture of a goddamn iPad.

We can think bigger about so much of what we do. The challenge I take from Victor's essay is to think about the tools I use to teach: what needs do they fulfill, and how well do they amplify my own capabilities? Just as important are the tools we give our students as they learn: what needs do they fulfill, and how well do they amplify our students' capabilities?

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

November 05, 2011 10:09 AM

Is Computing Too Hard, Too Foreign, or Too Disconnected?

A lot of people are discussing a piece published in the New York Times piece yesterday, Why Science Majors Change Their Minds (It's Just So Darn Hard). It considers many factors that may be contributing to the phenomenon, such as low grades and insufficient work habits.

Grades are typically much lower in STEM departments, and students aren't used to getting those kinds of marks. Ben Deaton argues that this sort of rigor is essential, quoting his undergrad steel design prof: "If you earn an A in this course, I am giving you a license to kill." Still, many students think that a low grade -- even a B! -- is a sign that they are not suited for the major, or for the specialty area. (I've had students drop their specialty in AI after getting a B in the foundations course.)

Most of the students who drop under such circumstances are more than capable of succeeding. Unfortunately, they have not usually developed the disciplined work habits they need to succeed in such challenging majors. It's a lot easier to switch to a different major where their current skills suffice.

I think there are two more important factors at play. On the first, the Times article paraphrases Peter Kilpatrick, Notre Dame's Dean of Engineering:

... it's inevitable that students will be lost. Some new students do not have a good feel for how deeply technical engineering is.

In computer science, our challenge is even bigger: most HS students don't have any clue at all what computer science is. My university is nearing the end of its fall visit days for prospective students, who are in the process of choosing a college and a major. The most common question I am asked is, "What is computer science?", or its cousin, "What do computer scientists do?". This question comes from even the brightest students, ones already considering math or physics. Even more students walk by the CS table with their parents with blank looks on their faces. I'm sure some are thinking, "Why consider a major I have no clue about?"

This issue also plagues students who decide to major in CS and then change their minds, which is the topic of the Times article. Students begin the major not really knowing what CS is, they find out that they don't like it as much as they thought they might, and they change. Given what they know coming into the university, it really is inevitable that a lot of students will start and leave CS before finishing.

On the second factor I think most important, here is the money paragraph from the Times piece:

But as Mr. Moniz (a student exceedingly well prepared to study engineering) sat in his mechanics class in 2009, he realized he had already had enough. "I was trying to memorize equations, and engineering's all about the application, which they really didn't teach too well," he says. "It was just like, 'Do these practice problems, then you're on your own.'" And as he looked ahead at the curriculum, he did not see much relief on the horizon.

I have written many times here about the importance of building instruction around problems, beginning with Problems Are The Thing. Students like to work on problems, especially problems that matter to someone in the world. Taken to the next level, as many engineering schools are trying to do, courses should -- whenever possible -- be built around projects. Projects ground theory and motivate students, who will put in a surprising effort on a project they care about or think matters in the world. Projects are also often the best way to help students understand why they are working so hard to memorize and practice tough material.

In closing, I can take heart that schools like mine are doing a better job retaining majors:

But if you take two students who have the same high school grade-point average and SAT scores, and you put one in a highly selective school like Berkeley and the other in a school with lower average scores like Cal State, that Berkeley student is at least 13 percent less likely than the one at Cal State to finish a STEM degree.

Schools like mine tend to teach less abstractly than our research-focused sister schools. We tend to provide more opportunities early in the curriculum to work on projects and to do real research with professors. I think the other public universities in my state do a good job, but if a student is considering an undergrad STEM major, they will be much better served at my university.

There is one more reason for the better retention rate at the "less selective" schools: pressure. The students at the more selective schools are likely to be more competitive about grades and success than the students at the less selective schools. This creates an environment less conducive to learning for most students. In my department, we try not to "treat the freshman year as a 'sink or swim' experience and accept attrition as inevitable" for reasons of Darwinian competition. As the Times article says, this is both unfair to students and wasteful of resources.

By changing our curricula and focusing more on student learning than on how we want to teach, universities can address the problem of motivation and relevance. But that will leave us with the problem of students not really knowing what CS or engineering are, or just how technical and rigorous they need to be. This is an education problem of another sort, one situated in the broader population and in our HS students. We need to find ways to both share the thrill and help more people see just what the STEM disciplines are and what they entail.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 27, 2011 6:44 PM

A Perfect Place to Cultivate an Obsession

And I urge you to please notice when you are happy,
and exclaim or murmur or think at some point,
"If this isn't nice, I don't know what is."
-- Kurt Vonnegut, Jr.

I spent the entire day teaching and preparing to teach, including writing some very satisfying code. It was a good way to spend a birthday.

With so much attention devoted to watching my students learn, I found myself thinking consciously about my teaching and also about some learning I have been doing lately, including remembering how to write idiomatic Ruby. Many of my students really want to be able to write solid, idiomatic Scheme programs to process little languages. I see them struggle with the gap between their desire and their ability. It brought to mind something poet Mary Jo Bang said in a recent interview about her long effort to become a good writer:

For a long time the desire to write and knowing how to write well remained two separate things. I recognized good writing when I saw it but I didn't know how to create it.

I do all I can to give students examples of good programs from which they can learn, and also to help them with the process of good programming. In the end, the only way to close the gap is to write a lot of code. Writing deliberately and reflectively can shorten the path.
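The "little languages" in question are small enough that even a toy version shows the shape of the work. Here is a sketch in Python rather than Scheme (the tuple representation and the `evaluate` function are my own illustration, not course code) of an evaluator for a tiny arithmetic language:

```python
# A toy evaluator for a little arithmetic language.
# Expressions are nested tuples: ("+", e1, e2), ("*", e1, e2), or a number.
def evaluate(expr):
    if isinstance(expr, (int, float)):      # a number evaluates to itself
        return expr
    op, left, right = expr
    if op == "+":
        return evaluate(left) + evaluate(right)
    if op == "*":
        return evaluate(left) * evaluate(right)
    raise ValueError(f"unknown operator: {op}")

# (2 + 3) * 4
program = ("*", ("+", 2, 3), 4)
print(evaluate(program))   # prints 20 -- structural recursion does the work
```

Writing and rewriting small interpreters like this, deliberately and reflectively, is exactly the kind of practice that closes the gap between desire and ability.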

Bang sees the same in her students:

Industriousness can compensate for a lot. And industry plus imagination is a very promising combination.

Hard work is the one variable we all control while learning something new. Some of us are blessed with more natural capacity to imagine, but I think we can stretch our imaginations with practice. Some CS students think that they are learning to "engineer" software, a cold, calculating process. But imagination plays a huge role in understanding difficult, abstract problems.

Together, industry and time eventually close the gap between desire and ability:

And I saw how, if you steadily worked at something, what you don't know gradually erodes and what you do know slowly grows and at some point you've gained a degree of mastery. What you know becomes what you are. You know photography and you are a photographer. You know writing and you are a writer.

... You know programming, and you are a programmer.

Erosion and growth can be slow processes. As time passes, we sometimes find our learning accelerates, a sort of negative splits for mental exercise.

We work hardest when we are passionate about what we do. It's hard for homework assigned in school to arouse passion, but many of us professors do what we can. The best way to have passion is to pick the thing you want to do. Many of my best students have had a passion for something and then found ways to focus their energies on assigned work in the interest of learning the skills and gaining the experience they need to fulfill their passion.

One last passage from Bang captures perfectly for me what educators should strive to make "school":

It was the perfect place to cultivate an obsession that has lasted until the present.

As a teacher, I see a large gap between my desire to create the perfect place to cultivate an obsession and my ability to deliver. For now, the desire and the ability remain two separate things. I recognize good learning experiences when I see them, and occasionally I stumble into creating one, but I don't yet know how to create them reliably.

Hard work and imagination... I'll keep at it.

If this isn't nice, I don't know what is.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 24, 2011 7:38 PM

Simple/Complex Versus Easy/Hard

A few years ago, I heard a deacon give a rather compelling talk to a group of college students on campus. When confronted with a recommended way to live or act, students will often say that living or acting that way is hard. These same students are frustrated with the people who recommend that way of living or acting, because the recommenders -- often their parents or teachers -- act as if it is easy to live or act that way. The deacon told the students that their parents and teachers don't think it is easy, but they might well think it is simple.

How can this be? The students were confounding "simple" and "easy". A lot of times, life is simple, because we know what we should do. But that does not make life easy, because doing a simple thing may be quite difficult.

This made an impression on me, because I recognized that conflict in my own life. Often, I know just what to do. That part is simple. Yet I don't really want to do it. To do it requires sacrifice or pain, at least in the short term. To do it means not doing something else, and I am not ready or willing to forego that something. That part is difficult.

Switch the verb from "do" to "be", and the conflict becomes even harder to reconcile. I may know what I want to be. However, the gap between who I am and who I want to be may be quite large. Do I really want to do what it takes to get there? There may be a lot of steps to take which individually are difficult. The knowing is simple, but the doing is hard.

This gap surely faces college students, too, whether it means wanting to get better grades, wanting to live a healthier life, or wanting to reach a specific ambitious goal.

When I heard the deacon's story, I immediately thought of some of my friends, who like very much the idea of being a "writer" or a "programmer", but they don't really want to do the hard work that is writing or programming. Too much work, too much disappointment. I thought of myself, too. We all face this conflict in all aspects of life, not just as it relates to personal choices and values. I see it in my teaching and learning. I see it in building software.

I thought of this old story today when I watched Rich Hickey's talk from StrangeLoop 2011, Simple Made Easy. I had put off watching this for a few days, after tiring of a big fuss that blew up a few weeks ago over Hickey's purported views about agile software development techniques. I knew, though, that the dust-up was about more than just Hickey's talk, and several of my friends recommended it strongly. So today I watched. I'm glad I did; it is a good talk. I recommend it to you!

Based only on what I heard in this talk, I would guess that Hickey misunderstands the key ideas behind XP's practices of test-driven development and refactoring. But this could well be a product of how some agilistas talk about them. Proponents of agile and XP need to be careful not to imply that tests and refactoring make change or any other part of software development easy. They don't. The programmer still has to understand the domain and be able to think deeply about the code.

Fortunately, I don't base what I think about XP practices on what other people think, even if they are people I admire for other reasons. And if you can skip or ignore any references Hickey makes to "tests as guard rails" or to statements that imply refactoring is debugging, I think you will find this really is a very good talk.

Hickey's important point is that simple/complex and easy/hard are different dimensions. Simplicity should be our goal when writing code, not complexity. Doing something that is hard should be our goal when it makes us better, especially when it makes us better able to create simplicity.

Simplicity and complexity are about the interconnectedness of a system. In this dimension, we can imagine objective measures. Ease and difficulty are about what is most readily at hand, what is most familiar. Defined as they are in terms of a person's experience or environment, this dimension is almost entirely subjective.

And that is good because, as Hickey says a couple of times in the talk, "You can solve the familiarity problem for yourself." We are not limited to our previous experience or our current environment; we can take on a difficult challenge and grow.

Alan Kay often talks about how it is worth learning to play a musical instrument, even though playing is difficult, at least at the start. Without that skill, our ability to "make music" is limited to turning on the radio or firing up YouTube. With it, we can make music ourselves. Likewise riding a bicycle versus walking, or learning to fly an airplane versus learning to drive a car. None of these skills is necessarily difficult once we learn them, and they enable new kinds of behaviors that can be simple or complex in their own right.

One of the things I try to help my students see is the value in learning a new, seemingly more difficult language: it empowers us to think new and different thoughts. Likewise making the move from imperative procedural style to OOP or to functional programming. Doing so stretches us. We think and program differently afterward. A bonus is that something that seemed difficult before is now less daunting. We are able to work more effectively in a bigger world.

In retrospect, what Hickey says about simplicity and complexity is actually quite compatible with the key principles of XP and other agile methods. Writing tests is a part of how we create systems that are as simple as we can in the local neighborhood of a new feature. Tests can also help us to recognize complexity as it seeps into our program, though they are not enough by themselves to help us see complexity. Refactoring is an essential part of how we eliminate complexity by improving design globally. Refactoring in the presence of unit tests does not make programming easy. It doesn't replace thinking about design; indeed, it is thinking about design. Unit tests and refactoring do help us to grapple with complexity in our code.

Also in retrospect, I gotta make sure I get down to St. Louis for StrangeLoop 2012. I missed the energy this year.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

October 13, 2011 3:10 PM

Learning and New Kinds of Problems

I recently passed this classic by Reg Braithwaite to a grad student who is reading in the areas of functional programming and Ruby. I love how Braithwaite prefaces the technical content of the entry with an exhortation to learners:

... to obtain the deepest benefit from learning a new language, you must learn to think in the new language, not just learn to translate your favourite programming language syntax and idioms into it.

The more different the thing you are learning from what you already know, the more important this advice. You are already good at solving the problems your current languages solve well!

And worse, when a new tool is applied to a problem you think you know well, you will probably dismiss the things the new tool does well. Look at how many people dismiss brevity of code. Note that all of the people who ignore the statistics about the constant ratio between bugs and lines of code use verbose languages. Look at how many people dismiss continuation-based servers as a design approach. Note that all of them use programming languages bereft of control flow abstractions.

Real programmers know Y.

This is great advice for people trying to learn functional programming, which is all the rage these days. Many people come to a language like Scheme, find it lacking for the problems they have been solving in Python, C, and Java, and assume something is wrong with Scheme, or with functional programming more generally. It's easy to forget that the languages you know and the problems you solve are usually connected in a variety of ways, not the least of which for university students is that we teach them to solve problems most easily solved by the languages we teach them!

If you keep working on the problems your current language solves well, then you miss out on the strengths of something different. You need to stretch not only your skill set but also your imagination.

If you buy this argument, schedule some time to work through Braithwaite's derivation of the Y combinator in Ruby. It will, as my daughter likes to say, make your brain hurt. That's a good thing. Just like with physical exercise, sometimes we need to stretch our minds, and make them hurt a bit, on the way to making them stronger.
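To give a flavor of where Braithwaite's derivation lands, here is a minimal sketch in Python rather than his Ruby; the names are mine, not his. The applicative-order fixed-point combinator (often called Z) lets us define a recursive function without the function ever referring to itself by name:

```python
# A sketch of the applicative-order fixed-point combinator (the Z
# combinator), the idea at the heart of Braithwaite's derivation.
# Python stands in for his Ruby here; the names are illustrative.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Z lets us write factorial with no self-reference in its body:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # → 120
```

The inner eta-expansion, `lambda v: x(x)(v)`, is what keeps an eager language from looping forever; that delay is the crux of the derivation.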

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 10, 2011 2:56 PM

Making Your Own Mistakes

Earlier today, @johndmitchell retweeted a link from Tara "Miss Rogue" Hunt:

RT @missrogue: My presentation from this morning at #ennovation: The 10 Mistakes I've Made So You Don't Have To

I liked the title and so followed the link to the slide deck. The talk includes a few good quotes and communicates some solid experience on how to fail as a start-up, and also how to succeed. I was glad to have read.

The title notwithstanding, though, be prepared. Other people making mistakes will not -- cannot -- save you from making the same mistakes. You'll have to make them yourself.

There are certain kinds of mistakes that don't need to be made again, but that happens when we eliminate an entire class of problems. As a programmer, I mostly don't have to re-make the mistakes my forebears made when writing code in assembly. They learned from their mistakes and made tools that shield me from the problems I faced. Now, I write code in a higher-level language and let the tools implement the right solution for me.

Of course, that means I face a new class of problems, or an old class of problems in a new way. So I make new kinds of mistakes. In the case of assembly and compilers, I am more comfortable working at that level and am thus glad to have been shielded from those old error traps, by the pioneers who preceded me.

Starting a start-up isn't the sort of problem we are able to bypass so easily. Collectively, we aren't good at all at reliably creating successful start-ups. Because the challenges involve other people and economic forces, they will likely remain challenges well into our future.

Even though Hunt and other people who have tried and failed at start-ups can't save us from making these mistakes, they still do us a service when they reflect on their experiences and share with us. They put up guideposts that say "Danger ahead!" and "Don't go there!"

Why isn't that enough to save us? We may miss the signs in the noise of our world and walk into the thicket on our own. We may see the warning sign, think "My situation is different...", and proceed anyway. We may heed their advice, do everything we can to avoid the pitfall, and fail anyway. Perhaps we misunderstood the signs. Perhaps we aren't smart enough yet to solve the problem. Perhaps no one is, yet. Sometimes, we won't be until we have made the mistake once ourselves -- or thrice.

Despite this, it is valuable to read about our forebears' experiences. Perhaps we will recognize the problem part of the way in and realize that we need to turn around before going any farther. Knowing other people's experiences can leave us better prepared not to go too far down into the abyss. A mistake partially made is often better than a mistake made all the way.

If nothing else, we fail and are better able to recognize our mistake after we have made it. Other people's experience can help us put our own mistake into context. We may be able to understand the problem and solution better by bringing those other experiences to bear on our own experience.

While I know that we have to make mistakes to learn, I don't romanticize failure. We should take reasonable measures to avoid problems and to recognize them as soon as possible. That's the ultimate value in learning what Hunt and other people can teach us.

Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

October 06, 2011 3:21 PM

Programming != Teaching

A few weeks ago I wrote a few entries that made connections to Roger Rosenblatt's Unless It Moves the Human Heart: The Craft and Art of Writing. As I am prone to doing, I found a lot of connections between writing, as described by Rosenblatt, and programming. I also saw connections between teaching of writers and teaching of programmers. The most recent entry in that series highlighted how teachers want their students to learn how to think the same way, not how to write the same way.

Rosenblatt also occasionally explores similarities between writing and teaching. Toward the end of the book, he points out a very important difference between the two:

Wouldn't it be nice if you knew that your teaching had shape and unity, and that when a semester came to an end, you could see that every individual thing you said had coalesced into one overarching statement? But who knows? I liken teaching to writing, but the two enterprises diverge here, because any perception of a grand scheme depends on what the students pick up. You may intend a lovely consistency in what you're tossing them, but they still have to catch it. In fact, I do see unity to my teaching. What they see, I have no clue. It probably doesn't matter if they accept the parts without the whole. A few things are learned, and my wish for more may be plain vanity.

Novelists, poets, and essayists can achieve closure and create a particular whole. Their raw material is words and ideas, which the writer can make dance. The writer can have an overarching statement in mind, and making it real is just a matter of hard work and time.

Programmers have that sort of control over their raw material, too. As a programmer, I relish taking on the challenge of a hard problem and creating a solution that meets the needs of a person. If I have a goal for a program, I can almost always make it happen. I like that.

Teachers may have a grand scheme in mind, too, but they have no reliable way of making sure that their scheme comes true. Their raw material consists of more than words and ideas. Indeed, their most important raw material, their most unpredictable raw material, is students. Try as they might, teachers don't control what students do, learn, or think.

I am acutely aware of this thought as we wrap up the first half of our programming languages course. I have introduced students to functional programming and recursive programming techniques. I have a pretty good idea what I hope they know and can do now, but that scheme remains in my head.

Rosenblatt is right. It is vanity for us teachers to expect students to learn exactly what we want for them. It's okay if they don't. Our job is to do what we can to help them grow. After that, we have to step aside and let them run.

Students will create their own wholes. They will assemble their wholes from the parts they catch from us, but also from parts they catch everywhere else. This is a good thing, because the world has a lot more to teach than I can teach them on my own. Recognizing this makes it a lot easier for me as a teacher to do the best I can to help them grow and then get out of their way.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 04, 2011 4:43 PM

Programming in Context: Digital History

Last April I mentioned The Programming Historian, a textbook aimed at a specific set of non-programmers who want or need to learn how to program in order to do their job in the digital age. I was browsing through the textbook today and came across a paragraph that applies to more than just historians or so-called applied programmers:

Many books about programming fall into one of two categories: (1) books about particular programming languages, and (2) books about computer science that demonstrate abstract ideas using a particular programming language. When you're first getting started, it's easy to lose patience with both of these kinds of books. On the one hand, a systematic tour of the features of a given language and the style(s) of programming that it supports can seem rather remote from the tasks that you'd like to accomplish. On the other hand, you may find it hard to see how the abstractions of computer science are related to your specific application.

I don't think this feeling is limited to people with a specific job to do, like historians or economists. Students who come to the university intending to major in Computer Science lose patience with many of our CS1 textbooks and CS1 courses for the very same reasons. Focusing too much on all the features of a language is overkill when you are just trying to make something work. The abstractions we throw at them don't have a home in their understanding of programming or CS yet and so seem, well, too abstract.

Writing for the aspiring applied programmer has an advantage over writing for CS1: your readers have something specific they want to do, and they know just what it is. Turkel and MacEachern can teach a subset of several tools, including Python and Javascript, focused on what historians want to be able to do. Greg Wilson and his colleagues can teach what scientists want and need to know, even if the book is pitched more broadly.

In CS1, your students don't have a specific task in mind and do eventually need to take a systematic tour of a language's features and to learn a programming style or three. They do, eventually, need to learn a set of abstractions and make sense of them in the context of several languages. But when they start, they are much like any other person learning to program: they would like to do something that matters. The problems we ask them to solve matter.

Guzdial, Ericson, and their colleagues have used media computation as a context in which to learn how to program, with the idea that many students, CS majors and non-majors alike, can be enticed to manipulate images, sounds, and video, the raw materials out of which students' digital lives are now constructed. It's not quite the same -- students still need to be enticed, rather than starting with their own motivation -- but it's a shorter leap to caring than the run-of-the-mill CS textbook has to make.

Some faculty argue that we need a CS0 course that all students take, in which they can learn basic programming skills in a selected context before moving on to the major's first course. The context can be general enough, say, media manipulation or simple text processing on the web, that the tools students learn will be useful after the course whether they continue on or not. Students who elect to major in CS move on to take a systematic tour of a language's features, to learn about OO or FP style, and to begin learning the abstractions of the discipline.

My university used to follow this approach, back in the early and mid 1990s. Students had to take a one-year HS programming course or a one-semester programming course at the university before taking CS1. We dropped this requirement when faculty began asking, why shouldn't we put the same care into teaching low-level programming skills in CS1 as we do into teaching CS0? The new approach hasn't always been as successful as we hoped, due to the difficulty of finding contexts that motivate students as well as we want, but I think the approach is fundamentally sound. It means that CS1 may not teach all the things that it did when the course had a prerequisite.

That said, students who take one of our non-majors programming courses, in C and Visual Basic, and then decide to major in CS perform better on average in CS1 than students who come in fresh. We have work to do.

Finally, one sentence from The Programming Historian made me smile. It embodies the "programming for all" theme that permeates this blog:

Programming is for digital historians what sketching is for artists or architects: a mode of creative expression and a means of exploration.

I once said that being able to program is like having superhuman strength. But it is both more mundane and more magical than that. For digital historians, being able to program means being able to do the mundane, everyday tasks of manipulating text. It also gives digital historians a way to express themselves creatively and to explore ideas in ways hard to imagine otherwise.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

October 03, 2011 8:11 AM

What to Build and How to Build

Update: This entry originally appeared on September 29. I bungled my blog directory and lost two posts, and the simplest way to get the content back on-line is to repost.

I remember back in the late 1990s and early 2000s, when patterns were still a hot topic in the software world and many pattern writers were trying to make the conceptual move to pattern languages. It was a fun time to talk about software design. At some point, there was a long and illuminating discussion on the patterns mailing list about whether patterns should describe what to build or how to build. Richard Gabriel and Ron Goldman -- creators of the marvelous essay-as-performance-art Mob Software -- patiently taught the community that the ultimate goal is what. Of course, if we move to a higher level of abstraction, a what-pattern becomes a how-pattern. But the most valuable pattern languages teach us what to build and when, with some freedom in the how.

This is the real challenge that novice programmers face, in courses like CS1 or in self-education: figuring out what to build. It is easy enough for many students to "get" the syntax of the programming language they are learning. Knowing when to use a loop, or a procedure, or a class -- that's the bigger challenge.

Our CS students are usually in the same situation even later in their studies. They are still learning what to build, even as we teach them new libraries, new languages, and new styles.

I see this a lot in students who are learning to program in a functional style. Mentally, many think they are focused on the how (e.g., How do I write this in Scheme?). But when we probe deeper, we usually find that they are really struggling with what to say. We spend some time talking about the problem, and they begin to see more clearly what they are trying to accomplish. Suddenly, writing the code becomes a lot easier, if not downright easy.

This is one of the things I really respect in the How to Design Programs curriculum. Its design recipes give beginning students a detailed, complete, repeatable process for thinking about problems and figuring out what they need in order to solve a new one. Data, contracts, and examples are essential elements in understanding what to build. Template solutions help bridge the what and the how, but even they are, at the student's current level of abstraction, more about what than how.

The structural recursion patterns I use in my course are an attempt to help students think about what to build. The how usually follows directly from that. As students become fluent in their functional programming language, the how is almost incidental.
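As a concrete illustration (sketched in Python rather than the Scheme we use in class; sum_list is my name, not a pattern from the course), structural recursion follows the shape of the data: a flat list is either empty or an item followed by a rest, so the function has exactly those two cases. Knowing what to build, one case per part of the data definition, makes the how nearly automatic:

```python
# Structural recursion over a flat list: the code mirrors the data
# definition. A list is either empty or an item plus a rest, so the
# function has exactly those two cases.
def sum_list(items):
    if not items:                          # base case: the empty list
        return 0
    return items[0] + sum_list(items[1:])  # first item + recurse on the rest

print(sum_list([1, 2, 3, 4]))  # → 10
```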

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

October 03, 2011 7:20 AM

Softmax, Recursion, and Higher-Order Procedures

Update: This entry originally appeared on September 28. I bungled my blog directory and lost two posts, and the simplest way to get the content back on-line is to repost.

John Cook recently reported that he has bundled up some of his earlier writings about the soft maximum as a tech report. The soft maximum is "a smooth approximation to the maximum of two real variables":

    softmax(x, y) = log(exp(x) + exp(y))

When John posted his first blog entry about the softmax, I grabbed the idea and made it a homework problem for my students, who were writing their first Scheme procedures. I gave them a link to John's page, so they had access to this basic formula as well as a Python implementation of it. That was fine with me, because I was simply trying to help students become more comfortable using Scheme's unusual syntax:

    (define softmax
      (lambda (x y)
        (log (+ (exp x)
                (exp y)))))

On the next assignment, I asked students to generalize the definition of softmax to more than two variables. This gave them an opportunity to write a variable arity procedure in Scheme. At that point, they had seen only a couple simple examples of variable arity, such as this implementation of addition using a binary + operator:

    (define plus              ;; notice: no parentheses around
      (lambda args            ;; the args parameter in lambda
        (if (null? args)
            0
            (+ (car args) (apply plus (cdr args))))))

Many students followed this pattern directly for softmax:

    (define softmax-var
      (lambda args
        (if (null? (cdr args))
            (car args)
            (softmax (car args)
                     (apply softmax-var (cdr args))))))

Some of their friends tried a different approach. They saw that they could use higher-order procedures to solve the problem -- without explicitly using recursion:

    (define softmax-var
      (lambda args
        (log (apply + (map exp args)))))

When students saw each other's solutions, they wondered -- as students often do -- which one is correct?

John's original blog post on the softmax tells us that the function generalizes as we might expect:

    softmax(x1, x2, ..., xn) = log(exp(x1) + exp(x2) + ... + exp(xn))

Not many students had looked back for that formula, I think, but we can see that it matches the higher-order softmax almost perfectly. (map exp args) constructs a list of the exp(xi) values. (apply + ...) adds them up. (log ...) produces the final answer.

What about the recursive solution? If we look at how its recursive calls unfold, we see that this procedure computes:

    softmax(x1, softmax(x2, ..., softmax(xn-1, xn)...))

This is an interesting take on the idea of a soft maximum, but it is not what John's generalized definition says, nor is it particularly faithful to the original 2-argument function.

How might we roll our own recursive solution that computes the generalized function faithfully? The key is to realize that the function needs to iterate not over the maximizing behavior but the summing behavior. So we might write:

    (define softmax-var
      (lambda args
        (log (accumulate-exps args))))

    (define accumulate-exps
      (lambda (args)
        (if (null? args)
            0
            (+ (exp (car args))
               (accumulate-exps (cdr args))))))

This solution turns softmax-var into an interface procedure and then uses structural recursion over a flat list of arguments. One advantage of using an interface procedure is that the recursive procedure accumulate-exps no longer has to deal with variable arity, as it receives a list of arguments.

It was remarkable to me and some of my students just how close the answers produced by the two student implementations of softmax were, given how different the underlying behaviors are. Often, the answers were identical. When different, they differed only in the 12th or 15th decimal digit. As several blog readers pointed out, softmax is associative, so the two solutions are identical mathematically. The differences in the values of the functions result from the vagaries of floating-point precision.
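A quick check in Python (echoing the Python implementation John linked; the function names here are mine) shows both behaviors side by side:

```python
import math

def softmax2(x, y):
    # two-argument soft maximum: log(e^x + e^y)
    return math.log(math.exp(x) + math.exp(y))

def softmax_fold(*args):
    # nested application, like the students' recursive Scheme solution:
    # softmax(x1, softmax(x2, ..., softmax(xn-1, xn)...))
    result = args[-1]
    for x in reversed(args[:-1]):
        result = softmax2(x, result)
    return result

def softmax_n(*args):
    # the generalized definition: log(e^x1 + e^x2 + ... + e^xn)
    return math.log(sum(math.exp(x) for x in args))

a = softmax_fold(1.0, 2.0, 3.0, 4.0)
b = softmax_n(1.0, 2.0, 3.0, 4.0)
print(abs(a - b))  # tiny: the two agree up to floating-point error
```

Associativity guarantees the two are equal mathematically; the printed difference comes entirely from the order in which the floating-point operations are performed.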

The programmer in me left the exercise impressed by the smoothness of the soft maximum. The idea is resilient across multiple implementations, which makes it seem all the more useful to me.

More important, though, this programming exercise led to several interesting discussions with students about programming techniques, higher-order procedures, and the importance of implementing solutions that are faithful to the problem domain. The teacher in me left the exercise pleased.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Software Development, Teaching and Learning

September 27, 2011 8:08 PM

Following Through: A Contest Rewarding Epic Failure

As promised back in May, I am offering a prize for the best failed idea of the semester in my programming languages course. We are now about halfway through the second unit of the course, on recursive programming techniques. Students finally have tools powerful enough to solve interesting problems, but also tools powerful enough to go spectacularly astray. I want to encourage students to trust their ideas enough to fail, because that's how they will learn to trust their ideas enough to succeed. Rather than fear those inevitable failures, we will have fun with them.

Feel free to check out the page describing the contest and how to participate. As always, I welcome feedback from my readers.

In particular, I am still looking for a catchy, evocative, culturally relevant, funny name for the prize. If this were the late 1980s or early 1990s, I might name it for the film Ishtar, and everyone would know what I meant. For now, I have opted for a more recent high-profile film flop -- a spectacular loser on the balance sheet -- but it doesn't have the same pop for me. I'm not as hip as I used to be and can use some help.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 24, 2011 8:04 PM

Much Code To Study, Learning with Students

I mentioned last time that I've been spending some time with Norvig's work on Strachey's checkers program in CPL. This is fun stuff that can be used in my programming languages course. But it isn't the only new stuff I've been learning. When you work with students on research projects and independent studies, opportunities to learn await at every turn.

A grad student is taking the undergrad programming languages course and so has to do some extra projects to earn his grad credit. He is a lover of Ruby and has been looking at a couple of Scheme interpreters implemented in Ruby, Heist and Bus-Scheme. I'm not sure where this will lead yet, but that is part of the exercise. The undergrad who faced the "refactor or rewrite?" decision a few weeks ago teaches me something new every week, not only through his experiences writing a language processor but also about his program's source and target languages, Photoshop and HTML/CSS.

Another grad student is working on a web application and teaching me other things about Javascript. Now we are expanding into one tool I've long wanted to study in greater detail, Processing.js, and perhaps into another I only just learned of from Dave Humphrey, a beautiful little data presentation library called D3.

And as if that weren't enough, someone tweets that Avdi Grimm is sharing his code and notes as he implements Smalltalk Best Practice Patterns in Ruby. Awesome. This Avdi guy is rapidly becoming one of my heroes.

All of these projects are good news. One of the great advantages of working at a university is working with students and learning along with them. Right now, I have a lot on my plate. It's daunting but fun.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 23, 2011 3:52 PM

Grading and Learning in the Age of Social Media

Yesterday morning, I was grading the first quiz from my programming languages course and was so surprised by the responses to the first short-answer question that I tweeted in faux despair:

Wow. We discussed a particular something every day in class for three weeks. Student quiz answers give no evidence of this.

Colleagues around the globe consoled me and commiserated. But I forgot that I am also followed by several of my students, and their reaction was more like... panic. Even though I soon followed up with a tweet saying that their Scheme code made me happy, they were alarmed about that first tweet.

It's a new world. I never used to grade with my students watching or to think out loud while I was grading. Twitter changes that, unless I change me and stop using Twitter. On balance, I think I'm still better off. When I got to class, students all had smiles on their faces, some more nervous than others. We chatted. I did my best to calm them. We made a good start on the day with them all focused on the course.

We have reached the end of Week 5, one-third of the way through the course. Despite the alarm I set off in students' minds, they have performed on par with students in recent offerings over the first three homeworks and the first quiz. At this point, I am more concerned with my performance than theirs. After class yesterday, I was keenly aware of the pace of the session being out of sync with the learning curve of the material. The places where I've been slowing down aren't always the best places to slow down, and the places where I've been speeding up (whether intentionally or unintentionally) aren't always the best places to speed up. A chat with one student that afternoon cemented my impression.

Even with years of experience, teaching is hard to get right. One shortcoming of teaching a course only every third semester is that the turnaround time on improvements is so long. What I need to do is use my realization to improve the rest of this offering, first of all this unit on recursive programming.

I spent some time early this week digging into Peter Norvig's Prescient but Not Perfect, a reconsideration of Christopher Strachey's 1966 Scientific American article and in particular Strachey's CPL program to play checkers. Norvig did his usual wonderful job with the code. It's hard to find a CPL compiler these days, and has been since about 1980, so he wrote a CPL-to-Python translator, encoded and debugged Strachey's original program, and published the checkers program and a literate program essay that explains his work.

This is, of course, a great topic for a programming languages course. Norvig exemplifies the attitude I encourage in my students on Day 1: if you need a language processor, write one. It's just another program. I am not sure yet when I will bring this topic into my course; perhaps when we first talk in detail about interpreters, or perhaps when we talk about parsing and parser generators. (Norvig uses YAPPS, a Python parser generator, to convert a representation of CPL's grammar into a CPL parser written in Python.)

There are some days when I wish I had designed all of my course sessions to be 60 minutes instead of 75 minutes, so that we had more luft for opportunistic topics like this one. Or that I had designed a free day into the course every 2-3 weeks for the same purpose. Alas, the CS curriculum depends on this course to expose students to a number of important ideas and practices, and the learning curve for some of the material is non-trivial. I'll do my best to provide at least cursory coverage of Norvig's article and program. I hope that a few students will be drawn to his approach to the world -- the computer scientist's mindset.

If nothing else, working through his paper and code excites me, and that will leak over into the rest of my work.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 16, 2011 4:13 PM

The Real Technological Revolution in the Classroom Hasn't Happened Yet

Earlier this month, the New York Times ran a long article exploring the topic of technology in K-12 classrooms, in particular the lack of evidence that the mad rush to create the "classroom of the future" is having any effect on student learning. Standardized test scores are stagnant in most places, and in schools showing improvements, research has not been able to separate the effect of using technology from the effect of extra teacher training.

We should not be surprised. It is unlikely that simply using a new technology will have any effect on student learning. If we teach the same way and have students do the same things, then we should expect student learning to be about the same whether they are writing on paper, using a typewriter, or typing on a computer keyboard. There are certainly some very cool things one can do with, say, Keynote, and I think having those features available can augment a student's experience. But I doubt that those features can have a substantial effect on learning. Technology is like a small booster rocket for students who are already learning a lot and like training wheels for those who are not.

As I read that article, one fact screamed out at me. Computers are being used in classrooms everywhere for almost everything. Everything, that is, except the one thing that makes them useful at all: computer programming.

After reading the Times piece, Clive Thompson pulled the critical question out of it and asked, What can computers teach that textbooks and paper can't? Mark Guzdial has written thoughtfully on this topic in the past as well. Thompson offers two answers: teaching complexity and seeing patterns. (His third answer is a meta-answer: more effective communication between teacher and student.) We can improve both teaching complexity and seeing patterns by using the right software, but -- thankfully! -- Thompson points out that we can do even better if we teach kids even a little computer programming.

Writing a little code is a great vehicle for exploring a complex problem and trying to create and communicate understanding. Using or writing a program to visualize data and relationships among them is, too.

Of course, I don't have hard evidence for these claims, either. But it is human nature to want to tinker, to hack, to create. If I am going to go out on a limb without data, I'd rather do it with students creating tools that help them understand their world than with students mostly consuming media using canned tools. And I'm convinced that we can expand our students' minds more effectively by showing them how to program than by teaching most of what we teach in K-12. Programming can be tedious, like many learning tasks, and students need to learn how to work through tedium to deeper understanding. But programming offers rewards in a way and on a scale that, say, the odd problems at the end of a chapter in an algebra textbook can never do by themselves.

Mark Surman wrote a very nice blog entry this week, Mozilla as teacher, expressing a corporate vision for educating the web-using public that puts technology in context:

... people who make stuff on the internet are better creators and better online citizens if they know at least a little bit about the web's basic building blocks.

As I've written before, we do future teachers, journalists, artists, filmmakers, scientists, citizens, and curious kids a disservice if we do not teach them a little bit of code. Without this knowledge, they face unnecessary limits on their ability to write, create, and think. They deserve the ability to tinker, to hack, to trick out their digital worlds. The rest of us often benefit when they do, because some of the things they create make all of our lives better.

(And increasingly, the digital world intersects with the physical world in surprising ways!)

I will go a step further than Surman's claim. I think that people who create and who are engaged with the world they inhabit have an opportunity to be better citizens, period. They will be more able, more willing participants in civic life when they understand more clearly their connections to and dependence on the wider society around them. By giving them the tools they need to think more deeply and to create more broadly, education can enable them to participate in the economy of the world and improve all our lots.

I don't know if K-12 standardized test scores would get better if we taught more students programming, but I do think there would be benefits.

As I often am, I am drawn back to the vision Alan Kay has for education. We can use technology -- computers -- to teach different content in different ways, but ultimately it all comes back to new and better ways to think in a new medium. Until we make the choice to cross over into that new land, we can spend all the money we want on technology in the classroom and not change much at all.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

September 13, 2011 7:17 PM

Learning to Think the Same, Not Write the Same

We have begun to write code in my programming languages course. The last couple of sessions we have been writing higher-order procedures, and next time we begin a three-week unit learning to write recursive programs following the structural recursion patterns in my small pattern language Roundabout.

One of the challenges for students is to learn how to use the patterns without feeling a need to mimic my style perfectly. Some students, especially the better ones, will chafe if I try to make them write code exactly as I do. They are already good programmers in other styles, and they can become good functional programmers without aping me. Many will see the patterns as a restriction on how they think, though in fact the patterns are a source of great freedom. They do not force you to write code in a particular way; they give you tools for thinking about problems as you program.
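To make the idea concrete, here is a rough sketch of the structural recursion pattern the unit is built around: the shape of the code follows the shape of the data, with a base case for the empty list and a recursive case that combines the first item with the result of recursing on the rest. This is an illustrative Python rendering, not code from Roundabout itself, and the function names are my own.

```python
def list_sum(items):
    """Sum a list by following its structure."""
    if not items:                            # base case: empty list
        return 0
    return items[0] + list_sum(items[1:])    # recursive case: first + rest

def list_length(items):
    """Same structural shape: only the base value and combining step change."""
    if not items:
        return 0
    return 1 + list_length(items[1:])
```

The pattern gives students a place to start on any list-processing problem: decide what the empty list means, then decide how to fold the first element into the answer for the rest.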

Again, there is something for us to learn from our writing brethren. Consider a writing course like the one Roger Rosenblatt describes in his book, Unless It Moves the Human Heart: The Craft and Art of Writing, which I have referred to several times, most recently in The Summer Smalltalk Taught Me OOP. No student in Rosenblatt's course wants him to expect them to leave the course writing just like he does. They are in the course to learn elements of craft, to share and critique work, and to get advice from someone with extensive experience as a writer. Rosenblatt is aware of this, too:

Wordsworth quoted Coleridge as saying that every poet must create the taste by which he is relished. The same is true of teachers. I really don't want my students to write as I do, but I want them to think about writing as I do. In them I am consciously creating a certain taste for what I believe constitutes skillful and effective writing.

The course is about learning how to think about writing as much as it is about learning how to write itself. That's what a good pattern language can do for us: help us learn to think about a class of problems or a class of solutions.

I think this happens whether a teacher intends it consciously or not. Students learn how to think and do by observing their teachers thinking and doing. A programming course is usually better if the teacher designs the course deliberately, with careful consideration of the patterns to demonstrate and the order in which students experience them in problems and solutions.

In the end, I want my students to think about writing recursive programs as I do, because experience as both a programmer and as a teacher tells me that this way of thinking will help them become good functional programmers as soon as possible. But I do not want them to write exactly as I do; they need to find their own style, their own taste.

This is yet another example of the tremendous power teachers wield every time they set foot in a classroom. As a student, I was always most fond of the teachers who wielded it carefully and deliberately. So many of them live on in me in how I think. As a teacher, I have come to respect this power in my own hands and try to wield it respectfully with my students.

P.S. For what it's worth, Coleridge is one of my favorite poets!

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 12, 2011 12:14 PM

You Keep Using That Word....

Inigo Montoya, from The Princess Bride

Elisabeth Hendrickson recently posted a good short piece about Testing is a Whole Team Activity. She gives examples of some of the comments she hears frequently which indicate a misunderstanding about the relationship between coding and testing. My favorite example was the first:

"We're implementing stories up to the last minute, so we can never finish testing within the sprint."

Maybe I like this one so much because I hear it from students so often, especially on code that they find challenging and are having a hard time finishing.

"If we took the time to test our code, we would not get done on time."

"What evidence do you have that your code works for the example cases?"

"Well, none, really, but..."

"Then how do you know you are done?"

"Done"? You keep using that word. I do not think it means what you think it means.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

September 06, 2011 9:00 PM

Two Kinds of 50% Student

I returned graded Homework 1 today at the end of my Programming Languages class. I'm still getting to know faces and names in the group, so I paid attention to each person as he took his assignment from me. Occasionally, I glanced at the grade written atop the paper as I was handing it back. One grade caught my eye, and I made eye contact with the student. The grade was 10/20. Or it could have been 12/20, or 8/20, no matter. The point is the same: not an auspicious start to the semester.

Earlier, during class, I had noticed that this student was not taking notes. Sometimes, that's the sign of a really good student. He looked confident. Halfway through class, I gave a small exercise for students to work on, box-and-pointer diagrams for Scheme pairs and lists. Still nothing from the student. Perhaps his head was roiling in furious thought, but there was no visible evidence of thought. A calm sea.

When I saw the grade on his assignment, I leapt reflexively to a conclusion. This guy will be an early casualty in the course. He may drop early. He may limp along, scoring 50%, plus or minus a few points, all semester, and end up with a D or an F or, if he is truly fortunate, the C- he needs to count it toward his program of study.

That may seem like a harsh sort of profiling, but I've seen this situation many times, especially in this course. If you don't take Scheme seriously, if you don't take functional programming seriously, ideas and patterns and challenges can snowball quickly. I have a friend at another university who speaks of a colleague as telling his students on Day One of every course, "There are only two ways to take my course: seriously, and again." That's how I feel about most of my courses, but especially this one. We can have a lot of fun, when students are present and engaged, seeking mastery. If not, well, it can be a long semester.

Don't think that my reaction means I will shirk my duty and let this student fail. When students struggle early, I try to engage them one-on-one, to see if we can figure out what the impediment is and how we might get them over it. Such efforts succeed too rarely for my tastes. Sadly, the behavior I see in class and on early homework is usually a window into their approach to the course.

So, I was sad for the student. I'll do what I can, but I felt guilty, like a swami looking into his crystal ball and seeing clearly a future that already is.

After class, though, I was sitting in my office and realized that all is not lost. I remembered that there is a second type of student who can start the course with this performance profile. That sort of student is rarer, but common enough to give me hope.

I did not actually think of a second set of students. I thought of a particular student from a recent graduating class. He started this course slowly. The profile wasn't quite the same, because this student took notes in class. Still, he seemed generally disengaged, and his homework scores were mediocre at best. Then, he did poorly on Quiz 1.

But at that point I noticed a change in his demeanor. He looked alive in class. He asked questions after class. He gave evidence of having read the assigned readings and of starting sooner on programs. His next homework was much better. His Quiz 2 was a complete reversal of the first.

This is the second kind of student: poor performance on the early homeworks and quizzes is a wake-up call. The student realizes, "I can't do what I've always done". He sees the initial grades in the course not as a predictor of an immutable future but as a trigger to work differently, to work harder. The student changes his behavior, and as a result changes his results.

The particular student I remembered went on to excel in Programming Languages. He grasped many of the course's deeper concepts. Next he took the compilers course and was part of a strong team.

Sometimes, students are caught off guard by the foreign feel of the ideas in Scheme and functional programming, by the steep learning curve. The ones in the second group, the ones who usually succeed in the course after all, pay attention to early data and feed them back into their behavior.

So, there is hope for the student whose grade I glimpsed this afternoon. That strengthens my resolve to hang in there, offer help, and hope that I can reach him soon enough. I hope he is a Group 2 guy, or can be coaxed into crossing the border from Group 1 to Group 2.

The future is malleable, not frozen.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

September 04, 2011 7:28 PM

In Praise of Usefulness

In two recent entries [ 1 | 2 ], I mentioned that I had been recently reading Roger Rosenblatt's Unless It Moves the Human Heart: The Craft and Art of Writing. Many of you indulge me my fascination with writers talking about writing, and I often see parallels between what writers of code and writers of prose and poetry do. That Rosenblatt also connects writing to teaching, another significant theme of my blog, only makes this book more stimulating to me.

"Unless it moves the human heart" is the sort of thing writers say about their calling, but not something many programmers say. (The book title quotes Aussie poet A. D. Hope.) It is clearly at the heart of Rosenblatt's views of writing and teaching. But in his closing chapter, Rosenblatt includes a letter written to his students as a postscript on his course that speaks to a desire most programmers have for the lives' work: usefulness. To be great, he says, your writing must be useful to the world. The fiction writer's sense of utility may differ from the programmer's, but at one level the two share an honorable motive.

This paragraph grabbed me as advice as important for us programmers as it is for creative writers. (Which software people do you think of as you read it?)

How can you know what is useful to the world? The world will not tell you. The world will merely let you know what it wants, which changes from moment to moment, and is nearly always cockeyed. You cannot allow yourself to be directed by its tastes. When a writer wonders, "Will it sell?" he is lost, not because he is looking to make an extra buck or two, but rather because, by dint of asking the question in the first place, he has oriented himself to the expectations of others. The world is not a focus group. The world is an appetite waiting to be defined. The greatest love you can show it is to create what it needs, which means you must know that yourself.

What a brilliant sentence: The world is an appetite waiting to be defined. I don't think Ward Cunningham went around asking people if they needed wiki. He built it and gave it to them, and when they saw it, their appetite took form. It is indeed a great form of love to create what the world needs, whether the people know it yet or not.

(I imagine that at least a few of you were thinking of Steve Jobs and the vision that gave us the Mac, iTunes, and the iPad. I was too, though Ward has always been my hero when it comes to making useful things I had not anticipated.)

Rosenblatt tells his students that, to write great stories and poems and essays, they need to know the world well and deeply. This is also sound advice to programmers, especially those who want to start the next great company or revolutionize their current employers from the inside out. This is another good reason to read, study, and think broadly. To know the world outside of one's Ruby interpreter, outside the JavaScript spec and HTML5, one must live in it and think about it.

It seems fitting on this Labor Day weekend for us to think about all the people who make the world we live in and keep it running. Increasingly, those people are using -- and writing -- software to give us useful things.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 27, 2011 9:36 AM

Extravagant Ideas

One of the students in my just-started Programming Languages course recently mentioned that he has started a company, Glass Cannon Games, to write games for the Box and Android platforms. He is working out of my university's student incubator.

Last summer, I wrote a bit about entrepreneurship and a recent student of mine, Nick Cash, who has started Book Hatchery to "help authors publish their works digitally".

Going a bit further back, I mentioned an alumnus, Wade Arnold, winning a statewide award for his company, T8 Webware. Readers of this blog most recently encountered Wade in my entry on the power of intersections.

Over the last decade, Wade has taken a few big ideas and worked hard to make them real. That's what Nick and, presumably, Ian are doing, too.

Most entrepreneurs start with big thoughts. I try to encourage students to think big thoughts, to consider an entrepreneurial career. The more ideas they have, the more options they have in careers and in life. Going to work for a big company is the right path for some, but some want more and can do their own thing -- if only they have the courage to start.

This is a more important idea than just for starting start-ups. We can "think big and write small" even for the more ordinary programs we write. Sometimes we need a big idea to get us started writing code. Sometimes, we even need hubris. Every problem a novice faces can appear bigger than it is. Students who are able to think big often have more confidence. That is the confidence they need to start, and to persevere.

It is fun as a teacher to be able to encourage students to think big. As writer Roger Rosenblatt says,

One of the pleasures of teaching writing courses is that you can encourage extravagant thoughts like this in your students. These are the thoughts that will be concealed in plain and modest sentences when they write. But before that artistic reduction occurs, you want your students to think big and write small.

Many students come into our programming courses unsure, even a little afraid. Helping them free themselves to have extravagant ideas is one of the best things a teacher can do for them. Then they will be motivated to do the work they need to master syntax and idioms, patterns and styles.

A select few of them will go a step further and believe something even more audacious, that

... there's no purpose to writing programs unless you believe in significant ideas.

Those will be the students who start the Glass Cannons, the Book Hatcheries, and the T8s. We are all better off when they do.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 26, 2011 2:29 PM

Thoughts at the Start of the Semester

We have survived Week 1. This semester, I again get to teach Programming Languages, a course I love and about which I blog for a while every eighteen months or so.

I had thought I might blog as I prepped for the course, but between my knee and department duties, time was tight. I've also been slow to settle on new ideas for the course. In my blog ideas folder, I found notes for an entry debriefing the last offering of the course, from Spring 2010, and thought that might crystallize some ideas for me. Alas, the notes held nothing useful. They were just a reminder to write, which went unheeded during a May term teaching agile software development.

Yesterday I started reading a new book -- not a book related to my course, but Roger Rosenblatt's Unless It Moves the Human Heart: The Craft and Art of Writing. I love to read writers talking about writing, and this book has an even better premise: it is a writer talking about writing as he teaches writing to novices! So there is plenty of inspiration in it for me, even though it contains not a single line of Scheme or Ruby.

Rosenblatt recounts teaching a course called "Writing Everything". Most of the students in the course want to learn how to write fiction, especially short stories. Rosenblatt also has them write poems, in which they can concentrate on language and sounds, and essays, in which they can learn to develop ideas.

This is not the sort of course you find in CS departments. The first analogy that came to mind was a course in which students wrote, say, a process scheduler for an OS, a CRUD database app for a business, and an AI program. The breadth and diversity of apps might get the students to think about commonalities and differences in their programming practice. But a more parallel course would ask students to write a few procedural programs, object-oriented programs, and functional programs. Each programming style would let the student focus on different programming concepts and distinct elements of their craft.

I'd have a great time teaching such a "writing" course. Writing programs is fun and hard to learn, and we don't have many opportunities in a CS program to talk about the process of writing and revising code. Software engineering courses have a lot of their own content, and even courses on software design and development often deal more with new content than with programming practice. In most people's minds, there is not room for a new course like this one in the curriculum. In CS programs, we have theory and applications courses to teach. In Software Engineering programs, they seem far too serious about imitating other engineering disciplines to have room for something this soft. If only more schools would implement Richard Gabriel's idea of an MFA in software...

Despite all these impediments, I think a course in which students simply practiced programming in the large(r) and focused on their craft could be of great value to most CS grads.

I will let Rosenblatt's book inspire me and leak into my Programming Languages course where helpful. But I will keep our focus on the content and skills that our curriculum specifies for the course. By learning the functional style of programming and a lot about how programming languages work, students will get a chance to develop a few practical skills, which we hope will pay off in helping them to be better programmers all around, whether in Java, Python, Ruby, Scala, Clojure, or Ada.

One meta-message I hope to communicate both explicitly and implicitly is that programmers never stop learning, including their professor. Rosenblatt has the same goal in his writing course:

I never fail to say "we" to my students, because I do not want them to get the idea that you ever learn how to write, no matter how long you've done it.

Beyond that, perhaps the best I can do is let my students see that I am still mesmerized by the cool things we are learning. As Rosenblatt says,

Observing a teacher who is lost in the mystery of the material can be oddly seductive.

Once students are seduced, they will take care of their own motivation and their own learning. They won't be able to help themselves.

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 21, 2011 9:32 AM

Overcoming a Disconnect Between Knowing and Doing

Before reading interviews with Hemingway and Jobs, I read a report of Ansel Adams's last interview. Adams was one of America's greatest photographers of the 20th century, of course, and several of his experiences seem to me to apply to important issues in software development.

It turns out that both photography and software development share a disconnect between teaching and doing:

One of the problems is the teaching of photography. In England, I was told, there's an institute in which nobody can teach photography until they've had five years' experience in the field, until they've had to make a go of it professionally.

Would you recommend that?

I think that teachers should certainly have far more experience than most of the ones I know of have had. I think very few of them have had practical experience in the world. Maybe it's an impossibility. But most of the teachers work pretty much the same way. The students differ more from each other than the teachers do.

Academics often teach without having experience making a living from the material they teach. In computer science, that may make sense for topics like discrete structures. There is a bigger burden in most of the topics we teach, which are done in industry and which evolve at a more rapid rate. New CS profs usually come out of grad school on the cutting edge of their specialties, though not necessarily on top of all the trends in industry. Those who take research-oriented positions stay on the cutting edge of their areas, but the academic pressure is often to become narrower in focus and thus farther from contemporary practice. Those who take positions at teaching schools have to work really hard to stay on top of changes out in the world. Teaching a broad variety of courses makes it difficult to stay on top of everything.

Adams's comment does not address the long-term issue, but it takes a position on the beginning of careers. If every new faculty member had five years of professional programming experience, I dare say most undergrad CS courses would be different. Some of the changes might be tied too closely to those experiences (someone who spent five years at Rockwell Collins writing SRSs and coding in Ada would learn different things from someone who spent five years writing e-commerce sites in Rails), but I think there would usually be some common experiences that would improve their courses.

When I first read Adams's comment, I was thinking about how the practitioner would learn and hone elements of craft that the inexperienced teacher didn't know. But the most important thing that most practitioners would learn is humility. It's easy to lecture rhapsodically about some abstract approach to software development when you haven't felt the pain it causes, or faced the challenges left even when it succeeds. Humility can be a useful personal characteristic in a teacher. It helps us see the student's experience more accurately and to respond by changing how and what we teach.

Short of having five years of professional experience, teachers of programming and software development need to read and study all the time -- and not just theoretical tomes, but also the work of professional developers. Our industry is blessed with great books by accomplished developers and writers, such as Design Patterns and Refactoring. The web and practitioners' conferences such as StrangeLoop are an incredible resource, too. As Fogus tweeted recently, "We've reached an exciting time in our industry: college professors influenced by Steve Yegge are holding lectures."

Other passages in the Adams interview stood out to me. When he shared his intention to become a professional photographer, instead of a concert pianist:

Some friends said, "Oh, don't give up music. ... A camera cannot express the human soul." The only argument I had for that was that maybe the camera couldn't, but I might try through the camera.

What a wonderful response. Many programmers feel this way about their code. CS attracts a lot of music students, either during their undergrad studies or after they have spent a few years in the music world. I think this is one reason: they see another way to create beauty. Good news for them: their music experience often gives them an advantage over those who don't have it. Adams believed that studying music was valuable to him as a photographer:

How has music affected your life?

Well, in music you have this absolutely necessary discipline from the very beginning. And you are constructing various shapes and controlling values. Your notes have to be accurate or else there's no use playing. There's no casual approximation.

Discipline. Creation and control. Accuracy and precision. Being close isn't good enough. That sounds a lot like programming to me!

Posted by Eugene Wallingford | Permalink | Categories: Software Development, Teaching and Learning

August 18, 2011 3:52 PM

Some Thoughts on "Perlis Languages"

Alan Perlis

Fogus recently wrote a blog entry, Perlis Languages, that has traveled quickly through parts of software world. He bases his title on one of Alan Perlis's epigrams: "A language that doesn't affect the way you think about programming is not worth knowing." Long-time Knowing and Doing readers may remember this quote from my entry, Keeping Up Versus Settling Down. If you are a programmer, you should read Fogus's article, which lists a few languages he thinks might change how you think about programming.

There can be no single list of Perlis languages that works for everyone. Perlis says that a language is worth knowing if it affects how you think about programming. That depends on you: your background, your current stage of development as a programmer, and the kind of problems you work on every day. As an example, in the Java world, the rise of Scala and Clojure offered great opportunities for programmers to expand their thinking about programming. To Haskell and Scheme programmers, the opportunity was much smaller, perhaps non-existent.

The key to this epigram is that each programmer should be thinking about her knowledge and on the lookout for languages that can expand her mind. For most of us, there is plenty of room for growth. We tend to work in one or two styles on a daily basis. Languages that go deep in a different style or make a new idea their basic building block can change us.

That said, some languages will show up on lots of people's Perlis lists, if only because they are so different from the languages most people know and use on a daily basis. Lisp is one of the languages that used to be a near universal in this regard. It has a strangely small and consistent syntax, with symbols as first-class objects, multiple ways to work with functions, and macros for extending the language in a seamless way. With the appearance of Clojure, more and more people are being exposed to the wonders of Lisp, so perhaps Lisp won't be on everyone's Perlis list in 10 years. Fogus mentions Clojure only in passing; he has written one of the better early Clojure books, and he doesn't want to make a self-serving suggestion.

I won't offer my own Perlis list here. This blog often talks about languages that interest me, so readers have plenty of chances to hear my thoughts. I will add my thoughts about two of the languages Fogus mentions in his article.

Joy. *Love* it! It's one of my favorite little languages, and one that remains very different from what most programmers know. Scripting languages have put a lot of OOP and functional programming concepts before mainstream programmers across the board, but the idea of concatenative programming is still "out there" for most.

Fogus suggests the Forth programming language in this space. I cannot argue too strongly against this and have explained my own fascination with it in a previous entry. Forth is very cool. Still, I prefer Joy as a first step into the world of concatenative programming. It is clean, simple, and easy to learn. It is also easy to write a Joy interpreter in your favorite language, which I think is one of the best ways to grok a language in a deep way. As I mentioned in the Forth entry linked above, I spent a few months playing with Joy and writing an interpreter for it while on sabbatical a decade ago.
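The appeal of concatenative programming is easy to demonstrate: a program is a sequence of words, each of which transforms a stack, and composing programs is just concatenation. As a rough illustration (a toy evaluator in Python, not real Joy syntax, with word names of my own choosing):

```python
# A toy concatenative evaluator in the spirit of Joy (not real Joy syntax).
# A program is a list of words and literals; each word transforms the stack.

def evaluate(program, stack=None):
    """Run a list of words/literals against a stack; return the final stack."""
    stack = list(stack or [])
    for word in program:
        if callable(word):
            word(stack)          # words transform the stack in place
        else:
            stack.append(word)   # literals push themselves
    return stack

# A few primitive words, each consuming and producing stack values.
def dup(s):  s.append(s[-1])
def swap(s): s[-1], s[-2] = s[-2], s[-1]
def add(s):  s.append(s.pop() + s.pop())
def mul(s):  s.append(s.pop() * s.pop())

# "square" is just the concatenation of dup and mul -- no variables needed.
square = [dup, mul]

print(evaluate([3, 4, add] + square))   # (3 + 4) squared -> [49]
```

Notice that defining `square` required no parameter names at all; composition by concatenation is the whole point, and it is what makes the style feel so foreign at first.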

If you play with Joy and like it, you may find yourself wanting more than Joy offers. Then pick up Forth. It will not disappoint you.

APL. Fogus says, "I will be honest. I have never used APL and as a result find it impenetrable." Many things are incomprehensible before we try them. (A student or two will be telling me that Scheme is incomprehensible in the next few weeks...) I was fortunate to write a few programs in APL back in my undergrad programming languages course. I'm sure if I wrote a lot of APL it would become more understandable, but every time I return to the language, it is incomprehensible again to me for a while.

David Ungar told one of my favorite APL stories at OOPSLA 2003, which I mentioned in my report on his keynote address. The punchline of that story fits very well with the theme of so-called Perlis languages: "They could have done the same thing [I] did in APL -- but they didn't think of it!"

There are modern descendants of APL, but I still think there is something special about the language's unique character set. I miss the one-liners consisting of five or twenty Greek symbols, punctuation, and numbers, which accomplished unfathomable tasks such as implementing a set of accounting books.

I do second Fogus's reason for recommending APL despite never having programmed in it: creator Kenneth Iverson's classic text, A Programming Language. It is an unusually lucid account of the design of a programming language -- a new language, not an adaptation of a language we already know. Read it. I had the wonderful opportunity to meet Iverson when he spoke at Michigan State in the 1980s, as described in my entry on Iverson's passing.

... So, I encourage you to follow the spirit of Fogus's article, if not its letter. Find the languages that can change how you think, and learn them. I begin helping a new set of students on this path next week, when we begin our study of Scheme, functional programming, and the key ideas of programming languages and styles.

Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

August 13, 2011 2:47 PM

Trust People, Not Technology

Another of the interviews I've read recently was The Rolling Stone's 1994 interview with Steve Jobs, when he was still at NeXT. This interview starts slowly but gets better as it goes on. The best parts are about people, not about technology. Consider this, on the source of Jobs's optimism:

Do you still have as much faith in technology today as you did when you started out 20 years ago?

Oh, sure. It's not a faith in technology. It's faith in people.

Explain that.

Technology is nothing. What's important is that you have a faith in people, that they're basically good and smart, and if you give them tools, they'll do wonderful things with them. It's not the tools that you have faith in -- tools are just tools. They work, or they don't work. It's people you have faith in or not.

I think this is a basic attitude held by many CS teachers, about both technology and the most important set of people we work with: students. Give them tools, and they will do wonderful things with them. Expose them to ideas -- intellectual tools -- and they will do wonderful things. This mentality drives me forward in much the same way that Jobs's optimism about the people he wants to use Apple's tools drives him.

I also think that this is an essential attitude when you work as part of a software development team. You can have all the cool build, test, development, and debugging tools money can buy, but in the end you are trusting people, not technology.

Then, on people from a different angle:

Are you uncomfortable with your status as a celebrity in Silicon Valley?

I think of it as my well-known twin brother. It's not me. Because otherwise, you go crazy. You read some negative article some idiot writes about you -- you just can't take it too personally. But then that teaches you not to take the really great ones too personally either. People like symbols, and they write about symbols.

I don't have to deal with celebrity status in Silicon Valley or anywhere else. I do get to read reviews of my work, though. Every three years, the faculty of my department evaluate my performance as part of the dean's review of my work and his decision to consider me for another term. I went through my second such review last winter. And, of course, frequent readers here have seen my comments on student assessments, which we do at the end of each semester. I wrote about assessments of my spring Intelligent Systems course back in May. Despite my twice annual therapy sessions in the form of blog entries, I have a pretty good handle on these reviews, both intellectually and emotionally. Yet there is something visceral about reading even one negative comment that never quite goes away. Guys like Jobs probably do their best not to read newspaper articles and unsolicited third-party evals.

I'll have to try the twin brother gambit next semester. My favorite lesson from Jobs's answer, though, is the second part: While you learn to steel yourself against bad reviews, you learn not to take the really great ones too personally, either. Outliers is outliers. As Kipling said, all people should count with you, but none too much. The key in these evaluations is to gather information and use it to improve your performance. And that most always comes out of the middle of the curve. Treating raves and rants alike with equanimity keeps you humble and sane.

Ultimately, I think one's stance toward what others say comes back to the critical element in the first passage from Jobs: trust. If you trust people, then you can train yourself to accept reviews as a source of valuable information. If you don't, then the best you can do is ignore the feedback you receive; the worst is that you'll damage your psyche every time you read them. I'm fortunate to work in a department where I can trust. And, like Jobs, I have a surprising faith in my students' fairness and honesty. It took a few years to develop that trust and, once I did, teaching came to feel much safer.

Posted by Eugene Wallingford | Permalink | Categories: Managing and Leading, Personal, Software Development, Teaching and Learning

August 11, 2011 7:59 PM

Methods, Names, and Assumptions in Adding New Code to a Program

Since the mid-1990s, there has been a healthy conversation around refactoring, the restructuring of code to improve its internal structure without changing its external behavior. Thanks to Martin Fowler, we have a catalog of techniques for refactoring that help us restructure code safely and reliably. It is a wonderful tool for learners and practitioners alike.

When it comes to writing new code, we are not so lucky. Most of us learn to program by learning to write new code, yet we rarely learn techniques for adding code to a program in a way that is as safe, reliable, and effective as the refactorings we know and love.

You might think that adding code would be relatively simple, at least compared to restructuring a large, interconnected web of components. But how can we move with the same confidence when adding code as we do when we follow a meticulous refactoring recipe under the protection of good unit tests? Test-driven design is a help, but I have never felt like I had the same sort of support writing new code as when I refactor.

So I was quite happy a couple of months ago to run across J.B. Rainsberger's Adding Behavior with Confidence. Very, very nice! I only wish I had read it a couple of months ago when I first saw the link. Don't make the same mistake; read it now.

Rainsberger gives a four-step process that works well for him:

  1. Identify an assumption that the new behavior needs to break.
  2. Find the code that implements that assumption.
  3. Extract that code into a method whose name represents the generalisation you're about to make.
  4. Enhance the extracted method to include the generalisation.
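To make the four steps concrete, here is a hypothetical walk-through sketched in Python. The scenario and the names (`format_price`, `currency_symbol`) are my own invention, not Rainsberger's, and the `_v2`/`_v3` suffixes stand in for successive edits to the same functions:

```python
# Before: a price formatter that silently assumes US dollars.
def format_price(amount):
    return "$" + f"{amount:.2f}"       # Step 1: the assumption to break

# Steps 2-3: find the code that implements the assumption and extract it
# into a method whose name states the generalisation we intend to make.
def currency_symbol():
    return "$"

def format_price_v2(amount):
    return currency_symbol() + f"{amount:.2f}"

# Step 4: enhance the extracted method to include the generalisation.
def currency_symbol_v3(currency="USD"):
    return {"USD": "$", "EUR": "€", "GBP": "£"}[currency]

def format_price_v3(amount, currency="USD"):
    return currency_symbol_v3(currency) + f"{amount:.2f}"

print(format_price_v3(9.5))           # "$9.50" -- old behavior preserved
print(format_price_v3(9.5, "EUR"))    # "€9.50" -- new behavior added
```

The crucial property is that steps 2 and 3 change no behavior at all, so existing tests keep passing; only step 4 adds the new behavior, and by then it lives in exactly one well-named place.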

I was first drawn to the idea that a key step in adding new behavior is to make a new method, procedure, or function. This is one of the basic skills of computer programming. It is one of the earliest topics covered in many CS1 courses, and it should be taught sooner in many others.

Even still, most beginners seem to fear creating new methods. Even more advanced students will regress a bit when learning a new language, especially one that works differently than the languages they know well. A function call introduces a new point of failure: parameter passing. When worried about succeeding, students generally try to minimize the number of potential points of failure.

Notice, though, that Rainsberger starts not with a brand new method, empty except for the new code to be written. This technique asks us first to factor out existing code into a new method. This breaks the job of writing the new code into two, smaller steps: First refactor, relying on a well-known technique and the existing tests to provide safety. Second, add the new code. (These are Steps 3 and 4 in Rainsberger's technique.)

That isn't what really grabbed my attention first, however. The real beauty for me is that extracting a method forces us to give it a name. I think that naming gives us great power, and not just in programming. A lot of times, CS textbooks make a big deal about procedures as a form of abstraction, and they are. But that often feels so abstract... For programmers, especially beginners, we might better focus on the fact that procedures help us name things in our programs. Names, we get.

By naming a procedure that contains a few lines of code, we get to say what the code does. Even the best factored code that uses good variable names tends to say how something is done, not what it is doing. Creating and calling a method separates the two: the client states what the method does, and the server implements how it is done. This separation gives us new power: to refactor the code in other ways, certainly. Rainsberger reminds us that it also gives us power to add code more reliably!

"How can I add code to a program? Write a new function." This is an unsurprising, unhelpful answer most of the time, especially for novices who just see this as begging the question. "Okay, but what do I do then?" Rainsberger makes it a helpful answer, if a bit surprising. But he also puts it in a context with more support, what to do before we start writing the new code.

Creating and naming procedures was the strongest take-home point for me when I first read this article. As the ideas steeped in my mind for a few days, I began to have a greater appreciation for Rainsberger's focus on assumptions. Novice thinkers have trouble with assumptions. This is true whether they are learning to program, learning to read and analyze literature, or learning to understand and argue public policy issues. They have a hard time seeing assumptions, both the ones they make and the ones made by other writers. When the assumptions are pointed out, they are often unsure what to do with them, and are tempted to skip right over them. Assumptions are easy to ignore sometimes, because they are implicit and thus easy to lose track of when deep in an argument.

Learning to understand and reason about assumptions is another important step on the way to mature thinking. In CS courses, we often introduce the idea of preconditions and postconditions in Data Structures. (Students also see them in a discrete structures course, but there they tend to be presented as mathematical tools. Many students dismiss their value out of hand.) Writing pre- and postconditions for a method is a way to make assumptions in your program explicit. Unfortunately, most beginners don't yet see the value in writing them. They feel like an extra, unnecessary step in a process dominated by the uncertainty they feel about their code. Assuring them that these invariants help is usually like pushing a rock up a hill. Tomorrow, you get to do it again.
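One low-ceremony way to make such assumptions explicit is to state pre- and postconditions in a docstring and check them with assertions. A small sketch (the example and names are mine, not from any particular textbook):

```python
def binary_search(items, target):
    """Return an index of target in items, or -1 if absent.

    Precondition:  items is sorted in ascending order.
    Postcondition: result == -1 or items[result] == target.
    """
    # Check the precondition explicitly; the assumption is now visible
    # in the code, not just implicit in the algorithm's correctness.
    assert all(items[i] <= items[i + 1] for i in range(len(items) - 1)), \
        "precondition violated: items must be sorted"

    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            assert items[mid] == target   # postcondition holds on success
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 5, 8, 13], 8))   # 2
```

A student who passes an unsorted list to this function gets an immediate, named failure instead of a silently wrong answer, which is often the first moment the value of an explicit precondition becomes real to them.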

One thing I like about Rainsberger's article is that it puts assumptions into the context of a larger process aimed at helping us write code more safely. Mathematical reasoning about code does that, too, but again, students often see it as something apart from the task of programming. Rainsberger's approach is undeniably about code. This technique may encourage programmers to begin thinking about assumptions sooner, more often, and more seriously.

As I said, I haven't seen many articles or books talk about adding code to a program in quite this way. Back in January, "Uncle Bob" Martin wrote an article in the same spirit as this, called The Transformation Priority Premise. It offers a grander vision, a speculative framework for all additions to code. If you know Uncle Bob's teachings about TDD, this article will seem familiar; it fits quite nicely with the mentality he encourages when using tests to drive the growth of a program. While his article is more speculative, it seems worthy of more thought. It encourages the tiniest of steps as each new test provokes new code in our program. Unfortunately, it takes such small steps that I fear I'd have trouble getting my students, especially the novices, to give it a fair try. I have a hard enough time getting most students to grok the value of TDD, even my seniors!

I have similar concerns about Rainsberger's technique, but his pragmatism and unabashed focus on code gives me hope that it may be useful in teaching students how to add functionality to their programs.

Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

August 09, 2011 8:35 PM

(No) Degree Required?

I ran across an ad today for a part-time position to help promote a local private school. The ad ended, "Marketing degree required."

As surprising as it may seem for a university professor, my first thought was, "Why require a university degree? What you need is a person with interesting ideas and a plan."

I quickly realized that I was being a little rash. There is certainly value in holding a degree, for knowledge and practices learned while completing it. Maybe I was being unfair to marketing in particular? Having studied accounting as well as CS as an undergrad, I took a little marketing, too, and wasn't all that impressed. But that is surely the assessment of someone who is mostly ignorant about the deep content of that discipline. (I do, however, know enough to think that the biggest part of a marketing degree ought to be based in psychology, both of individuals and groups.)

Would I have reacted the same way if the ad had said, "Computer Science degree required"? Actually, I think so. Over the last few years, I have grown increasingly cynical about the almost unthinking use of university degrees as credentials.

As department head and as teacher, I frequently run into non-traditional students who are coming back from the tech industry to earn the CS degrees they don't have but need to advance in their careers, or even to switch jobs or companies in lateral moves. They are far more experienced than our new graduates, and often more ready for any job than our traditional students. A degree can, in fact, be useful for most of these students. They are missing the CS theory that lies at the foundation of the discipline, and often they lack the overarching perspective that ties it all together. An undergrad degree in CS can give them that and help them appreciate the knowledge they already have even more.

But should they really need an undergraduate degree to get a programming job after 10, 15, even 25 years in the field? In the end, it's about what you can do, not what you know. Some of these folks can do plenty -- and know plenty, to boot. They just don't have a degree.

I figure that, all other things being equal, our grads ought to have a big advantage when applying for a job in our field, whether the job requires a degree or not. Unless a job demands a skill we can't or won't give them, say, x years experience in a particular industry language, then our grads should be ready to tackle almost any job in the industry with a solid background and an ability to learn new languages, skills, and practices quickly and well.

If not, then we in the university are doing something wrong.

Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

July 29, 2011 8:56 PM

A Teacher Learns from Coaches -- Run to the Roar

For what is genius, I ask you,
but the capacity to be obsessed?

Gustav Detter, a 2009 Trinity College squash player

One thing about recovering from knee surgery: it gives you lots of time to read. In between bouts of pain, therapy, and sleep, I have been reading newspapers, magazines, and a few books lying around the house, including the rest of a Dave Barry book and the excellent but flawed law school memoir One L. Right now I am enjoying immensely re-reading Isaac Asimov's Foundation trilogy. (Psychohistory -- what a great computational challenge!)

The most unusual book I've read thus far has been Run to the Roar. This is not great literature, but it is an enjoyable read. It draws its power to attract readers from perhaps the most remarkable sports streak in history at any level: the men's squash team at Trinity College has won thirteen consecutive national titles and has not lost a single match during the streak. The thread that ties the book together as a story is the tale of the eleventh national championship match, in 2009 against Princeton University. This match is considered by many to be the greatest collegiate squash match of all time, won 5-4 by Trinity in see-saw fashion. Six of the nine matches went the full five games, with three of those requiring comebacks from 0-2 down, and only one match ended in straights. Two great teams, eighteen great players, battling to the last point. In the end, Trinity survived as much as it won.

I didn't know much about squash before the match, though I used to play a little racquetball. Fortunately, the book told me enough about the game and its history that I could begin to appreciate the drama of the match and the impressive nature of the streak. Unbelievable story, really.

But the book is also about its co-author, Trinity coach Paul Assaiante, both his life and his coaching methods. The book's subtitle is "Coaching to Overcome Fear", which captures in one phrase Assaiante's approach as well as any could. He works to help his players play in the moment, with no thoughts of external concerns weighing on their minds; enjoying the game they love and the privilege they have to test their preparation and efforts on the court in battle.

Assaiante views himself as a teacher, which makes what he says and the way he says it interesting to the teacher in me. There were many passages that struck a chord with me, whether as an "I have that pattern" moment or as an idea that I might try in my classroom. In the end, I saved two passages for more thought.

The first is the passage that leads off this entry. Assaiante attributed it to Steven Millhauser. I had never heard the quote, so I googled it. I learned that Millhauser is an award-winning author. Most hits took me pages with the quote as the start of a longer passage:

For what is genius, I ask you, but the capacity to be obsessed? Every normal child has that capacity; we have all been geniuses, you and I; but sooner or later it is beaten out of us, the glory fades, and by the age of seven most of us are nothing but wretched little adults.

What a marvelous pair of sentences. It's easy to see why the sentiment means so much to Assaiante. His players are obsessive in their training and their playing. Their coach is obsessive in his preparation and his coaching. (The subtitle of one of the better on-line articles about Trinity's streak begins "Led by an obsessive coach...".)

My favorite story of his coaching obsessiveness was how he strives to make each practice different -- different lengths, different drills, different times of day, different level of intensity, and so on. He talks of spending hours to get each practice ready for the team, ready to achieve a specific goal in the course of a season aimed at the national championship.

Indeed, Assaiante is seemingly obsessive in all parts of his life; the book relates how he conquered several personal and coaching challenges through prolonged, intense efforts to